The Product Experience

How to use Premortems to predict failure - Anu Jagga-Narang (AT&T)

34 min episode · 2 min read

AI-Generated Summary

Key Takeaways

  • Premortem Framework: Conduct sessions at two key moments: once discovery is complete but before development starts, or midway through a project when signals indicate dysfunction. Sessions take 30 minutes at most and focus on a specific launch with clear objectives rather than a vague backlog. The exercise works by having the team imagine the product failed six months post-launch and write detailed stories explaining why.
  • Risk Classification System: Use Shreyas Doshi's framework to categorize risks into three types—tigers (clear threats that will kill the launch), paper tigers (risks under control but need team reassurance), and elephants (unspoken issues everyone feels but nobody discusses). Teams vote once per issue, with two votes allowed for paper tigers and elephants, preventing vote inflation and maintaining focus.
  • Psychological Safety Requirements: Run sessions with core product managers and engineers using anonymous tools like Idea Boards rather than named platforms like Coda. Leaders must create permission for teams to surface weak assumptions and hidden dependencies that are socially difficult to voice. Including stakeholders can shift focus from tactical to strategic, potentially reducing team openness about real concerns.
  • Action Planning Process: After voting, assign owners and next steps to the most upvoted critical issues. Track mitigation plans over time and revisit premortems periodically as projects evolve. One team discovered performance issues were the top concern despite appearing fine in testing—addressing this immediately prevented launch failure. Premortems without follow-through actions become meaningless noise.
  • AI Impact Considerations: AI increases the need for premortems rather than reducing it, enabling teams to ship features for wrong problems faster with false confidence. AI can summarize themes, identify duplicate risks, and track mitigation plans, but cannot create psychological safety, surface unspoken elephants, make trade-off decisions, or own consequences of failed launches—keeping premortems fundamentally human judgment exercises.

What It Covers

Anu Jagga-Narang, product leader at AT&T, explains how to conduct premortems—hypothetical disaster prevention exercises created by psychologist Gary Klein. The technique uses prospective hindsight to combat optimism bias, helping teams identify risks before launch by imagining failure has occurred and working backwards to understand why.


Notable Moment

A team running a premortem before a major launch discovered performance issues topped everyone's concerns, despite testing environments showing acceptable results. Nobody had voiced this worry until the anonymous exercise created safety to speak up. Immediately prioritizing performance fixes turned the situation around, preventing what would have been a failed launch.
