Post-mortem structure, 5 Whys root cause analysis, blameless retrospective patterns, and improvement recommendation formats. Use when facilitating a cycle retrospective, conducting a post-mortem, or generating structured improvement recommendations.
A retrospective that produces a list of "what went wrong" without actionable improvements is a venting session, not a process tool. This skill covers the formats for structuring retrospectives so they reliably produce root causes and committed improvements — not just observations.
Systems and processes fail more often than individuals. A blameless retrospective:
- Asks "what allowed this to happen?" instead of "who caused this?"
- Treats failures as system or process gaps until proven otherwise
- Makes it safe to surface mistakes early, so problems are reported instead of hidden
This is not about avoiding accountability — it's about producing improvements that prevent recurrence.
A symptom is what was observed. A root cause is why it happened. Treating symptoms produces local fixes. Treating root causes prevents recurrence.
Symptom: "The feature shipped 2 weeks late"
Root cause: "Acceptance criteria were incomplete at the start of the cycle,
requiring two rounds of rework during implementation"
"Improve communication" is not an improvement. "Before each cycle, Product and Engineering will co-review the feature spec within 2 days of PRD approval" is an improvement.
Each improvement must have:
- A specific, observable change to a process, artifact, or checklist
- A named owner
- A target cycle or date for adoption
- A link to the root cause it addresses
# Post-Mortem: [Cycle / Feature Name]
**Cycle dates:** [Start] → [End]
**Facilitator:** [Name]
**Participants:** [Names / roles]
**Date of retrospective:** [Date]
---
## Cycle Summary
**What we set out to do:**
[Restate the PRD goal and success criteria]
**What we shipped:**
[What actually launched — including any descopes]
**Outcome vs. target:**
| Metric | Target | Actual | Delta |
|--------|--------|--------|-------|
| [Primary metric] | [Target] | [Actual] | [+/-] |
| [Secondary metric] | [Target] | [Actual] | [+/-] |
**Timeline:**
| Planned | Actual | Variance |
|---------|--------|---------|
| [Milestone] | [Actual date] | [+/- days] |
---
## What Went Well
[Things that worked and should be preserved or amplified]
| What | Why It Worked | Should We Formalize It? |
|------|--------------|------------------------|
| [Observation] | [Explanation] | [Yes / No / Maybe] |
---
## What Went Wrong
[Problems encountered — factual, not evaluative]
| What | When It Occurred | Impact |
|------|-----------------|--------|
| [Problem] | [Phase / date] | [Consequence] |
---
## Root Cause Analysis
[For each significant problem, identify the root cause]
### Problem: [Name]
**5 Whys:**
1. Why did [Problem] occur? → [Answer]
2. Why did [Answer from 1]? → [Answer]
3. Why did [Answer from 2]? → [Answer]
4. Why did [Answer from 3]? → [Answer]
5. Why did [Answer from 4]? → [Root cause]
**Root cause:** [Final answer from the Why chain]
**Category:** Process / Tooling / Spec quality / Communication / External
---
## Improvement Recommendations
| # | Improvement | Root Cause Addressed | Owner | Target Cycle |
|---|------------|---------------------|-------|-------------|
| 1 | [Specific, actionable change] | [Which root cause] | [Name] | [When] |
---
## Recurring Patterns (across cycles)
[Issues that have appeared in previous post-mortems and haven't been resolved]
| Pattern | First Observed | Occurrences | Status |
|---------|---------------|-------------|--------|
| [Pattern] | [Cycle name] | [X times] | [Open / In Progress / Resolved] |
Start with the symptom (the observable problem). Ask "why did this happen?" five times, going deeper each time. Stop when you reach a root cause that is:
- Within the team's control to change
- A process or system condition, not an individual's mistake
- Specific enough that a concrete improvement can address it
Problem: The checkout feature missed the launch date by 10 days.
Why 1: Why did it miss the launch date?
→ The backend API wasn't ready when frontend integration was scheduled to start.
Why 2: Why wasn't the backend API ready on time?
→ The API contract changed mid-cycle, requiring a significant rework.
Why 3: Why did the API contract change mid-cycle?
→ The feature spec didn't define the API contract — it was left to engineering to figure out during implementation.
Why 4: Why didn't the feature spec define the API contract?
→ The spec review process doesn't require API contracts to be defined before development begins.
Why 5: Why doesn't the spec review process require API contracts?
→ There's no checklist for spec completeness review; completeness is assessed informally.
Root cause: Spec completeness review has no formal checklist, allowing API contracts
to be undefined at the start of development cycles.
Improvement: Add API contract definition as a required gate in the spec review
checklist, with Product and Engineering sign-off required before
the cycle begins.
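The Root Cause Analysis section is structured enough to generate programmatically. As an illustration only (the names `WhyChain` and `to_markdown` are invented here, not part of the template), a minimal sketch that renders a Why chain into that section format:

```python
from dataclasses import dataclass, field

@dataclass
class WhyChain:
    """One problem's 5 Whys chain; field names are illustrative, not a fixed schema."""
    problem: str
    whys: list[str] = field(default_factory=list)  # answer to each successive "why?"
    category: str = "Process"

    @property
    def root_cause(self) -> str:
        # By convention, the last answer in the chain is the root cause.
        return self.whys[-1] if self.whys else ""

    def to_markdown(self) -> str:
        lines = [f"### Problem: {self.problem}", "", "**5 Whys:**"]
        for i, answer in enumerate(self.whys, 1):
            lines.append(f"{i}. Why? → {answer}")
        lines += ["", f"**Root cause:** {self.root_cause}",
                  f"**Category:** {self.category}"]
        return "\n".join(lines)

chain = WhyChain(
    problem="Checkout feature missed launch by 10 days",
    whys=["Backend API wasn't ready for frontend integration",
          "API contract changed mid-cycle",
          "Spec didn't define the API contract",
          "Spec review doesn't require API contracts",
          "No formal checklist for spec completeness review"],
    category="Spec quality",
)
print(chain.to_markdown())
```

The sketch deliberately flattens the "Why did [previous answer]?" phrasing; when facilitating, keep the full chained question so each answer is tested against the one before it.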
A good improvement recommendation is:
- **Specific:** it names the change, not a direction ("add X to the checklist", not "improve communication")
- **Owned:** one named person is responsible for adoption
- **Time-bound:** it has a target cycle
- **Traceable:** it maps to an identified root cause
When multiple improvements are identified, prioritize by:
| Priority | Frequency × Impact |
|---|---|
| High | Recurring issue that affects cycle outcomes significantly |
| Medium | Recurring issue with moderate impact OR first-time issue with high impact |
| Low | First-time issue with low impact |
Commit to high-priority improvements. Note medium-priority improvements. Document low-priority improvements but don't commit resources.
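The matrix above reads as a simple decision rule. A minimal sketch, assuming a boolean `recurring` flag and a coarse impact label (both names invented for illustration):

```python
def improvement_priority(recurring: bool, impact: str) -> str:
    """Map the Frequency x Impact matrix to a priority label.

    `impact` is one of "high", "moderate", "low" — an assumed scale,
    not prescribed by the template; adapt to your own.
    """
    if recurring and impact == "high":
        return "High"      # commit resources this cycle
    if recurring or impact == "high":
        return "Medium"    # note it; schedule if capacity allows
    return "Low"           # document only, don't commit resources

print(improvement_priority(True, "high"))   # High
print(improvement_priority(False, "high"))  # Medium
```

The matrix leaves recurring low-impact issues unspecified; this sketch rates them Medium on the theory that recurrence alone warrants attention.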
Review the last 3–5 post-mortems before each new one to identify:
- Recurring root causes that earlier improvements did not resolve
- Committed improvements that were never adopted
- Root cause categories trending up or down across cycles
## Cross-Cycle Trend Summary
### Root Cause Categories (last 5 cycles)
| Category | Frequency | Trend |
|----------|-----------|-------|
| Spec quality / completeness | 4/5 cycles | Persistent |
| External dependency delays | 3/5 cycles | Persistent |
| Scope expansion mid-cycle | 2/5 cycles | Improving |
| Test coverage gaps | 1/5 cycles | Resolved |
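A frequency table like the one above can be tallied directly from per-cycle category lists. A minimal sketch with hypothetical data chosen to reproduce the counts shown:

```python
from collections import Counter

# Hypothetical root-cause categories recorded in each of the last 5 post-mortems.
cycles = [
    ["Spec quality", "External dependency"],
    ["Spec quality"],
    ["Spec quality", "Scope expansion"],
    ["External dependency", "Spec quality"],
    ["External dependency", "Scope expansion"],
]

# Count each category once per cycle (set() dedupes within a cycle).
counts = Counter(cat for cycle in cycles for cat in set(cycle))
for category, n in counts.most_common():
    print(f"{category}: {n}/{len(cycles)} cycles")
```

Trend labels (Persistent / Improving / Resolved) still need a human judgment call; the tally only surfaces which categories to judge.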
### Improvement Adoption Rate
| Improvement | Committed In | Status |
|-------------|-------------|--------|
| [Improvement] | [Cycle] | Adopted / Partially adopted / Not adopted |