Analyze flow metrics to identify delivery bottlenecks and improve throughput. Use when the user says "flow metrics", "why is delivery slow", "cycle time analysis", "WIP analysis", "throughput review", "what's our lead time", "DORA metrics", "where is work getting stuck", "Kanban metrics", or wants to diagnose delivery performance using data - even if they don't explicitly say "flow metrics review". Based on "Making Work Visible" by Dominica DeGrandis (WIP limits, flow blockers, time thieves) and "Accelerate" by Forsgren, Humble & Kim (DORA metrics, delivery performance indicators).
Based on "Making Work Visible" by Dominica DeGrandis and "Accelerate" by Forsgren, Humble & Kim. The core insight from both books: you cannot improve what you cannot see. Flow metrics make the invisible queue visible. DeGrandis identifies five time thieves that destroy flow (too much WIP, unknown dependencies, unplanned work, conflicting priorities, neglected work). Accelerate adds four DORA metrics that distinguish high-performing from low-performing delivery organizations. Together, these give a complete picture of where work is slowing down and why.
Ask the user what metrics data they have access to. Common sources: Jira or Linear cycle-time reports, GitHub/GitLab deployment history, CI/CD pipeline logs, or a manually tracked spreadsheet.
Minimum viable dataset for this review: for each work item, the date it entered "in progress" and the date it reached "done", plus the current WIP count and team size.
If the user has no data, recommend: "Start by instrumenting your board. At minimum, track when each item moves to 'in progress' and 'done'. After 2-3 sprints you will have enough data for a meaningful review."
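As a minimal sketch of what that instrumented dataset can look like: one row per item with a start and a done timestamp. The column names and item IDs here are illustrative, not a standard.

```python
import csv
import io
from datetime import datetime

# Illustrative minimal dataset: one row per work item, two timestamps.
# Column names ("id", "started", "done") are assumptions, not a convention.
raw = """id,started,done
FLOW-1,2024-05-01,2024-05-06
FLOW-2,2024-05-02,2024-05-04
FLOW-3,2024-05-03,2024-05-12
"""

cycle_times = {}
for row in csv.DictReader(io.StringIO(raw)):
    started = datetime.fromisoformat(row["started"])
    done = datetime.fromisoformat(row["done"])
    cycle_times[row["id"]] = (done - started).days

print(cycle_times)  # {'FLOW-1': 5, 'FLOW-2': 2, 'FLOW-3': 9}
```

Two timestamps per item are enough to compute cycle time, throughput, and WIP age; everything else in this review is derived from them.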
If deployment/delivery frequency data is available, map to DORA tiers:
| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment frequency | Multiple/day | 1/week-1/day | 1/month-1/week | < 1/month |
| Lead time for changes | < 1 hour | 1 day-1 week | 1 week-1 month | > 1 month |
| Change failure rate | 0-5% | 5-10% | 10-15% | > 15% |
| MTTR (mean time to restore) | < 1 hour | < 1 day | 1 day-1 week | > 1 week |
State clearly which tier the team is in and what the next tier requires.
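One way to operationalize the table is a small classifier. This is a sketch: the thresholds mirror the table above (official State of DevOps cut-offs vary by report year), the function name is made up, and taking a team's tier as its weakest metric's tier is a deliberately conservative assumption.

```python
# Hedged sketch: map raw measurements onto the DORA tiers in the table above.
# Tier index: 3 = Elite, 2 = High, 1 = Medium, 0 = Low.

def dora_tier(deploys_per_month: float, lead_time_days: float,
              change_failure_rate: float, mttr_days: float) -> str:
    def freq_tier(d):
        if d >= 30: return 3   # multiple per day
        if d >= 4: return 2    # ~weekly to daily
        if d >= 1: return 1    # ~monthly to weekly
        return 0

    def lead_tier(days):
        if days <= 1 / 24: return 3   # < 1 hour
        if days <= 7: return 2
        if days <= 30: return 1
        return 0

    def cfr_tier(rate):
        if rate <= 0.05: return 3
        if rate <= 0.10: return 2
        if rate <= 0.15: return 1
        return 0

    def mttr_tier(days):
        if days <= 1 / 24: return 3   # < 1 hour
        if days <= 1: return 2
        if days <= 7: return 1
        return 0

    # Conservative aggregation: the team is only as good as its weakest metric.
    worst = min(freq_tier(deploys_per_month), lead_tier(lead_time_days),
                cfr_tier(change_failure_rate), mttr_tier(mttr_days))
    return ["Low", "Medium", "High", "Elite"][worst]

print(dora_tier(deploys_per_month=8, lead_time_days=3,
                change_failure_rate=0.08, mttr_days=0.5))  # -> High
```

Using `min()` means a single weak metric pulls the whole tier down, which matches how bottlenecks behave: a team that deploys hourly but takes a week to restore service is not operating at Elite.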
For each thief, ask the user targeted questions and assess its presence:
Thief 1: Too much WIP - Ask: "How many items are in progress right now, and how does that compare to team size?" Present if WIP exceeds roughly 1.5x team size.
Thief 2: Unknown dependencies - Ask: "How often do items stall waiting on another team, a shared component, or a specific person?" Present if blocked time is a recurring theme.
Thief 3: Unplanned work - Ask: "What share of last sprint went to work that was not planned at sprint start?" Present if interruptions regularly displace planned items.
Thief 4: Conflicting priorities - Ask: "If three team members named the single most important item, would they agree?" Present if answers diverge or everything is "priority one".
Thief 5: Neglected work - Ask: "Which in-progress items are older than twice your average cycle time?" Present if such items exist and no one owns them.
Using the data the user provides:
Cycle time: elapsed time from "in progress" to "done". Report p50 and p85, not just the average; a long tail hides inside an average.
Throughput: items completed per sprint or week, and whether the trend is up, stable, or down.
WIP age distribution: how long each currently in-progress item has been open.
Flow efficiency (if data allows): active work time divided by total elapsed time, as a percentage.
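A sketch of these calculations in plain Python, using a nearest-rank percentile (the sample numbers are illustrative):

```python
import math
import statistics

def percentile(values, p):
    """Nearest-rank percentile; adequate for small board datasets."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(0, min(k, len(ordered) - 1))]

cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 14, 21]  # days, illustrative
print("p50:", percentile(cycle_times, 50), "days")  # typical item -> 5
print("p85:", percentile(cycle_times, 85), "days")  # de facto SLA -> 14

completed_per_sprint = [9, 11, 8, 10]
print("throughput:", statistics.mean(completed_per_sprint), "items/sprint")

# Flow efficiency = active work time / total elapsed time. Requires
# per-item active-time tracking, which many boards do not record.
active_days, elapsed_days = 3.0, 8.0
print("flow efficiency:", round(100 * active_days / elapsed_days, 1), "%")
```

Note the p50/p85 gap in the sample data (5 vs 14 days): that spread is exactly the signal interpreted in the common-mistakes section below.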
FLOW METRICS REVIEW - [your team/program]
Period: [date range]
Data source: [Jira / Linear / manual]
DORA PERFORMANCE TIER: [Elite / High / Medium / Low]
- Deployment frequency: [value] ([tier])
- Lead time: [value] ([tier])
- Change failure rate: [value] ([tier])
- MTTR: [value] ([tier])
FLOW HEALTH SUMMARY
Cycle time (p50): [N] days
Cycle time (p85): [N] days <- your de facto SLA
Throughput: [N] items/sprint (trend: [up/stable/down])
Current WIP: [N] items (team size: [N], recommended max: [1.5x team size])
Flow efficiency: [N]% [if calculable]
TIME THIEF ANALYSIS
[Thief]: [Present / Not detected] - [1-sentence evidence]
[Thief]: [Present / Not detected] - [1-sentence evidence]
[Thief]: [Present / Not detected] - [1-sentence evidence]
[Thief]: [Present / Not detected] - [1-sentence evidence]
[Thief]: [Present / Not detected] - [1-sentence evidence]
TOP 3 BOTTLENECKS
1. [Bottleneck]: [evidence] - Recommended action: [specific action]
2. [Bottleneck]: [evidence] - Recommended action: [specific action]
3. [Bottleneck]: [evidence] - Recommended action: [specific action]
RECOMMENDED EXPERIMENTS (pick one to try next sprint)
- [Specific change]: expected impact on [metric]
Do not give a list of 10 improvements. Pick one experiment based on the biggest bottleneck identified. Format:
EXPERIMENT: [name]
Hypothesis: If we [action], then [metric] will improve by [amount] because [reasoning].
How to measure: Track [metric] before and after for [N] sprints.
Owner: [name]
Start: [date]
Review: [date]
1. Reporting metrics without interpreting them Bad: "Cycle time is 8.3 days." Good: "Cycle time is 8.3 days (p85: 14 days). The spread between p50 and p85 is wide, which signals that a subset of items regularly gets stuck. Likely cause: unknown dependencies."
2. Focusing on velocity instead of flow Bad: "Our velocity dropped from 42 to 38 story points this sprint." Good: Velocity measures effort, not flow. Cycle time and throughput measure whether work is actually moving. Use flow metrics to diagnose, velocity to plan.
3. Treating high WIP as a productivity signal Bad: "The team has 18 items in progress - they are really busy." Good: 18 items in progress for a team of 8 is 2.25x the recommended limit. High WIP is the cause of slow cycle time, not a sign of productivity.
4. Ignoring aging work Bad: Status report that only shows items completed this sprint. Good: Flag any item that has been in-progress for more than 2x the average cycle time. These are hidden blockers.
5. Recommending many improvements at once Bad: 8-point action plan to improve flow. Good: One focused experiment. Changing too many variables at once makes it impossible to know what worked.