Systematically investigates causal relationships to identify true root causes rather than correlations or symptoms. Distinguishes genuine causation from spurious associations, tests competing explanations, and designs interventions addressing underlying drivers. Use when investigating why something happened, debugging systems, analyzing failures, evaluating policy impacts, or when user mentions root cause, causal chain, confounding, spurious correlation, or asks "why did this really happen?"
Key concepts: root cause (fundamental issue), proximate cause (immediate trigger), confounding variable (third factor creating spurious correlation), counterfactual ("what would have happened without X?"), and causal mechanism (pathway through which X affects Y).
Quick Example:
# Effect: Website conversion rate dropped 30%
## Competing Hypotheses:
1. New checkout UI is confusing (proximate)
2. Payment processor latency increased (proximate)
3. We changed to a cheaper payment processor that's slower (root cause)
## Test:
- Rollback UI (no change) → UI not cause
- Check payment logs (confirm latency) → latency is cause
- Trace to processor change → processor change is root cause
## Counterfactual:
"If we hadn't switched processors, would conversion have dropped?"
→ No, conversion was fine with old processor
## Conclusion:
Root cause = processor switch
Mechanism = slow checkout → user abandonment
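The counterfactual check above amounts to comparing conversion under the old and new processors. A minimal sketch (the daily conversion rates are hypothetical, invented for illustration):

```python
# Hypothetical daily conversion rates before and after the processor switch.
old_processor = [0.052, 0.049, 0.051, 0.050]   # baseline period
new_processor = [0.036, 0.034, 0.037, 0.035]   # after the switch

baseline = sum(old_processor) / len(old_processor)
current = sum(new_processor) / len(new_processor)
drop = (baseline - current) / baseline

print(f"Conversion dropped {drop:.0%} after the switch")  # → Conversion dropped 30% after the switch
```

If rolling back the UI leaves these numbers unchanged while reverting the processor restores the baseline, the processor switch survives the counterfactual test.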
Copy this checklist and track your progress:
Root Cause Analysis Progress:
- [ ] Step 1: Define the effect
- [ ] Step 2: Generate hypotheses
- [ ] Step 3: Build causal model
- [ ] Step 4: Test causality
- [ ] Step 5: Document and validate
Step 1: Define the effect
- Describe the effect/outcome (what happened; be specific)
- Quantify if possible (magnitude, frequency)
- Establish the timeline (when did it start? is it ongoing?)
- Determine the baseline (what's normal, what changed?)
- Identify stakeholders (who's impacted, who needs answers?)

Key questions: What exactly are we explaining? Is this a one-time event or a recurring pattern? How do we measure it objectively?
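A well-defined effect can be captured as a small structure before hypothesizing. A minimal sketch; the field names and example values are illustrative, not part of the skill's specification:

```python
from dataclasses import dataclass

@dataclass
class EffectDefinition:
    description: str        # what happened, specifically
    magnitude: float        # quantified change relative to baseline
    baseline: str           # what "normal" looks like
    started: str            # when it began (hypothetical date below)
    ongoing: bool
    stakeholders: list

effect = EffectDefinition(
    description="Checkout conversion rate dropped",
    magnitude=-0.30,
    baseline="~5% conversion over the prior quarter",
    started="2024-03-01",
    ongoing=True,
    stakeholders=["payments team", "product"],
)
```

Forcing every field to be filled in surfaces gaps (no baseline, no start date) before the investigation begins.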
Step 2: Generate hypotheses
- List proximate causes (immediate triggers/symptoms)
- Identify potential root causes (underlying factors)
- Consider confounders (third factors creating spurious associations)
- Challenge assumptions (what if the initial theory is wrong?)

Techniques: 5 Whys (ask "why" repeatedly), Fishbone diagram (categorize causes), timeline analysis (what changed before the effect?), differential diagnosis (what else explains the symptoms?). For simple investigations, use resources/template.md. For complex problems, study resources/methodology.md for advanced techniques.
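The 5 Whys technique can be recorded as a chain where each answer becomes the target of the next "why?". A sketch using the quick example's scenario (the answers here are illustrative):

```python
# A minimal 5 Whys chain: each answer is interrogated by the next question.
five_whys = [
    ("Why did conversion drop?", "Checkout is slower"),
    ("Why is checkout slower?", "Payment API latency rose"),
    ("Why did latency rise?", "We switched payment processors"),
    ("Why did we switch?", "Cost-cutting chose a cheaper vendor"),
    ("Why wasn't latency caught?", "Vendor review had no latency check"),
]

# The last answer is the root-cause candidate; earlier ones are proximate.
root_cause_candidate = five_whys[-1][1]
print(root_cause_candidate)  # → Vendor review had no latency check
```

Note that the chain ends at a process gap, not the first technical symptom; stopping early yields a proximate cause.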
Step 3: Build causal model
- Draw causal chains (A → B → C → Effect)
- Identify necessary vs. sufficient causes
- Map confounding relationships (what influences both cause and effect?)
- Note temporal sequence (a cause must precede its effect; necessary for causation)
- Specify mechanisms (HOW X causes Y)

Model elements: direct cause (X → Y), indirect cause (X → Z → Y), confounder (Z → X and Z → Y), mediating variable (X → M → Y), moderating variable (the X → Y effect depends on M).
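A causal model can be sketched as a directed graph of parent → children edges, which makes confounder candidates mechanically checkable. A minimal sketch (the graph and the `traffic_mix` confounder are hypothetical, and the check is deliberately simplistic; it only finds direct common causes):

```python
# Edges of a tiny causal graph: parent -> list of children.
edges = {
    "processor_switch": ["latency"],
    "latency": ["abandonment"],
    "abandonment": ["conversion_drop"],
    "traffic_mix": ["latency", "conversion_drop"],  # hypothetical confounder
}

def confounders(graph, cause, effect):
    """Nodes with direct edges into both cause and effect."""
    return [node for node, children in graph.items()
            if cause in children and effect in children]

print(confounders(edges, "latency", "conversion_drop"))  # → ['traffic_mix']
```

Any node the check returns must be ruled out (or controlled for) before claiming latency causes the conversion drop.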
Step 4: Test causality
- Check temporal sequence (did the cause precede the effect?)
- Assess strength of association (how strong is the correlation?)
- Look for dose-response (does more cause produce more effect?)
- Test the counterfactual (what happens if the cause is absent or removed?)
- Identify the mechanism (explain HOW the cause produces the effect)
- Check consistency (does the relationship hold across contexts?)
- Rule out confounders

Evidence hierarchy: RCT (gold standard) > natural experiment > longitudinal > case-control > cross-sectional > expert opinion. Use the Bradford Hill criteria (9 factors: strength, consistency, specificity, temporality, dose-response, plausibility, coherence, experiment, analogy).
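The dose-response check can be made concrete: if latency causes abandonment, abandonment should rise as latency rises. A minimal sketch (the observations are invented for illustration, and a real check would use a statistical trend test rather than strict monotonicity):

```python
# Hypothetical (latency_ms, abandonment_rate) observations.
observations = [(200, 0.10), (400, 0.18), (800, 0.31), (1600, 0.52)]

def monotone_dose_response(pairs):
    """True if the effect strictly increases with the dose."""
    rates = [rate for _, rate in sorted(pairs)]
    return all(a < b for a, b in zip(rates, rates[1:]))

print(monotone_dose_response(observations))  # → True
```

A monotone pattern supports (but does not prove) causation; it still has to be combined with temporality, mechanism, and confounder checks.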
Step 5: Document and validate
Create causal-inference-root-cause.md containing: effect description and quantification, competing hypotheses, the causal model (chains, confounders, mechanisms), evidence assessment, root cause(s) with confidence level, recommended tests/interventions, and limitations/alternatives. Validate using resources/evaluators/rubric_causal_inference_root_cause.json: verify that the analysis distinguished proximate from root causes, controlled for confounders, explained the mechanism, assessed evidence systematically, noted uncertainty, recommended interventions, and acknowledged alternatives. Minimum standard: score ≥ 3.5.
Adapt the workflow to context:
- Incident investigation (engineering)
- Metric changes (product/business)
- Policy evaluation (research/public policy)
- Debugging (software)
Do:
- Distinguish proximate causes from root causes
- Establish temporal sequence before claiming causation
- Specify the mechanism (HOW X causes Y)
- Test counterfactuals and rule out confounders
- State confidence levels and limitations

Don't:
- Treat correlation as causation
- Stop at the first plausible (proximate) cause
- Ignore confounding variables
- Skip the counterfactual test

Common Pitfalls:
- Post hoc reasoning (assuming X caused Y just because X preceded Y)
- Confirmation bias toward the initial hypothesis
- Mistaking a mediating variable for the root cause
Resources:
- resources/template.md - Structured framework for root cause analysis
- resources/methodology.md - Advanced techniques (DAGs, confounding control, Bradford Hill criteria)
- resources/evaluators/rubric_causal_inference_root_cause.json - Validation rubric (Step 5)
- causal-inference-root-cause.md - Output document produced in Step 5