Reference Class Forecasting

Anchors predictions in historical reality by identifying a class of similar past events and using their statistical frequency as a baseline (the outside view) before analyzing case-specific details. Use when starting a forecast, establishing base rates, testing "this time is different" claims, or when the user mentions reference classes, outside view, base rates, or starting a new prediction.
80 stars
Updated Apr 15, 2026
Table of Contents
What would you like to do?
Core Workflows
1. Find My Base Rate - Identify reference class and get statistical baseline
Guided process to select correct reference class
Quick install: npx skills add lyndonkl/claude
Search strategies for finding historical frequencies
Validation that you have the right anchor
2. Test "This Time Is Different" - Challenge uniqueness bias
Reversal test for uniqueness bias
Similarity matching framework
Burden of proof calculator
3. Calculate Funnel Base Rates - For multi-stage processes without a single base rate
When no single base rate exists
Sequential probability modeling
Product rule for compound events
4. Validate My Reference Class - Ensure you chose the right comparison set
Too broad vs too narrow test
Homogeneity check
Sample size evaluation
5. Learn the Framework - Deep dive into the methodology
6. Exit - Return to main forecasting workflow
1. Find My Base Rate

Let's establish your statistical baseline.
Step 1: What are you forecasting?

Tell me the specific event or outcome you're predicting.
"Will this startup succeed?"
"Will this bill pass Congress?"
"Will this project launch on time?"
Step 2: Identify the Reference Class

I'll help you identify what bucket this belongs to.
Too broad: "All companies" → meaningless
Just right: "Seed-stage B2B SaaS startups in fintech"
Too narrow: "Companies founded by people named Steve in 2024" → no data
What type of entity is this? (company, bill, project, person, etc.)
What stage/size/category?
What industry/domain?
What time period is relevant?
I'll work with you to refine this until we have a specific, searchable class.
Step 3: Search for Historical Data

I'll help you find the base rate using:
Web search for published statistics
Academic studies on success rates
Government/industry reports
Proxy metrics if direct data unavailable
Example search queries:
"historical success rate of [reference class]"
"[reference class] failure statistics"
"[reference class] survival rate"
"what percentage of [reference class]"
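The refinement questions and query templates above can be sketched as a small helper. This is an illustrative sketch only: the `ReferenceClass` type, its attribute names, and the example values are assumptions, not part of the skill.

```python
from dataclasses import dataclass


@dataclass
class ReferenceClass:
    """Illustrative container for the refinement answers from Step 2."""
    entity: str   # company, bill, project, ...
    stage: str    # seed-stage, introduced, ...
    domain: str   # fintech, B2B SaaS, ...

    def label(self) -> str:
        # Collapse the answers into a specific, searchable class name.
        return f"{self.stage} {self.domain} {self.entity}"


# Query templates from the list above.
TEMPLATES = [
    "historical success rate of {rc}",
    "{rc} failure statistics",
    "{rc} survival rate",
    "what percentage of {rc}",
]


def queries(rc: ReferenceClass) -> list[str]:
    """Expand every template with the reference-class label."""
    return [t.format(rc=rc.label()) for t in TEMPLATES]


rc = ReferenceClass(entity="startups", stage="seed-stage", domain="B2B SaaS fintech")
for q in queries(rc):
    print(q)
```

Keeping the class as structured fields (rather than a free-text string) makes it easy to widen or narrow one dimension at a time while refining.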
Step 4: Set Your Anchor

Once we find the base rate, that becomes your starting probability.
Treat this base rate as your starting point. Adjust only when you have specific,
evidence-based reasons from your "inside view" analysis.
Default anchors if no data found:
Novel innovation: 10-20% (most innovations fail)
Established industry: 50% (uncertain)
Regulated/proven process: 70-80% (systems work)
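The fallback anchors above can be encoded as a simple lookup. The category keys and the midpoint-of-range convention below are illustrative assumptions, not part of the skill.

```python
# Default anchor ranges from the list above, as (low, high) probabilities.
DEFAULT_ANCHORS = {
    "novel_innovation": (0.10, 0.20),       # most innovations fail
    "established_industry": (0.50, 0.50),   # maximum uncertainty
    "regulated_process": (0.70, 0.80),      # proven systems mostly work
}


def default_anchor(category: str) -> float:
    """Return the midpoint of the range as a point-estimate anchor."""
    low, high = DEFAULT_ANCHORS[category]
    return (low + high) / 2


print(round(default_anchor("novel_innovation"), 2))
```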
Next: Return to menu or proceed to inside view analysis.
2. Test "This Time Is Different"

Challenge uniqueness bias.
When someone (including yourself) believes "this case is special," we need to stress-test that belief.
The Uniqueness Audit

Question 1: Similarity Matching
What are 5 historical cases that are most similar to this one?
For each, what was the outcome?
How is your case materially different from these?
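Question 1 reduces to a quick tally: what fraction of the matched cases had the outcome you're predicting? A minimal sketch, with purely hypothetical cases and outcomes:

```python
# Hypothetical matched cases; the names and outcomes are made up for illustration.
similar_cases = {
    "Case A": "failed",
    "Case B": "succeeded",
    "Case C": "failed",
    "Case D": "failed",
    "Case E": "succeeded",
}

successes = sum(1 for outcome in similar_cases.values() if outcome == "succeeded")
baseline = successes / len(similar_cases)
print(f"{successes}/{len(similar_cases)} succeeded -> rough baseline {baseline:.0%}")
```

Note that five cases is far below the 20-30 minimum for a meaningful base rate, so treat this tally as a sanity check against the published statistic, not a replacement for it.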
Question 2: The Reversal Test
If someone claimed a different case was "unique" for the same reasons you're claiming, would you accept it?
Are you applying special pleading?
Question 3: Burden of Proof
The base rate says [X]%. You claim it should be [Y]%.
Calculate the gap: |Y - X|
Required evidence strength:
Gap < 10%: Minimal evidence needed
Gap 10-30%: Moderate evidence needed (2-3 specific factors)
Gap > 30%: Extraordinary evidence needed (multiple independent strong signals)
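The burden-of-proof tiers above can be sketched as a small function. The exact boundary handling (which tier a gap of exactly 10 or 30 points falls into) is an assumption; the tiers themselves come from the list above.

```python
def burden_of_proof(base_rate: float, claimed: float) -> str:
    """Map the gap between base rate and claim (in percentage points)
    to the evidence tier required to justify the deviation."""
    gap = abs(claimed - base_rate)
    if gap < 10:
        return "minimal evidence"
    elif gap <= 30:
        return "moderate evidence (2-3 specific factors)"
    else:
        return "extraordinary evidence (multiple independent strong signals)"


# Base rate 20%, claim 55%: a 35-point gap demands extraordinary evidence.
print(burden_of_proof(20, 55))
```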
Output
Whether "this time is different" is justified
How much you can reasonably adjust from the base rate
What evidence would be needed to justify larger moves
3. Calculate Funnel Base Rates

For multi-stage processes without a single base rate.
When to Use
No direct statistic exists (e.g., "success rate of X")
Event requires multiple sequential steps
Each stage has independent probabilities
The Funnel Method

Example: "Will Bill X become law?"
No direct data on "Bill X success rate," but we can model the funnel:
Stage 1: Bills introduced → Bills that reach committee
P(committee | introduced) = ?
Stage 2: Bills in committee → Bills that reach floor vote
P(floor | committee) = ?
Stage 3: Bills voted on → Bills that pass
P(pass | floor) = ?
P(law) = P(committee | introduced) × P(floor | committee) × P(pass | floor)
Process
Decompose the event into sequential stages
Search for statistics on each stage
Multiply probabilities using the product rule
Validate the model (are stages truly independent?)
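The process above can be sketched as a running product. The stage rates below are placeholders standing in for whatever your search turns up, not real legislative statistics.

```python
# Funnel model for "Will Bill X become law?" Each rate must be conditional
# on the previous stage for the product rule to be valid.
stages = {
    "reaches committee (given introduced)": 0.30,
    "reaches floor vote (given committee)": 0.50,
    "passes (given floor vote)": 0.60,
}

p = 1.0
for name, rate in stages.items():
    p *= rate
    print(f"after '{name}': cumulative P = {p:.3f}")
```

Decomposing like this also shows where the forecast is most sensitive: a small error in an early, low-probability stage moves the final estimate more than the same error in a late stage.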
Common Funnels
Startup success: Seed → Series A → Profitability → Exit
Drug approval: Discovery → Trials → FDA → Market
Project delivery: Planning → Development → Testing → Launch
4. Validate My Reference Class

Ensure you chose the right comparison set.
The Three Tests
Test 1: Homogeneity - Are the members of this class actually similar enough?
Is there high variance in outcomes?
Should you subdivide further?
Example: "Tech startups" is too broad (consumer vs B2B vs hardware are very different). Subdivide.
Test 2: Sample Size - Do you have enough historical cases?
Minimum: 20-30 cases for meaningful statistics
If N < 20: Widen the class or acknowledge high uncertainty
Test 3: Staleness - Have conditions changed since the historical data?
Are there structural differences (regulation, technology, market)?
Time decay: Data from >10 years ago may be stale
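One way to see why small samples force the "acknowledge high uncertainty" rule is to put a confidence interval around the observed frequency. The sketch below uses the standard Wilson score interval, which is my choice for illustration, not something the skill prescribes.

```python
import math


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


# Same 30% observed rate, very different certainty:
for n in (10, 100):
    lo, hi = wilson_interval(int(0.3 * n), n)
    print(f"N={n}: observed 30%, 95% CI ({lo:.0%}, {hi:.0%})")
```

With N=10 the interval spans roughly 11% to 60%, so the "30% base rate" is barely an anchor at all; at N=100 it tightens to roughly 22% to 40%.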
Validation Checklist

Output: Confidence level in your reference class (High/Medium/Low)
5. Learn the Framework

Deep dive into the methodology.
Resource Files
Statistical thinking vs narrative thinking
Why the outside view beats experts
Kahneman's planning fallacy research
When outside view fails
Systematic method for choosing comparison sets
Balancing specificity vs data availability
Similarity metrics and matching
Edge cases and judgment calls
Base rate neglect examples
"This time is different" bias
Overfitting to small samples
Ignoring regression to the mean
Availability bias in class selection
Quick Reference
The Outside View Commandments
Base Rate First: Establish statistical baseline BEFORE analyzing specifics
Assume Average: Treat case as typical until proven otherwise
Burden of Proof: Large deviations from base rate require strong evidence
Class Precision: Reference class should be specific but data-rich
No Narratives: Resist compelling stories; trust frequencies
One-Sentence Summary
Find what usually happens to things like this, start there, and only move with evidence.
Integration with Other Skills
Before: Use estimation-fermi if you need to calculate base rate from components
After: Use bayesian-reasoning-calibration to update from base rate with new evidence
Companion: Use scout-mindset-bias-check to validate you're not cherry-picking the reference class
Ready to start? Choose a number from the menu above.
Reference Class Forecasting | Skills Pool