Probabilistic belief revision framework: Systematically update your confidence in hypotheses as new evidence arrives, weighting both prior beliefs and new information by their respective reliability.
Core insight: Neither trust new data blindly nor cling to old beliefs stubbornly. Optimal belief revision lies between these extremes, proportional to the precision of the evidence.
Source: Bayes' theorem, formulated by Thomas Bayes (18th century) and later popularized as a tool for practical cognition by the LessWrong rationality community
Updated Belief = (Prior Belief × Likelihood of Evidence) / Total Probability of Evidence
In plain language:
Step 1 - Establish Prior: What's your current confidence level in hypothesis H?
Step 2 - Observe Evidence: New information arrives
Step 3 - Evaluate Likelihood: How expected is this evidence under different hypotheses?
Step 4 - Calculate Posterior: Update belief proportionally
Step 5 - Iterate: Your posterior becomes the new prior for the next evidence cycle
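The five steps above can be sketched in code. This is a minimal illustration, not a canonical implementation; the hypothesis, prior, and likelihood numbers are made up for the example.

```python
# Sketch of the five-step cycle: sequential Bayesian updating where each
# posterior becomes the prior for the next piece of evidence.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) given a prior and the likelihoods under H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    return p_e_given_h * prior / p_e

# Step 1: establish a prior
belief = 0.30  # 30% confidence in hypothesis H

# Steps 2-5: as each piece of evidence arrives, compute the posterior,
# then carry it forward as the new prior.
evidence_stream = [
    (0.8, 0.3),  # evidence much more expected if H is true
    (0.6, 0.5),  # weakly supportive evidence
]
for p_e_h, p_e_not_h in evidence_stream:
    belief = bayes_update(belief, p_e_h, p_e_not_h)

print(round(belief, 3))
```

Note that the order of evidence does not matter mathematically; the same two updates in either order yield the same final belief.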
Explicit Bayesian updating when:
Implicit Bayesian thinking for:
Make your existing belief explicit:
Avoid vague language like "probably" or "might"—use numbers.
What new information are you receiving?
Be specific about what you're observing, not your interpretation yet.
Ask: "How much more expected is this evidence if my hypothesis is true versus false?"
Strong evidence: Very expected under hypothesis, very unexpected otherwise
Weak evidence: Only somewhat more expected under hypothesis
Misleading evidence: More expected if hypothesis is false
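The three categories above can be read as likelihood ratios: P(E|H) / P(E|not-H). A sketch, with the cutoff for "strong" chosen purely for illustration:

```python
# Classifying evidence strength by its likelihood ratio.
# The threshold of 3 for "strong" is an illustrative assumption, not canonical.

def evidence_strength(p_e_given_h: float, p_e_given_not_h: float) -> str:
    ratio = p_e_given_h / p_e_given_not_h
    if ratio >= 3:
        return "strong"        # much more expected under H than under not-H
    if ratio > 1:
        return "weak"          # only somewhat more expected under H
    if ratio == 1:
        return "uninformative" # equally expected either way
    return "misleading"        # more expected if H is false

print(evidence_strength(0.9, 0.1))  # strong
print(evidence_strength(0.6, 0.5))  # weak
print(evidence_strength(0.2, 0.4))  # misleading
```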
Rough heuristic (for intuitive updating without calculation):
Direction: Move toward the hypothesis the evidence supports.
Magnitude: Stronger evidence = larger update, but never jump to 100% certainty from a single data point.
Periodically check: Are your 70% predictions actually coming true 70% of the time?
This feedback loop improves your Bayesian instincts over time.
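The calibration check above is straightforward to automate: group past predictions by stated confidence and compare against the observed hit rate. A sketch with made-up prediction records:

```python
# Calibration check: for each stated confidence level, compare the
# fraction of predictions that came true against the level itself.
from collections import defaultdict

# (stated confidence, whether the prediction came true) -- illustrative data
predictions = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {hit_rate:.0%}")
```

A well-calibrated forecaster's observed rates track the stated levels; here the 90% bucket coming true only 75% of the time would signal overconfidence.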
Not all evidence is equally reliable:
High precision (trust more, update more):
Low precision (trust less, update less):
Adjust your update magnitude by source reliability.
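One way to sketch "adjust your update magnitude by source reliability" is to shrink the likelihood ratio toward 1 (no information) before updating. The geometric shrink rule below is an illustrative assumption, not a standard formula:

```python
# Discounting evidence by source reliability before a Bayesian update.
# reliability in [0, 1]: 1 = fully trusted source, 0 = ignore entirely.
# The shrink rule (LR ** reliability) is an illustrative assumption.

def reliability_adjusted_update(prior: float, likelihood_ratio: float,
                                reliability: float) -> float:
    effective_lr = likelihood_ratio ** reliability        # shrink LR toward 1
    posterior_odds = (prior / (1 - prior)) * effective_lr
    return posterior_odds / (1 + posterior_odds)

print(round(reliability_adjusted_update(0.5, 4.0, 1.0), 3))  # trusted source: full update
print(round(reliability_adjusted_update(0.5, 4.0, 0.5), 3))  # dubious source: smaller update
```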
Ignoring base rates: Jumping to conclusions from evidence without considering prior probability. (Example: Rare disease with 99% accurate test can still be unlikely even with positive result if disease is 0.1% prevalent.)
Confirmation bias: Selectively updating on evidence that supports existing beliefs, dismissing contradictory evidence. True Bayesian updating is symmetric—update in both directions.
Overconfidence: Updating too much from single data points, reaching near-certainty prematurely. Keep some probability mass on alternative hypotheses.
Binary thinking: Treating beliefs as true/false rather than probabilistic confidence levels. Everything is a percentage.
Neglecting alternative hypotheses: Updating P(H) without considering P(not-H) and other competing explanations for the evidence.
Anchoring on priors: Refusing to update sufficiently when strong evidence arrives. Your prior shouldn't be sacred—it's just your starting point.
Conservation of expected evidence: If you think evidence might arrive, you should already have an opinion on what different results would mean. Don't wait for the data to decide how to interpret it.
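The base-rate pitfall above can be checked numerically. A minimal sketch, assuming "99% accurate" means both 99% sensitivity and 99% specificity, with 0.1% prevalence:

```python
# Checking the rare-disease example: 0.1% prevalence and an assumed test
# with 99% sensitivity and 99% specificity (1% false-positive rate).

prevalence = 0.001          # P(disease)
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.01  # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # roughly 9%: still unlikely despite a positive test
```

The false positives from the large healthy population swamp the true positives from the tiny sick population, which is exactly what ignoring the base rate misses.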
Medical diagnosis: Doctor starts with base rate of disease prevalence (prior), updates based on symptoms (evidence), orders tests (more evidence), revises diagnosis (posterior).
Software debugging: Initial hypothesis about bug location (prior), run test revealing error location (evidence), update belief about root cause (posterior), test fix (more evidence).
Hiring decisions: Initial assessment from resume (prior), performance on technical interview (evidence), reference checks (more evidence), final confidence in candidate fit (posterior).
Investment analysis: Market belief about company value (prior), earnings report (evidence), updated stock price reflecting collective Bayesian updating (posterior).
Product development: Hypothesis about user need (prior), user research findings (evidence), A/B test results (more evidence), conviction to ship feature (posterior).
Pre-commit to belief changes: Before seeing evidence, state explicitly: "If I see X, I'll update my belief from Y% to Z%." This prevents post-hoc rationalization.
Calibration training: Make many probabilistic predictions, track accuracy, adjust to hit calibration targets. This builds Bayesian intuition.
Likelihood ratio shortcut: Instead of full Bayes calculation, ask "How many times more likely is this evidence under hypothesis A vs. B?" Adjust beliefs proportionally.
Update incrementally: Don't wait for "decisive" evidence. Small updates from weak evidence compound over time into strong beliefs when consistent.
Separate observation from interpretation: Clearly distinguish what you observed (evidence) from what it means (likelihood). Mix these up and you double-count the same information.
Quantify uncertainty explicitly: Force yourself to use numbers. "Probably" is too vague—is it 60% or 90%? Numbers enable proper updating.
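The likelihood-ratio shortcut and incremental updating from the techniques above combine naturally in odds form: repeated weak evidence compounds multiplicatively. A sketch with an assumed per-observation likelihood ratio of 1.5:

```python
# Incremental updating in odds form: each piece of weak but consistent
# evidence multiplies the odds by its likelihood ratio, and the small
# updates compound into a strong belief.

def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

belief = 0.5                              # start maximally uncertain
for _ in range(8):                        # eight pieces of weak evidence
    belief = prob(odds(belief) * 1.5)     # each worth a likelihood ratio of 1.5

print(round(belief, 3))
```

Eight updates at a ratio of 1.5 each are equivalent to a single update at a ratio of 1.5^8 (about 25.6), which is why consistent weak evidence ends up decisive.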
Full Bayesian calculation:
P(H|E) = P(E|H) × P(H) / P(E)
Where:
P(H|E) = Posterior (updated belief in hypothesis given evidence)
P(E|H) = Likelihood (probability of evidence if hypothesis true)
P(H) = Prior (initial belief in hypothesis)
P(E) = Total probability of evidence
For practical use, focus on likelihood ratios:
P(H|E) / P(~H|E) = [P(E|H) / P(E|~H)] × [P(H) / P(~H)]
Posterior Odds = Likelihood Ratio × Prior Odds
This "odds form" is often more intuitive for incremental updating.
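The two formulations above give identical answers, which a quick sketch can verify with arbitrary illustrative numbers:

```python
# Verifying that the odds form and the full Bayes formula agree.

def posterior_full(prior: float, p_e_h: float, p_e_not_h: float) -> float:
    """P(H|E) = P(E|H) * P(H) / P(E), expanding P(E) over H and not-H."""
    p_e = p_e_h * prior + p_e_not_h * (1 - prior)
    return p_e_h * prior / p_e

def posterior_odds_form(prior: float, p_e_h: float, p_e_not_h: float) -> float:
    """Posterior Odds = Likelihood Ratio x Prior Odds, converted back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = (p_e_h / p_e_not_h) * prior_odds
    return posterior_odds / (1 + posterior_odds)

full = posterior_full(0.2, 0.7, 0.1)
via_odds = posterior_odds_form(0.2, 0.7, 0.1)
print(round(full, 6), round(via_odds, 6))  # both ~0.636364
```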