/em:stress-test — Business Assumption Stress Testing
Command: /em:stress-test <assumption>
Take any business assumption and break it before the market does. Revenue projections. Market size. Competitive moat. Hiring velocity. Customer retention.
Founders are optimists by nature. That's a feature — you need optimism to start something from nothing. But it becomes a liability when the same optimism that got you started inflates the assumptions in your business model.
The most dangerous assumptions are the ones everyone agrees on.
When the whole team believes the $50M market is real, when every investor call goes well so you assume the round will close, when your model shows $2M ARR by December and nobody questions it — that's when you're most exposed.
Stress testing isn't pessimism. It's calibration.
State it explicitly. Not "our market is large" but "the total addressable market for B2B spend management software in German SMEs is €2.3B."
The more specific the assumption, the more testable it is. Vague assumptions are unfalsifiable — and therefore useless.
Common assumption types:
For every assumption, actively search for evidence that it's wrong.
Ask:
Sources of counter-evidence:
The goal isn't to find a reason to stop — it's to surface what you don't know.
Most plans model the base case and the upside. Stress testing means modeling the downside explicitly.
For quantitative assumptions (revenue, growth, conversion):
| Scenario | Assumption Value | Probability | Impact |
|---|---|---|---|
| Base case | [Original value] | ? | |
| Bear case | -30% | ? | |
| Stress case | -50% | ? | |
| Catastrophic | -80% | ? | |
Key questions at each level: Does the business survive? Does the plan still make sense?
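The table above can be sketched as a few lines of code. This is a minimal model under assumed placeholder numbers — the ARR target, burn rate, cash balance, and 18-month survival threshold are all hypothetical, not benchmarks:

```python
# Minimal downside model for a quantitative assumption (here: ARR).
# All figures are illustrative placeholders, not real benchmarks.

BASE_ARR = 2_000_000        # the plan's assumed ARR
MONTHLY_BURN = 150_000      # fixed monthly cost base
CASH = 1_500_000            # cash on hand

SCENARIOS = {"base": 0.0, "bear": -0.30, "stress": -0.50, "catastrophic": -0.80}

def runway_months(arr, monthly_burn, cash):
    """Months of runway given annual revenue, monthly burn, and cash."""
    net_monthly = arr / 12 - monthly_burn
    if net_monthly >= 0:
        return float("inf")  # cash-flow positive: survives indefinitely
    return cash / -net_monthly

for name, haircut in SCENARIOS.items():
    arr = BASE_ARR * (1 + haircut)
    months = runway_months(arr, MONTHLY_BURN, CASH)
    survives = months >= 18  # illustrative survival threshold
    print(f"{name:>12}: ARR ${arr:>10,.0f} -> runway {months:5.1f} mo, survives: {survives}")
```

The point isn't the arithmetic — it's forcing an explicit answer to "does the business survive?" at each haircut level instead of only modeling the base case.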
For qualitative assumptions (moat, product-market fit, team capability):
Some assumptions matter more than others. Sensitivity analysis answers: if this one assumption changes, how much does the outcome change?
Example:
High sensitivity = the assumption is a key lever. Wrong = big problem.
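One way to make sensitivity concrete: bump each assumption by 10% while holding the others fixed, and compare the swing in the outcome. The sketch below uses a hypothetical growth model — starting MRR, growth rate, and horizon are assumed placeholders:

```python
# One-at-a-time sensitivity analysis: bump each input by +10%,
# hold the others fixed, and measure the swing in the outcome.
# Starting MRR, growth rate, and horizon are illustrative.

def exit_arr(start_mrr, monthly_growth, months=18):
    """Annualized run rate after `months` of compounding growth."""
    return start_mrr * (1 + monthly_growth) ** months * 12

base = {"start_mrr": 50_000, "monthly_growth": 0.10}
base_out = exit_arr(**base)

for key in base:
    bumped = dict(base, **{key: base[key] * 1.10})
    swing = exit_arr(**bumped) / base_out - 1
    print(f"{key:>14}: +10% input -> {swing:+.1%} outcome")
```

A linear input (starting MRR) moves the outcome exactly 1:1; a compounded input (growth rate) moves it more than 1:1. The inputs whose swing exceeds the nudge are your key levers.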
For every high-risk assumption, there should be a hedge:
Common failures:
Stress questions:
Test: Build the revenue model from historical win rates, not hoped-for ones.
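A sketch of that test, with hypothetical pipeline numbers (opportunity count, win rates, and ACV are all assumed for illustration): project revenue from the win rate you've actually measured, side by side with the one the plan assumes.

```python
# Revenue projection from historical vs. hoped-for win rates.
# Pipeline size, win rates, and ACV are illustrative placeholders.

qualified_opps = 120          # opportunities expected this year
acv = 25_000                  # average contract value
historical_win_rate = 0.18    # measured: closed-won / all closed
plan_win_rate = 0.30          # the rate the plan quietly assumes

for label, rate in [("historical", historical_win_rate), ("plan", plan_win_rate)]:
    revenue = qualified_opps * rate * acv
    print(f"{label:>10}: {rate:.0%} win rate -> ${revenue:,.0f}")
```

The gap between the two lines is the size of the assumption you're betting on.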
Common failures:
Stress questions:
Test: Build a list of target accounts. Count them. Multiply by ACV. That's your SAM.
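That bottom-up count fits in a few lines. The segments, account counts, and ACV below are hypothetical — substitute accounts you can actually name:

```python
# Bottom-up SAM: count real target accounts, multiply by the ACV
# you charge today. All figures are illustrative placeholders.

target_accounts = {
    "mid-market": 1_800,   # accounts you could name and realistically sell to
    "enterprise": 250,
}
acv = 30_000               # average contract value you charge today

total_accounts = sum(target_accounts.values())
sam = total_accounts * acv
print(f"SAM: {total_accounts:,} accounts x ${acv:,} ACV = ${sam:,}")
```

Compare that figure to the top-down TAM in the deck; the gap between them is the part of the "market" you can't currently reach.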
Common failures:
Stress questions:
Test: Ask churned customers why they left and whether a competitor could have kept them.
Common failures:
Stress questions:
Test: Model the plan with 0 net new hires. What still works?
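A minimal version of that test, with hypothetical capacity numbers (rep count, attainment, and plan revenue are assumed): hold headcount flat and see how much of the revenue plan the current team can actually carry.

```python
# "Zero net new hires" stress test: what does the plan deliver with
# only the team you have today? All figures are illustrative.

current_reps = 4
attainment_per_rep = 600_000   # realistic annual attainment, not paper quota
plan_revenue = 5_000_000       # what the plan promises, hiring included

capacity = current_reps * attainment_per_rep
coverage = capacity / plan_revenue
print(f"Current-team capacity: ${capacity:,} -> covers {coverage:.0%} of plan")
```

Everything above that coverage line depends on hiring velocity — an assumption that deserves its own stress test.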
Common failures:
Stress questions:
ASSUMPTION: [Exact statement]
SOURCE: [Where this came from — model, investor pitch, team gut feel]
COUNTER-EVIDENCE
• [Specific evidence that challenges this assumption]
• [Comparable failure case]
• [Data point that contradicts the assumption]
DOWNSIDE MODEL
• Bear case (-30%): [Impact on plan]
• Stress case (-50%): [Impact on plan]
• Catastrophic (-80%): [Impact on plan — does the business survive?]
SENSITIVITY
This assumption has [HIGH / MEDIUM / LOW] sensitivity.
A 10% change → [X] change in outcome.
HEDGE
• Validation: [How to test this before betting on it]
• Contingency: [Plan B if it's wrong]
• Early warning: [Leading indicator to watch — and at what threshold to act]