Documentation-first development methodology. The goal is AI-ready documentation - when docs are clear enough, code generation becomes automatic. Triggers on "Build", "Create", "Implement", "Document", or "Spec out". Version 3.4 adds complete 13-item Clarity Gate with scoring rubric and self-assessment.
The Goal: AI-ready documentation. When documentation is clear enough, code generation becomes automatic.
The Insight:
"If your docs are good enough, AI writes the code. The hard work IS the documentation. Code is just the printout."
v3.4 Core Addition: Complete 13-item Clarity Gate with scoring rubric. The gate is the methodology—skip it and you're back to vibe coding.
| Version | Changes |
|---|---|
| 3.0 | Initial Stream Coding methodology |
| 3.1 | Clearer terminology, mandatory Clarity Gate |
| 3.3 | Document-type-aware placement (Anti-patterns, Test Cases, Error Handling in implementation docs) |
| 3.3.1 | Corrected time allocation (40/40/20), added Phase 4, added Rule of Divergence |
| 3.4 | Complete 13-item Clarity Gate, scoring rubric with weights, self-assessment questions, 4 mandatory section templates, Documentation Audit integrated into Phase 1 |
Why Most "AI-Assisted Development" Fails:
Messy Docs → Vague Specs → AI Guesses → Rework Cycles → 2-3x Velocity
Why Stream Coding Achieves 10-20x:
Clear Docs → Clear Specs → AI Executes → Minimal Rework → 10-20x Velocity
The Rule: Not all documents need all sections. Putting implementation details in strategic documents violates single-source-of-truth.
"If AI has to decide where to find information, you've already lost velocity."
| Type | Purpose | Examples |
|---|---|---|
| Strategic | WHAT and WHY | Master Blueprint, PRD, Vision docs, Business cases |
| Implementation | HOW | Technical Specs, API docs, Module specs, Architecture docs |
| Reference | Lookup | Schema Reference, Glossary, Configuration |
| Section | Strategic Docs | Implementation Docs | Reference Docs |
|---|---|---|---|
| Deep Links (References) | ✅ Required | ✅ Required | ✅ Required |
| Anti-patterns | ❌ Pointer only | ✅ Required | ❌ N/A |
| Test Case Specifications | ❌ Pointer only | ✅ Required | ❌ N/A |
| Error Handling Matrix | ❌ Pointer only | ✅ Required | ❌ N/A |
Wrong (violates single-source-of-truth):
Master Blueprint
├── Strategy content
├── Anti-patterns ← WRONG: duplicates Technical Spec
├── Test Cases ← WRONG: duplicates Testing doc
└── Error Matrix ← WRONG: duplicates Error Handling doc
Right (single-source-of-truth):
Master Blueprint (Strategic)
├── Strategy content
└── References
└── Pointer: "Anti-patterns → Technical Spec, Section 7"
Technical Spec (Implementation)
├── Implementation details
├── Anti-patterns ← CORRECT: lives here
├── Test Cases ← CORRECT: lives here
└── Error Matrix ← CORRECT: lives here
| Phase | Time | Focus |
|---|---|---|
| Phase 1: Strategic Thinking | 40% | WHAT to build, WHY it matters |
| Phase 2: AI-Ready Documentation | 40% | HOW to build (specs so clear AI has zero decisions) |
| Phase 3: Execution | 15% | Code generation + implementation |
| Phase 4: Quality & Iteration | 5% | Testing, refinement, divergence prevention |
The Counterintuitive Truth: 80% of time goes to documentation. 20% to code. This is why velocity is 10-20x—not because coding is faster, but because rework approaches zero.
Phase 1: Strategic Product Thinking
│
├─ Have existing documentation?
│ └─ YES → Start with Documentation Audit → then 7 Questions
│
└─ Starting fresh?
└─ Skip to 7 Questions
Skip this step if starting from scratch. The Documentation Audit only applies when you have existing documentation—previous specs, inherited docs, or accumulated notes.
Why clean existing docs? Because most documentation accumulates cruft: aspirational content, outdated decisions, duplicated information, and undecided wishes.
The Audit Process:
Apply the Clarity Test to all existing documentation:
| Check | Question |
|---|---|
| Actionable | Can AI act on this? If aspirational, delete it. |
| Current | Is this still the decision? If changed, update or remove. |
| Single Source | Is this said elsewhere? Consolidate to one place. |
| Decision | Is this decided? If not, don't include it. |
| Prompt-Ready | Would you put this in an AI prompt? If not, delete. |
Audit target: 40-50% reduction in volume without losing actionable information.
Once clean, proceed to the 7 Questions.
Before ANY new documentation, answer these with specificity. Vague answers = vague code.
| # | Question | ❌ Reject | ✅ Require |
|---|---|---|---|
| 1 | What exact problem are you solving? | "Help users manage tasks" | "Help [specific persona] achieve [measurable outcome] in [specific context]" |
| 2 | What are your success metrics? | "Users save time" | Numbers + timeline: "100 users, 25% conversion, 3 months" |
| 3 | Why will you win? | "Better UI and features" | Structural advantage: architecture, data moat, business model |
| 4 | What's the core architecture decision? | "Let AI decide" | Human decides based on explicit trade-off analysis |
| 5 | What's the tech stack rationale? | "Node.js because I like it" | Business rationale: "Node—team expertise, ship fast" |
| 6 | What are the MVP features? | 10+ "must-have" features | 3-5 truly essential, rest explicitly deferred |
| 7 | What are you NOT building? | "We'll see what users want" | Explicit exclusions with rationale |
Every implementation document MUST include these four sections. Without them, AI guesses—and guessing creates the velocity mirage.
Why: AI needs to know what NOT to do.
## Anti-Patterns (DO NOT)
| ❌ Don't | ✅ Do Instead | Why |
| --------------------------------- | -------------------------------- | -------------------------------- |
| Store timestamps as Date objects | Use ISO 8601 strings | Serialization issues |
| Hardcode configuration values | Use environment variables | Deployment flexibility |
| Use generic error messages | Specific error codes per failure | Debugging impossible otherwise |
| Skip validation on internal calls | Validate everything | Internal calls can have bugs too |
| Expose internal IDs in APIs | Use UUIDs or slugs | Security and flexibility |
Rules: Minimum 5 anti-patterns per implementation document.
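To make the first two rows concrete, here is a minimal TypeScript sketch; the object shapes and the `API_BASE_URL` variable are illustrative stand-ins, not from any spec:

```typescript
// ❌ Don't: Date objects serialize inconsistently across JSON layers,
// and hardcoded configuration locks the build to a single environment.
const badEvent = {
  createdAt: new Date(),                 // anti-pattern: Date object
  apiBaseUrl: "https://api.example.com", // anti-pattern: hardcoded config
};

// ✅ Do: ISO 8601 strings survive serialization round-trips;
// environment variables keep deployment flexible.
const goodEvent = {
  createdAt: new Date().toISOString(),        // e.g. "2025-01-15T09:30:00.000Z"
  apiBaseUrl: process.env.API_BASE_URL ?? "", // injected per environment
};
```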
Why: AI needs concrete verification criteria.
## Test Case Specifications
### Unit Tests Required
| Test ID | Component | Input | Expected Output | Edge Cases |
| ------- | ---------------- | -------------- | ---------------------- | -------------------------- |
| TC-001 | Tier classifier | 100 contacts | 20-30 in Critical tier | Empty list, all same score |
| TC-002 | Score calculator | Activity array | Score 0-100 | No events, >1000 events |
### Integration Tests Required
| Test ID | Flow | Setup | Verification | Teardown |
| ------- | --------- | ---------------- | ------------------- | ---------------- |
| IT-001 | Auth flow | Create test user | Token refresh works | Delete test user |
Rules: Minimum 5 unit tests, 3 integration tests per component.
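As a sketch of how a row like TC-001 becomes executable, here is a Vitest-style test; `classifyTiers`, its module path, and the contact shape are hypothetical placeholders for whatever your spec defines:

```typescript
import { describe, it, expect } from "vitest";
// Hypothetical tier classifier; real module path and signature come from your spec.
import { classifyTiers } from "./tierClassifier";

describe("TC-001: Tier classifier", () => {
  it("places 20-30 of 100 contacts in the Critical tier", () => {
    const contacts = Array.from({ length: 100 }, (_, i) => ({ id: i, score: i }));
    const critical = classifyTiers(contacts).filter((c) => c.tier === "Critical");
    expect(critical.length).toBeGreaterThanOrEqual(20);
    expect(critical.length).toBeLessThanOrEqual(30);
  });

  it("edge case: empty list yields no tiers", () => {
    expect(classifyTiers([])).toEqual([]);
  });
});
```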
Why: AI needs to know how to handle every failure mode.
## Error Handling Matrix
### External Service Errors
| Error Type | Detection | Response | Fallback | Logging | Alert |
| ----------- | ------------ | -------------------- | --------------- | ------- | ------------- |
| API timeout | >5s response | Retry 3x exponential | Return cached | ERROR | If 3 in 5 min |
| Rate limit | 429 response | Pause 15 min | Queue for retry | WARN | If >5/hour |
### User-Facing Errors
| Error Type | User Message | Code | Recovery Action |
| --------------- | ------------------------------------ | ---- | ----------------- |
| Quota exceeded | "You've used all checks this month." | 403 | Show upgrade CTA |
| Session expired | "Please sign in again." | 401 | Redirect to login |
Rules: Every external service and user-facing error must be specified.
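As an illustration, the API-timeout row might compile into logic like this TypeScript sketch. The 5-second detection, 3 exponential retries, and cached fallback come from the matrix; `fetchWithTimeout`, the in-memory `cache`, and the backoff base are assumptions:

```typescript
async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  // AbortController enforces the ">5s response" detection rule from the matrix
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// Illustrative cache; the real fallback store comes from your spec.
const cache = new Map<string, unknown>();

async function getWithRetry(url: string): Promise<unknown> {
  for (let attempt = 0; attempt < 3; attempt++) {       // retry 3x
    try {
      const res = await fetchWithTimeout(url, 5_000);   // detection: >5s
      const data = await res.json();
      cache.set(url, data);
      return data;
    } catch (err) {
      console.error(`attempt ${attempt + 1} failed`, err); // logging: ERROR
      // Exponential backoff: 1s, 2s, 4s. Alerting (3 failures in
      // 5 min) is omitted from this sketch.
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1_000));
    }
  }
  return cache.get(url); // fallback: return cached
}
```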
Why: AI needs to navigate to exact locations. "See Technical Annexes" is useless.
## References
### Schema References
| Topic | Location | Anchor |
| ------------- | ------------------------------------------------------ | --------------- |
| User profiles | [Schema Reference](../schemas/schema.md#user_profiles) | `user_profiles` |
| Events table | [Schema Reference](../schemas/schema.md#events) | `events` |
### Implementation References
| Topic | Document | Section |
| ------------- | ------------------------------------------ | ----------- |
| Auth flow | [API Spec](../specs/api.md#authentication) | Section 3.2 |
| Rate limiting | [API Spec](../specs/api.md#rate-limiting) | Section 5 |
Rules: NEVER use vague references. ALWAYS include document path + section anchor.
⛔ NEVER SKIP THIS GATE.
This is the difference between stream coding and vibe coding. A 7/10 spec generates 7/10 code that needs 30% rework.
Before ANY code generation, verify ALL items pass:
| # | Check | Question |
|---|---|---|
| 1 | Actionable | Can AI act on every section? (No aspirational content) |
| 2 | Current | Is everything up-to-date? (No outdated decisions) |
| 3 | Single Source | No duplicate information across docs? |
| 4 | Decision, Not Wish | Every statement is a decision, not a hope? |
| 5 | Prompt-Ready | Would you put every section in an AI prompt? |
| 6 | No Future State | All "will eventually," "might," "ideally" language removed? |
| 7 | No Fluff | All motivational/aspirational content removed? |
| # | Check | Question |
|---|---|---|
| 8 | Type Identified | Document type clearly marked? (Strategic vs Implementation vs Reference) |
| 9 | Anti-patterns Placed | Anti-patterns in implementation docs only? (Strategic docs have pointers) |
| 10 | Test Cases Placed | Test cases in implementation docs only? (Strategic docs have pointers) |
| 11 | Error Handling Placed | Error handling matrix in implementation docs only? |
| 12 | Deep Links Present | Deep links in ALL documents? (No vague "see elsewhere") |
| 13 | No Duplicates | Strategic docs use pointers, not duplicate content? |
- [ ] All 7 Foundation Checks pass
- [ ] All 6 Document Architecture Checks pass
- [ ] AI Coder Understandability Score ≥ 9/10
If ANY item fails → Fix before proceeding to Phase 3
Use this rubric to score documentation. Target: 9+/10 before Phase 3.
| Criterion | Weight | 10/10 Requirement |
|---|---|---|
| Actionability | 25% | Every section has Implementation Implication |
| Specificity | 20% | All numbers concrete, all thresholds explicit |
| Consistency | 15% | Single source of truth, no duplicates across docs |
| Structure | 15% | Tables over prose, clear hierarchy, predictable format |
| Disambiguation | 15% | Anti-patterns present (5+ per impl doc), edge cases explicit |
| Reference Clarity | 10% | Deep links only, no vague references |
| Score | Meaning | Action |
|---|---|---|
| 10/10 | AI can implement with zero clarifying questions | Proceed to Phase 3 |
| 9/10 | 1 minor clarification needed | Fix, then proceed |
| 7-8/10 | 3-5 ambiguities exist | Major revision required |
| <7/10 | Not AI-ready, fundamental issues | Return to Phase 2 |
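Assuming the overall score is the weighted sum of per-criterion scores (the rubric lists weights but does not state the aggregation formula, so treat this as one reasonable reading), the computation is trivial to automate:

```typescript
// Weights from the rubric above; per-criterion scores are 1-10.
const weights: Record<string, number> = {
  actionability: 0.25,
  specificity: 0.2,
  consistency: 0.15,
  structure: 0.15,
  disambiguation: 0.15,
  referenceClarity: 0.1,
};

// ASSUMPTION: overall score = Σ (weight × criterion score).
function clarityScore(scores: Record<string, number>): number {
  return Object.entries(weights).reduce(
    (sum, [criterion, w]) => sum + w * (scores[criterion] ?? 0),
    0,
  );
}

// Example: strong everywhere except disambiguation (missing anti-patterns).
const score = clarityScore({
  actionability: 10, specificity: 10, consistency: 10,
  structure: 10, disambiguation: 6, referenceClarity: 10,
});
console.log(score.toFixed(1)); // "9.4" → fix the gap, then proceed
```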
Before Phase 3, run the self-assessment questions against your documentation. If any answer is the opposite of what it should be, fix before proceeding.
Use this prompt to have Claude score your documentation:
**ROLE:** You are the Clarity Gatekeeper. Your job is to ruthlessly
evaluate software specifications for ambiguity, incompleteness, and
"vibe coding" tendencies.
**INPUT:** I will provide a technical specification document.
**TASK:** Grade this document on a scale of 1-10 using this rubric:
**RUBRIC:**
1. **Actionability (25%):** Does every section dictate a specific
implementation detail? (Reject aspirational like "fast" or
"scalable" without metrics)
2. **Specificity (20%):** Are data types, error codes, thresholds,
and edge cases explicitly defined? (Reject "handle errors appropriately")
3. **Consistency (15%):** Single source of truth? No duplicates?
4. **Structure (15%):** Tables over prose? Clear hierarchy?
5. **Disambiguation (15%):** Anti-patterns present? Edge cases explicit?
6. **Reference Clarity (10%):** Deep links only? No vague references?
**OUTPUT FORMAT:**
1. **Score:** [X]/10
2. **Criterion Breakdown:** Score each of the 6 criteria
3. **Hallucination Risks:** List specific lines where an AI developer
would have to guess or make an assumption
4. **The Fix:** Rewrite the 3 most ambiguous sections into AI-ready specs
**THRESHOLD:**
- 9-10: Ready for code generation
- 7-8: Needs revision before proceeding
- <7: Return to Phase 2
1. GENERATE: Feed spec to AI → Receive code
2. VERIFY: Run tests → Check against spec
- Does output match spec exactly?
- Yes → Continue
- No → Fix SPEC first, then regenerate
3. INTEGRATE: Commit → Update documentation if needed
"When code fails, fix the spec—not the code."
If generated code doesn't work, fix the spec and regenerate; never patch the output by hand.
Why: Manual code patches create divergence between spec and reality. Divergence compounds. Eventually your spec is fiction and you're back to manual development.
Every time you manually edit AI-generated code without updating the spec, you create Divergence. Divergence is technical debt.
Why Divergence is Dangerous: it compounds silently until the spec no longer describes the system. Handle common scenarios like this:
| Scenario | ❌ Wrong | ✅ Right |
|---|---|---|
| Bug in generated code | Fix code manually | Fix spec, regenerate |
| Missing edge case | Add code patch | Add to spec, regenerate |
| Performance issue | Optimize code | Document constraint, regenerate |
| "Quick fix" needed | "Just this once..." | No. Fix spec. |
This takes 5 minutes longer than a quick hotfix. But it ensures your documentation never drifts from reality.
This methodology activates when the user says "Build", "Create", "Implement", "Document", or "Spec out".
Documentation Audit (if existing docs): Apply the Clarity Test; target a 40-50% volume reduction.
Phase 1: Answer the 7 Questions with specific, measurable answers.
Phase 2: Write documentation with correct type placement and all 4 mandatory sections.
Clarity Gate: Verify all 13 checks pass and the score is ≥ 9/10.
Phase 3-4: Generate, verify against spec, and fix the spec (never the code) on failure.
# [Document Title] (Strategic)
## 1. [Strategic Section]
[Strategic content]
**Implementation Implication:** [Concrete effect on code/architecture]
## 2. [Another Section]
[Strategic content]
**Implementation Implication:** [Concrete effect on code/architecture]
## N. REFERENCES
### Implementation Details Location
| Content Type | Location |
| -------------- | ---------------------------------------- |
| Anti-patterns | [Technical Spec, Section 7](path#anchor) |
| Test Cases | [Testing Doc, Section 3](path#anchor) |
| Error Handling | [Error Handling Doc](path#anchor) |
### Schema References
| Topic | Location | Anchor |
| ------- | ------------------- | -------- |
| [Topic] | [Path](path#anchor) | `anchor` |
_This document provides strategic overview. Technical documents provide implementation specifications._
# [Document Title] (Implementation)
## 1. [Implementation Section]
[Technical details]
## N-3. ANTI-PATTERNS (DO NOT)
| ❌ Don't | ✅ Do Instead | Why |
| -------------- | ------------------ | -------- |
| [Anti-pattern] | [Correct approach] | [Reason] |
## N-2. TEST CASE SPECIFICATIONS
### Unit Tests
| Test ID | Component | Input | Expected Output | Edge Cases |
| ------- | ----------- | ------- | --------------- | ------------ |
| TC-XXX | [Component] | [Input] | [Output] | [Edge cases] |
### Integration Tests
| Test ID | Flow | Setup | Verification | Teardown |
| ------- | ------ | ------- | ------------ | --------- |
| IT-XXX | [Flow] | [Setup] | [Verify] | [Cleanup] |
## N-1. ERROR HANDLING MATRIX
| Error Type | Detection | Response | Fallback | Logging |
| ---------- | -------------- | ---------- | ---------- | ------- |
| [Error] | [How detected] | [Response] | [Fallback] | [Level] |
## N. REFERENCES
| Topic | Location | Anchor |
| ------- | ------------------- | -------- |
| [Topic] | [Path](path#anchor) | `anchor` |
Foundation (7): 1. Actionable? 2. Current? 3. Single source? 4. Decision, not wish? 5. Prompt-ready? 6. No future state? 7. No fluff?
Architecture (6): 8. Type identified? 9. Anti-patterns placed correctly? 10. Test cases placed correctly? 11. Error handling placed correctly? 12. Deep links present? 13. No duplicates?
| Criterion | Weight |
|---|---|
| Actionability | 25% |
| Specificity | 20% |
| Consistency | 15% |
| Structure | 15% |
| Disambiguation | 15% |
| Reference Clarity | 10% |
┌─────────────────────────────────────────────────────────────┐
│ Have existing docs? → Documentation Audit (conditional) │
├─────────────────────────────────────────────────────────────┤
│ │
│ Phase 1 (Strategy): 40% ──┐ │
│ Phase 2 (Specs): 40% ─────┼── 80% Documentation │
│ │ │
│ ⚠️ CLARITY GATE ──────────┘ │
│ │ │
│ Phase 3 (Code): 15% ──────┼── 20% Code │
│ Phase 4 (Quality): 5% ────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Version: 3.4. Changes from 3.3.1: complete 13-item Clarity Gate, scoring rubric with weights, self-assessment questions, 4 mandatory section templates, and Documentation Audit integrated into Phase 1.
Core Insight: The Clarity Gate is the methodology. Everything else supports getting docs to 9+/10.
Stream Coding by Francesco Marinoni Moretto — CC BY 4.0 github.com/frmoretto/stream-coding
END OF STREAM CODING v3.4