Use after running 2+ research skills (critical-research, tech-feasibility, narrative-auditor, codebase-audit) to synthesize findings into a unified decision document. Resolves conflicts between sources, weighs evidence, and produces an actionable recommendation.
Aggregate findings from multiple research skills into a single decision document that resolves conflicts, weighs evidence, and delivers a clear recommendation.
Core principle: Decisions should be traceable to evidence. Every recommendation in the decision document must cite which research skill produced the supporting or opposing evidence.
Violating the letter but not the spirit doesn't count. Summarizing research outputs without resolving contradictions is not synthesis — it's concatenation. Real synthesis confronts disagreements, assigns weights, and commits to a recommendation even when evidence is mixed.
Announce at start:
"Synthesizing research findings into a decision document."
After research is complete:
Manual triggers:
Prompted by other skills:
- brainstorming Phase 3 invokes tech-feasibility and/or critical-research — synthesis aggregates before returning to design
- narrative-auditor + codebase-audit on the same topic — findings need reconciliation

Do NOT use when:
Scan the current conversation for outputs from research skills. Identify each by its distinctive structure:
| Skill | Recognition Pattern |
|---|---|
| critical-research | Verdict: + Confidence: + evidence tables |
| tech-feasibility | Decision: (Go/Conditional-Go/Pivot/No-Go) + kill criteria |
| narrative-auditor | Per-claim verdicts (ACCURATE/MISLEADING/FALSE) |
| codebase-audit | Accuracy score (X/Y = Z%) + claim verdicts |
If findings are from a previous session, ask the user to paste or provide the relevant outputs.
Minimum requirement: 2 distinct research outputs. If only 1 exists, inform the user: "Only one research source found. Its conclusion stands on its own — synthesis adds value when there are multiple sources to reconcile."
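The scan step above can be sketched as regex probes against the conversation text. The exact patterns here are assumptions based on the recognition table, not the skills' literal output formats; tune them to the real outputs.

```python
import re

# Hypothetical output signatures, one per research skill.
# Each regex mirrors the "Recognition Pattern" column above.
PATTERNS = {
    "critical-research": re.compile(r"Verdict:.*?Confidence:", re.S),
    "tech-feasibility": re.compile(r"Decision:\s*(Go|Conditional-Go|Pivot|No-Go)"),
    "narrative-auditor": re.compile(r"\b(ACCURATE|MISLEADING|FALSE)\b"),
    "codebase-audit": re.compile(r"Accuracy score \(\d+/\d+ = \d+%\)"),
}

def find_research_outputs(conversation: str) -> list[str]:
    """Return the skills whose output signature appears in the conversation."""
    return [skill for skill, pat in PATTERNS.items() if pat.search(conversation)]
```

If `len(find_research_outputs(...)) < 2`, stop and show the single-source message rather than synthesizing.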
Map each skill's verdict system to a common schema:
| Original Verdict | Normalized Strength | Normalized Status |
|---|---|---|
| critical-research: Supported / High confidence | Strong | Confirmed |
| critical-research: Partially Supported / Medium | Moderate | Partially confirmed |
| critical-research: Weakened / Low | Weak | Challenged |
| critical-research: Falsified | Strong | Refuted |
| tech-feasibility: Go / High confidence | Strong | Confirmed |
| tech-feasibility: Conditional-Go | Moderate | Conditionally confirmed |
| tech-feasibility: Pivot | Moderate | Challenged |
| tech-feasibility: No-Go | Strong | Refuted |
| narrative-auditor: ACCURATE | Strong | Confirmed |
| narrative-auditor: DECONTEXTUALIZED | Moderate | Partially confirmed |
| narrative-auditor: MISLEADING / FALSE | Strong | Refuted |
| narrative-auditor: UNVERIFIABLE | Weak | Uncertain |
| codebase-audit: VERIFIED | Strong | Confirmed |
| codebase-audit: PARTIALLY VERIFIED | Moderate | Partially confirmed |
| codebase-audit: FALSE | Strong | Refuted |
| codebase-audit: UNVERIFIED / UNFALSIFIABLE | Weak | Uncertain |
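The normalization table translates directly into a lookup keyed by (skill, original verdict). This is a sketch of that mapping, not a required implementation:

```python
# (skill, original verdict) -> (normalized strength, normalized status),
# transcribed from the normalization table above.
NORMALIZE = {
    ("critical-research", "Supported"): ("Strong", "Confirmed"),
    ("critical-research", "Partially Supported"): ("Moderate", "Partially confirmed"),
    ("critical-research", "Weakened"): ("Weak", "Challenged"),
    ("critical-research", "Falsified"): ("Strong", "Refuted"),
    ("tech-feasibility", "Go"): ("Strong", "Confirmed"),
    ("tech-feasibility", "Conditional-Go"): ("Moderate", "Conditionally confirmed"),
    ("tech-feasibility", "Pivot"): ("Moderate", "Challenged"),
    ("tech-feasibility", "No-Go"): ("Strong", "Refuted"),
    ("narrative-auditor", "ACCURATE"): ("Strong", "Confirmed"),
    ("narrative-auditor", "DECONTEXTUALIZED"): ("Moderate", "Partially confirmed"),
    ("narrative-auditor", "MISLEADING"): ("Strong", "Refuted"),
    ("narrative-auditor", "FALSE"): ("Strong", "Refuted"),
    ("narrative-auditor", "UNVERIFIABLE"): ("Weak", "Uncertain"),
    ("codebase-audit", "VERIFIED"): ("Strong", "Confirmed"),
    ("codebase-audit", "PARTIALLY VERIFIED"): ("Moderate", "Partially confirmed"),
    ("codebase-audit", "FALSE"): ("Strong", "Refuted"),
    ("codebase-audit", "UNVERIFIED"): ("Weak", "Uncertain"),
    ("codebase-audit", "UNFALSIFIABLE"): ("Weak", "Uncertain"),
}

def normalize(skill: str, verdict: str) -> tuple[str, str]:
    """Map a skill's native verdict to the common (strength, status) schema."""
    return NORMALIZE[(skill, verdict)]
```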
Create a findings matrix:
## Findings Matrix
| Topic / Claim | critical-research | tech-feasibility | narrative-auditor | codebase-audit | Consensus |
|--------------|-------------------|------------------|-------------------|----------------|-----------|
| [Claim A] | Confirmed (Strong) | — | — | — | Single source |
| [Claim B] | Confirmed (Mod) | Confirmed (Strong) | — | — | Agreement |
| [Claim C] | Challenged (Weak) | Confirmed (Strong) | — | — | **Conflict** |
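The Consensus column can be derived mechanically from the normalized statuses. A minimal sketch, assuming "Uncertain" findings neither support nor oppose a claim:

```python
def consensus(statuses: list[str]) -> str:
    """Derive the Consensus column for one claim from its normalized statuses."""
    if len(statuses) == 1:
        return "Single source"
    support = {"Confirmed", "Partially confirmed", "Conditionally confirmed"}
    oppose = {"Challenged", "Refuted"}
    sides = set()
    for status in statuses:
        if status in support:
            sides.add("support")
        elif status in oppose:
            sides.add("oppose")
    if len(sides) > 1:
        return "Conflict"  # sources disagree: resolve explicitly below
    if not sides:
        return "Uncertain"  # every source was Uncertain
    return "Agreement"
```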
For each conflict, apply the evidence hierarchy:
When evidence is evenly split, state the uncertainty explicitly rather than forcing a false conclusion.
Produce an ADR-style document:
# Decision: [Title]
**Date**: YYYY-MM-DD
**Status**: Proposed | Accepted | Superseded by [link]
**Decision makers**: [user / team]
## Context
[What prompted this research? What question needed answering?]
## Research Conducted
| Skill | Focus | Verdict | Confidence |
|-------|-------|---------|------------|
| critical-research | [topic] | [normalized] | [High/Medium/Low] |
| tech-feasibility | [topic] | [normalized] | [High/Medium/Low] |
| ... | ... | ... | ... |
## Findings
### Agreements
<!-- Claims confirmed by 2+ sources -->
### Conflicts and Resolutions
<!-- Disagreements with explicit resolution rationale -->
### Uncertainties
<!-- Claims that remain unresolved -->
## Decision
[Clear, actionable statement of what we decided and why.]
## Consequences
### Expected Benefits
- [Benefit traced to evidence]
### Known Risks
- [Risk traced to evidence]
### Open Questions
- [What we still don't know]
## Alternatives Considered
| Alternative | Why Rejected | Source |
|-------------|-------------|--------|
| [Option B] | [Reason] | [Which skill found this] |
- Save to: `docs/decisions/YYYY-MM-DD-<topic>.md`
- Commit message: `docs(decisions): add <topic> decision record`
- brainstorming → return to brainstorming with the decision as input for design
- superpowers:writing-plans if implementation follows

Context: User ran tech-feasibility on Redis vs. Memcached, then critical-research on "Redis is always faster than Memcached for session storage."
WRONG:
"tech-feasibility says Go for Redis. critical-research says the 'always faster' claim is Partially Supported. So Redis is probably fine." (No conflict resolution, no decision document)
RIGHT:
"Synthesizing research findings into a decision document."
- Inventories: tech-feasibility (Go, High confidence for Redis) + critical-research (Partially Supported — Redis is faster for reads but Memcached wins on multi-threaded write loads)
- Identifies conflict: tech-feasibility assumed read-heavy workload; critical-research found write-heavy counter-evidence
- Resolves: "Our session store is 90% reads, 10% writes — Redis's read advantage applies to our case"
- Produces ADR with clear "Decision: Use Redis" + risk note about write-heavy growth scenario
Context: User ran narrative-auditor on a vendor's blog post
claiming "99.9% uptime", then codebase-audit on the vendor's SDK
docs.
RIGHT:
- narrative-auditor: "99.9% uptime" rated DECONTEXTUALIZED (only applies to US-East region)
- codebase-audit: SDK docs claim "automatic failover" rated FALSE (no failover code found in SDK)
- Synthesis: "Vendor uptime claim is region-limited and their SDK lacks the failover they advertise. Recommendation: require SLA contractually, implement our own failover."
| Excuse | Reality |
|---|---|
| "The research was clear, no synthesis needed" | If it was clear, synthesis takes 2 minutes. If you're wrong about clarity, synthesis catches the gap. |
| "I can just pick the strongest finding" | Cherry-picking one source while ignoring others is confirmation bias, not synthesis. |
| "The conflicts don't matter for our case" | Document why they don't matter. If you can't articulate it, they might matter more than you think. |
| "An ADR is overkill" | The ADR can be 10 lines for a minor decision. The format ensures completeness, not length. |
Narrative passages of the decision document should carry some warmth, as if sharing observations with a colleague. Analytical and judgment passages stay objective.
Scope: Executive Summary, narrative synthesis, and recommendation rationale. The evidence matrix and conflict-resolution tables remain objective and neutral.
Topic slug: `[a-z0-9-]` characters only. Verify `docs/decisions/` exists before writing; create it if missing.
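The slug and directory checks can be sketched together; the path layout follows the `docs/decisions/YYYY-MM-DD-<topic>.md` convention above, and the slug rule is an assumption about how arbitrary topics are reduced to `[a-z0-9-]`:

```python
import re
from datetime import date
from pathlib import Path

def decision_path(topic: str, root: Path = Path("docs/decisions")) -> Path:
    """Build the decision-record path, creating the directory if missing."""
    # Collapse every run of characters outside [a-z0-9-] into one hyphen.
    slug = re.sub(r"[^a-z0-9-]+", "-", topic.lower()).strip("-")
    root.mkdir(parents=True, exist_ok=True)  # create docs/decisions/ if missing
    return root / f"{date.today():%Y-%m-%d}-{slug}.md"
```

For example, `decision_path("Redis vs. Memcached")` yields a dated file ending in `redis-vs-memcached.md`.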