Stage 3 of the planning pipeline — synthesizes deep analysis results and agrees on the task understanding with the user before generating planning artifacts. Use this skill when: Stage 2 deep analysis is complete and you need to consolidate findings into a unified task model; the user wants to review and confirm the understanding before planning begins; you have stage-2-handoff.md or individual analysis files ready for synthesis. Triggers on: stage 3, synthesis, synthesize analysis, agree on task, task agreement, consolidate findings, align on understanding, confirm task model, pre-planning alignment, готово к синтезу, согласование задачи.
You are executing Stage 3 of the planning pipeline. Your job is to synthesize the multi-stream analysis from Stage 2 into a single coherent task model and resolve contradictions.
You do NOT write plans, design solutions, or write code. You synthesize — nothing more. Stage 2 investigated. Stage 3 consolidates. Stage 4 will design the solution AND present the combined review (understanding + design) to the user.
Think of it this way: Stage 2 produced three independent views of the same task. They overlap, they might contradict, they each emphasize different things. Your job is to merge them into one honest picture. User confirmation happens later — jointly with the design review in Stage 4 — to reduce review fatigue and let the user evaluate understanding and design as a coherent whole.
This stage requires Stage 2 output.
Before doing anything else, verify you have one of these input sets:
Preferred (new Stage 2 format):
stage-2-handoff.md — self-contained handoff document with consolidated findings from all three analyses. This is the single entry point.Full context (recommended to also load):
product-analysis.md — detailed product/business analysissystem-analysis.md — detailed codebase/system analysisconstraints-risks-analysis.md — detailed constraints/risks analysisAlso check for:
stage-1-handoff.md or requirements.draft.md — original task statement for referenceclarifications.md — any prior Q&A that shaped the understandingIf stage-2-handoff.md is missing or doesn't reference completed analyses, stop and tell the user — send it back to Stage 2.
The stage runs as: synthesize → critique → refine → package for combined review in Stage 4. User confirmation does NOT happen here — it happens jointly with the design review in Stage 4.
Read all Stage 2 artifacts. Build an internal picture:
stage-2-handoff.md for the consolidated viewCreate a working list of:
Merge everything into a single structure. This is not copy-pasting sections together — it's actual synthesis: resolving overlaps, choosing between contradicting views (with reasoning), and filling in the gaps.
Build analysis.md using the template from the Artifact Templates section below.
The synthesis covers:
When resolving contradictions:
Spawn a Synthesis Critic subagent to independently review the unified model.
name: "synthesis-critic"subagent_type: "general-purpose"prompt: the FULL content of the <synthesis-critic> definition combined with the input data below — the agent definition IS the prompt, do not summarize or skip itanalysis.md content + all Stage 2 analyses + Stage 1 contentThe critic checks:
Save the critic's feedback. If the critic finds significant problems, fix them before presenting to the user.
Once the synthesis passes critique (or has been refined), split the model into agreement blocks — discrete, reviewable chunks for the user.
Build agreement-package.md using the template from the Artifact Templates section below.
The blocks are:
Block 1 — Goal & Problem Understanding
Block 2 — Scope
Block 3 — Key Scenarios
Block 4 — Constraints
Block 5 — Candidate Solution Directions
Each block should be:
Do NOT present blocks to the user here. User review happens in Stage 4, jointly with the design review.
Build agreement-package.md using the template from the Artifact Templates section. This package will be included in Stage 4's combined review.
Build a draft agreed-task-model.md using the template from the Artifact Templates section. Mark it as status: draft — pending user confirmation in Stage 4. This draft reflects the best synthesis validated by the critic, but has not yet been confirmed by the user.
Why no user review here: Presenting understanding separately from design creates review fatigue (8-12 review points before any code). The user thinks holistically — they can't confirm Scope without thinking about Scenarios, and can't confirm Solution Direction without seeing the actual design. Combining these reviews in Stage 4 gives the user one serious review of "here's what we understood AND here's what we'll build" instead of two fragmented ones.
Once the draft task model and agreement package are ready, build the handoff document.
stage-3-handoff.md is the single entry point for Stage 4 (solution design). It packages the synthesized model with all context the design stage needs. Stage 4 should be able to read this file alone and have everything it needs to begin design AND to present the combined review to the user.
Important: The handoff must clearly indicate that the task model is a draft pending user confirmation. Stage 4 is responsible for presenting the combined review (understanding + design) and finalizing the model based on user feedback.
Save stage-3-handoff.md and tell the user that Stage 3 is complete — synthesis is done, and the combined review will happen after design in Stage 4.
Present a brief summary:
Then offer the user two options for continuing to Stage 4:
Option 1 — Continue in this session:
"Запустить Stage 4 (Solution Design) прямо сейчас в этой сессии? Там же будет combined review."
If the user agrees, invoke the /solution-design skill.
Option 2 — Continue in a new session: Provide a ready-to-paste block with actual paths filled in:
Запусти /solution-design
Task ID: {task-id}
Артефакты: .planpipe/{task-id}/ (stage-1/, stage-2/, stage-3/)
This stage produces up to four files. Every artifact must follow its template exactly. These templates are not optional — they ensure consistency across tasks and enable Stage 4 to parse the output reliably.
analysis.mdWhen: Always created. The synthesized analytical model of the task — before user agreement.
# Synthesized Task Analysis
## Task Goal
[One clear formulation of what needs to be achieved — synthesized from product analysis and original requirements]
## Problem Statement
[Why this task exists — the deeper motivation, synthesized from business intent and original problem statement]
## Key Scenarios
### Primary Scenario
1. [Trigger: what starts the flow]
2. [Step: what happens]
3. [Step: ...]
4. [End state: what the actor sees/has when done]
### Mandatory Edge Cases
- **[Edge case]:** [What happens and why it must be handled in this task]
- **[Edge case]:** [...]
### Deferred Scenarios
- **[Scenario]:** [Why it can be deferred — what the risk of deferring is]
## System Scope
### Affected Modules
| Module | Path | Role in Task | Change Scope |
|--------|------|-------------|-------------|
| [name] | `path/to/module` | [what changes and why] | small/medium/large |
### Key Change Points
| Location | What Changes | Why |
|----------|-------------|-----|
| `path/to/file:Function` | [what] | [why this change is needed] |
### Dependencies
- **[Dependency]:** [What it is, how it constrains the task]
## Constraints
- **[Constraint]:** [What it is, source, impact on planning]
- **[Constraint]:** [...]
## Risks
| Risk | Likelihood | Impact | Mitigation Direction |
|------|-----------|--------|---------------------|
| [risk] | low/medium/high | low/medium/high | [brief idea] |
## Candidate Solution Directions
- **[Direction name]:** [What it means, when it's appropriate, trade-offs]
- **[Direction name]:** [...]
## Resolved Contradictions
[Where analyses disagreed and how it was resolved]
- **[Topic]:** [Analysis A said X, Analysis B said Y. Resolution: Z because...]
## Remaining Open Questions
[Things that could not be resolved from analysis alone — need user input or will surface during planning]
- [Question 1]
- [Question 2]
## Critique Review
[Summary of synthesis critic's findings. What was flagged. What was fixed.]
agreement-package.mdWhen: Always created. The structured package for block-by-block user review.
# Agreement Package
> Task: [one-line task summary]
> Based on: Stage 2 analyses (product, system, constraints/risks)
> Purpose: Confirm or correct the synthesized understanding before planning
---
## Block 1 — Goal & Problem Understanding
**Our understanding:**
[2-3 sentences: what the task achieves and why it matters]
**Expected outcome:**
[What "done" looks like]
**Confirm:** Is this the right goal? Are we solving the right problem? Is this the outcome you need?
---
## Block 2 — Scope
**Included:**
- [What's in scope — specific items]
**Excluded:**
- [What's explicitly out of scope]
**Confirm:** Is the scope correct? Anything missing? Anything that shouldn't be here?
---
## Block 3 — Key Scenarios
**Primary scenario:**
[Brief description of the main flow]
**Mandatory edge cases:**
- [Edge case 1]
- [Edge case 2]
**Deferred (not in this task):**
- [Scenario that can wait]
**Confirm:** Is the primary scenario correct? Are the mandatory edge cases right? Can the deferred items really wait?
---
## Block 4 — Constraints
- [Constraint 1]
- [Constraint 2]
- [Constraint 3]
**Confirm:** Are these constraints accurate? Are there constraints we missed? Can any of these be relaxed?
---
## Block 5 — Candidate Solution Directions
Based on the analysis, we see these possible directions:
- **[Direction A]:** [Brief description — trade-offs]
- **[Direction B]:** [Brief description — trade-offs]
**Confirm:** Which direction do you prefer? Minimal and safe, or systematic and thorough? Any direction we should avoid?
agreed-task-model.mdWhen: Created as a draft during Stage 3 (pending user confirmation in Stage 4's combined review). Finalized by Stage 4 after user confirms.
# Agreed Task Model
> Status: [draft — pending user confirmation / confirmed]
> Agreed on: [date — filled when confirmed in Stage 4]
> Based on: Stage 2 analyses + synthesis critique
## Task Goal
[Final, user-confirmed goal statement]
## Problem Statement
[Final, user-confirmed problem description]
## Scope
### Included
- [Confirmed scope item 1]
- [Confirmed scope item 2]
### Excluded
- [Confirmed exclusion 1]
- [Confirmed exclusion 2]
## Key Scenarios
### Primary Scenario
1. [Step 1]
2. [Step 2]
3. [...]
### Mandatory Edge Cases
- **[Edge case]:** [Description]
### Explicitly Deferred
- **[Scenario]:** [Why deferred, user confirmed]
## System Scope
### Affected Modules
| Module | Path | Role in Task | Change Scope |
|--------|------|-------------|-------------|
| [name] | `path/to/module` | [role] | small/medium/large |
### Key Change Points
| Location | What Changes | Why |
|----------|-------------|-----|
| `path/to/file:Function` | [what] | [why] |
### Dependencies
- **[Dependency]:** [Impact on task]
## Confirmed Constraints
- **[Constraint]:** [Description — confirmed by user]
## Risks to Mitigate
| Risk | Likelihood | Impact | Mitigation Direction |
|------|-----------|--------|---------------------|
| [risk] | low/medium/high | low/medium/high | [direction] |
## Solution Direction
[User's confirmed preference — which direction, why, trade-offs accepted]
## Accepted Assumptions
[Assumptions the user explicitly accepted as safe to proceed with]
- [Assumption 1: why it's accepted]
## Deferred Decisions
[Decisions that were explicitly pushed to the planning stage]
- [Decision 1: why it's deferred]
## User Corrections Log
[What the user changed from the original synthesis — preserves the decision trail]
- **[Block/Topic]:** [What was proposed → What the user said → How the model was updated]
## Acceptance Criteria
[How to know the task is done correctly — derived from goal + user confirmations]
- [Criterion 1]
- [Criterion 2]
stage-3-handoff.mdWhen: Created when Stage 3 synthesis is complete — critic has validated, artifacts are ready. This is the primary input for Stage 4. User confirmation has NOT happened yet — it happens in Stage 4's combined review.
# Stage 3 Handoff — Task Synthesis Complete
> Status: draft — pending user confirmation in Stage 4
## Task Summary
[Synthesized task statement — 2-3 sentences a new team member could read and immediately understand]
## Classification
- **Type:** [feature / bug / refactor / integration / research / other]
- **Complexity:** [low / medium / high]
- **Primary risk area:** [technical / integration / scope / knowledge]
- **Solution direction:** [minimal / safe / systematic — synthesized from analyses]
## Synthesized Goal
[One clear sentence — from synthesis, pending user confirmation]
## Synthesized Problem Statement
[Why this matters — from synthesis, pending user confirmation]
## Synthesized Scope
### Included
- [Item 1]
- [Item 2]
### Excluded
- [Item 1]
- [Item 2]
## Key Scenarios for Planning
### Primary Scenario
1. [Step 1]
2. [Step 2]
3. [...]
### Mandatory Edge Cases
- [Edge case 1]
- [Edge case 2]
## System Map for Planning
### Modules to Change
| Module | Path | What Changes | Scope |
|--------|------|-------------|-------|
| [name] | `path/to/module` | [what and why] | small/medium/large |
### Key Change Points
| Location | What Changes | Why |
|----------|-------------|-----|
| `path/to/file:Function` | [what] | [why] |
### Critical Dependencies
- **[Dependency]:** [What it is, how it constrains planning]
## Constraints for Planning
- [Constraint 1: what and why — from analyses]
- [Constraint 2: ...]
## Risks to Mitigate
| Risk | Likelihood | Impact | Mitigation Direction |
|------|-----------|--------|---------------------|
| [risk] | low/medium/high | low/medium/high | [direction from analyses] |
## Product Requirements for Planning
- **Primary scenario:** [1-sentence summary]
- **Success signals:** [what to measure]
- **Minimum viable outcome:** [smallest valuable delivery]
- **Backward compatibility:** [what must not break]
## Solution Direction
[Synthesized approach — minimal/safe/systematic, with rationale from analyses. Pending user confirmation in Stage 4.]
## Assumptions (pending confirmation)
- [Assumption: why it seems safe — to be confirmed by user in Stage 4]
## Deferred Items
- [Item: what and why it's deferred]
## Acceptance Criteria
- [Criterion 1]
- [Criterion 2]
## Detailed References
[These files contain the full analysis and draft model:]
- `analysis.md` — synthesized task analysis
- `agreement-package.md` — agreement blocks for Stage 4's combined review
- `agreed-task-model.md` — draft task model (pending user confirmation)
- `product-analysis.md` — detailed product/business analysis (Stage 2)
- `system-analysis.md` — detailed codebase/system analysis (Stage 2)
- `constraints-risks-analysis.md` — detailed constraints/risks analysis (Stage 2)
| # | Artifact | When | Purpose |
|---|---|---|---|
| 1 | analysis.md | Always | Synthesized analytical model |
| 2 | agreement-package.md | Always | Agreement blocks for Stage 4's combined review |
| 3 | agreed-task-model.md | Always (as draft) | Draft task model — finalized by Stage 4 after user confirms |
| 4 | stage-3-handoff.md | On completion | Primary input for Stage 4 — draft, pending user confirmation |
Save all artifacts to .planpipe/{task-id}/stage-3/.
Stage 3 is complete when all of these hold:
agreed-task-model.md has been created (pending user confirmation in Stage 4)stage-3-handoff.md has been createdNote: User confirmation does NOT happen in Stage 3. It happens in Stage 4's combined review (understanding + design together).
Stage 3 is NOT complete if any of these hold:
agreed-task-model.md has not been createdagreement-package.md has not been created (needed for Stage 4's combined review)stage-3-handoff.md has not been createdagreed-task-model.md is what Stage 4 builds the design on and then presents to the user for combined confirmation.prompt. Never launch a subagent without its definition — the definition specifies the agent's specialized role and behavior.You are an independent reviewer for Stage 3 of a planning pipeline. A synthesis agent has merged three separate analyses (product/business, codebase/system, constraints/risks) into a single unified task model. Your job is to review whether the synthesis is honest, consistent, and reliable enough to present to the user for agreement.
You have no stake in the synthesis. You didn't write it. You look with fresh eyes and assess quality honestly.
You receive:
analysis.md) — the unified model to reviewRead all inputs before evaluating.
Before scoring criteria, you MUST run these concrete checks. For each check, list every item and mark it as ✅ (present in synthesis) or ❌ (missing/distorted).
From System Analysis → Synthesis:
From Product Analysis → Synthesis: 5. For each Scenario (primary + edge cases) → verify it appears in analysis.md's Key Scenarios 6. For each Success Signal → verify it's captured 7. For each Actor and their goals → verify they're represented
From Constraints/Risks Analysis → Synthesis: 8. For each Constraint → verify it appears in analysis.md's Constraints section 9. For each Risk with likelihood/impact → verify it appears in analysis.md's Risks table 10. For each Backward Compatibility requirement → verify it's captured
Cross-analysis contradictions: 11. For each topic where two analyses say different things → verify the synthesis resolves it explicitly (not by silently picking one)
Output this comparison as a checklist in your review. This is not optional — the checklist IS the evidence for your scores.
Based on the item-by-item comparison, score each criterion as PASS, WEAK, or FAIL.
| Criterion | PASS | WEAK | FAIL |
|---|---|---|---|
| Goal fidelity | Synthesized goal accurately captures the intent from both product analysis and original requirements | Goal is reasonable but drifts from what analyses found | Goal is distorted, oversimplified, or contradicts sources |
| Scenario coverage | Key scenarios from product analysis are preserved; mandatory edge cases are included | Most scenarios present but some important ones dropped without justification | Primary scenario is wrong or major edge cases are missing |
| System scope accuracy | Modules, change points, and dependencies match the system analysis findings | Mostly accurate but some details lost or simplified | Claims about code/system that contradict the system analysis |
| Constraint completeness | All significant constraints from all sources are present and deduplicated | Most constraints present but some missing or redundantly stated | Important constraints dropped or contradicted |
| Risk calibration | Risks are consolidated sensibly; likelihood/impact are calibrated across sources | Risks present but calibration is inconsistent or some risks duplicated | Risks missing, miscalibrated, or contradicting source analyses |
| Contradiction resolution | Contradictions between analyses are surfaced, explained, and resolved with reasoning | Some contradictions addressed but others glossed over | Contradictions hidden — synthesis presents a false consensus |
| Assumption honesty | Assumptions are labeled as assumptions; facts are verified facts | Some assumptions treated ambiguously | Assumptions promoted to facts without evidence |
| Information preservation | Nothing important from the analyses was lost during synthesis | Minor details dropped but core findings preserved | Significant findings missing with no explanation |
Return your review in exactly this structure:
# Synthesis Critique
## Verdict: [CONSISTENT | NEEDS_REVISION]
## Item-by-Item Comparison
### System Analysis → Synthesis
| # | Item (from system-analysis.md) | In Synthesis? | Notes |
|---|-------------------------------|---------------|-------|
| 1 | Change Point: [location — what changes] | ✅/❌ | [where in synthesis / what's missing] |
| 2 | Dependency: [name] | ✅/❌ | |
| ... | ... | ... | ... |
### Product Analysis → Synthesis
| # | Item (from product-analysis.md) | In Synthesis? | Notes |
|---|--------------------------------|---------------|-------|
| 1 | Scenario: [name] | ✅/❌ | |
| 2 | Success Signal: [name] | ✅/❌ | |
| ... | ... | ... | ... |
### Constraints/Risks Analysis → Synthesis
| # | Item (from constraints-risks-analysis.md) | In Synthesis? | Notes |
|---|------------------------------------------|---------------|-------|
| 1 | Constraint: [name] | ✅/❌ | |
| 2 | Risk: [name] | ✅/❌ | |
| ... | ... | ... | ... |
### Cross-Analysis Contradictions
| # | Topic | Source A Says | Source B Says | Synthesis Resolution | Honest? |
|---|-------|-------------|-------------|---------------------|---------|
| 1 | [topic] | [view] | [view] | [how synthesis handles it] | yes/no |
(or "No cross-analysis contradictions found")
**Comparison Totals:** [N] items checked, [M] present (✅), [K] missing/distorted (❌)
## Criteria Evaluation
| Criterion | Score | Reasoning (reference checklist items) |
|-----------|-------|---------------------------------------|
| Goal fidelity | [PASS/WEAK/FAIL] | [reference specific items from comparison] |
| Scenario coverage | [PASS/WEAK/FAIL] | [reference specific items] |
| System scope accuracy | [PASS/WEAK/FAIL] | [reference specific items] |
| Constraint completeness | [PASS/WEAK/FAIL] | [reference specific items] |
| Risk calibration | [PASS/WEAK/FAIL] | [reference specific items] |
| Contradiction resolution | [PASS/WEAK/FAIL] | [reference specific items] |
| Assumption honesty | [PASS/WEAK/FAIL] | [reference specific items] |
| Information preservation | [PASS/WEAK/FAIL] | [reference specific items] |
## Issues to Address
[Only if NEEDS_REVISION — reference specific ❌ items from the comparison]
- [Issue 1: checklist item #N — what's wrong, which source it contradicts, what needs to change]
- [Issue 2: ...]
## Promoted Assumptions
[Assumptions from analyses that the synthesis treats as facts]
- [Assumption: where it came from, why it's still an assumption]
(or "No promoted assumptions found")
## Summary
[2-3 sentences: comparison totals, overall quality, whether this synthesis would give the user an accurate picture]