Use when executing implementation plans with independent tasks in the current session - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates
Execute plan by dispatching fresh subagent per task, with code review after each.
Core principle: Fresh subagent per task + review between tasks = high quality, fast iteration
Autonomy principle: Execute ALL tasks without stopping for user input. Only stop on 3 consecutive errors OR all tasks complete. Commits, reviews, and fixes are subagent responsibilities — orchestrator never pauses to ask permission.
Orchestrator identity: You are a DISPATCHER, not a researcher. You read TaskList, dispatch subagents, process their results, and loop. Subagents self-gather all context specified in the plan (task details, related files, external references). You NEVER read plan files, gather context, explore code, or check git state yourself.
vs. Executing Plans (parallel session):
When to use:
When NOT to use:
File Organization:
Results files live in the {{epic-or-user-story-folder}}/tasks/ subfolder. This applies to both epic-level (epic{{X}}-{{name}}/tasks/) and user-story-level (us{{X.Y}}-{{name}}/tasks/) implementations.
The orchestrator loop — repeat until all tasks complete or 3 consecutive errors:
consecutive_errors = 0
1. Read TaskList
2. Find next task: first in_progress, then first pending (not blocked)
3. If no tasks remain → go to Step 6 (Final Review)
4. Mark task in_progress via TaskUpdate
5. Dispatch implementation subagent (Step 2)
6. Dispatch code-reviewer subagent (Step 3)
7. Cleanup processes (Step 3a)
8. Check verdict:
- APPROVED → continue to step 9
- FIX REQUIRED → dispatch fix subagent (Step 4) → GOTO step 6 (re-review)
9. MANDATORY: Close GitHub issue (Step 5) [if task has issue number - DO NOT skip this]
10. Mark task completed via TaskUpdate ← ONLY after APPROVED
11. GOTO 1
CRITICAL: Task stays in_progress until code-reviewer returns APPROVED. Never mark complete on FIX REQUIRED.
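The loop above can be sketched in shell. This is an illustrative model only: the `dispatch_*` functions are hypothetical stubs standing in for Task tool calls, and the task count is made up.

```shell
# Hypothetical sketch of the orchestrator loop. dispatch_* stubs stand in
# for Task tool calls; "remaining" stands in for pending TaskList entries.
dispatch_impl()   { :; }              # Step 2: implementation subagent
dispatch_review() { echo APPROVED; }  # Step 3: code-reviewer (stubbed verdict)
dispatch_fix()    { :; }              # Step 4: fix subagent

consecutive_errors=0
remaining=3
completed=0
while [ "$remaining" -gt 0 ] && [ "$consecutive_errors" -lt 3 ]; do
  dispatch_impl
  verdict=$(dispatch_review)
  while [ "$verdict" = "FIX REQUIRED" ]; do   # re-review until APPROVED
    dispatch_fix
    verdict=$(dispatch_review)
  done
  completed=$((completed + 1))   # mark completed ONLY after APPROVED
  consecutive_errors=0           # reset on success
  remaining=$((remaining - 1))
done
echo "completed=$completed"
```

Note the inner loop: a FIX REQUIRED verdict never exits the task; it re-dispatches and re-reviews until APPROVED.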
Pre-flight check (once, before first loop):
If no TaskList exists, use the /decompose-plan skill to create it.
For each task:
Dispatch fresh subagent:
Use a specialized agent from .claude/agents/ when one fits the task; otherwise use general-purpose.
Model Selection:
| Task Type | Model | When to Use |
|---|---|---|
| Simple | haiku | Config updates, documentation, single-file changes, scaffolding |
| Standard | sonnet | Multi-file implementation, business logic, TDD, integration work |
| Complex | sonnet | Architectural changes, new components, debugging, performance work |
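The table above can be collapsed into a tiny helper, sketched here for illustration (the task-type labels are assumptions, not part of any real API):

```shell
# Sketch mapping the table's task types to models (labels illustrative)
pick_model() {
  case "$1" in
    simple)           echo haiku ;;   # config, docs, single-file changes
    standard|complex) echo sonnet ;;  # multi-file work, architecture, debugging
    *)                echo sonnet ;;  # unknown type: default to the safer choice
  esac
}
pick_model simple
pick_model complex
```

When in doubt, prefer sonnet: a wrong haiku pick costs a full fix cycle, which dwarfs the per-task model savings.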
Heuristics for orchestrator:
Task tool ({{agent}} | general-purpose):
subagent_type: {{agent}} | general-purpose
description: "Implement Task {{task-number}}: {{task name}}"
prompt: |
You are implementing Task {{task-number}} from {{plan-file-path}}.
**Self-gather context (orchestrator does NOT provide this):**
Extract your task from the plan:
```bash
jact extract header {{plan-file-path}} "{{task-header-name}}"
```
Then follow any additional context-gathering steps specified in the task (e.g., pulling GH issues, reading related files).
**GitHub Issue Context (if task has issue number):**
```bash
gh issue view {{issue-number}} --json title,body,labels --template '{{.title}}
{{.body}}'
```
Extract acceptance criteria from the issue body. These are your requirements.
Your job is to:
1. Navigate to and work in {{worktree directory | feature-branch if worktree missing}}
2. Implement exactly what the task specifies
3. Write tests (following TDD if task says to)
4. Verify implementation works
5. Run diagnostic verification (MANDATORY - see below)
6. Commit your work (include `Fixes: #{{issue-number}}` in footer if task has GH issue)
7. Clean up test processes (MANDATORY - see below)
8. Write results to file
9. Report back
CRITICAL - Test Process Cleanup (Step 7):
Before writing results, you MUST clean up any test processes you spawned:
```bash
# Check for running vitest processes (pgrep works in sandbox; ps does not)
pgrep -fl vitest
# If any found, kill them
pkill -f "vitest" || true
# Verify cleanup succeeded (should return nothing)
pgrep -fl vitest
```
NEVER skip this step. Orphaned test processes consume ~14GB memory each.
CRITICAL - Diagnostic Verification (Step 5):
Before committing, you MUST verify zero diagnostic errors.
For TypeScript projects:
```bash
npm run build -w {{workspace-package}}
```
If build fails, fix ALL errors before committing. Do NOT commit with TypeScript errors.
For non-TypeScript projects, use IDE diagnostics:
```
mcp__ide__getDiagnostics(uri: "file://{{changed-file}}")
```
NEVER skip this step. Tests pass at runtime but miss compile-time type errors.
Task 4 baseline: subagent committed 3 TS errors (TS2532, TS2339) that tests didn't catch.
**Rationalizations to reject:**
- "Tests pass so it's correct" → Tests don't catch type errors. Build does.
- "I'll fix types later" → Later = reviewer catches it = fix subagent = 2x cost.
- "Build is slow" → 5 seconds vs full fix cycle. Build every time.
CRITICAL - GitHub Issue Reference (Step 6):
If task has a GitHub issue number, include in commit footer:
```
Fixes: #{{issue-number}}
```
Place BEFORE the Claude attribution footer. This links the commit to the issue.
If no issue number, omit the Fixes footer.
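A sketch of the resulting commit message layout; the subject line and issue number below are illustrative, not from any real task:

```shell
# Illustrative commit message: the Fixes footer precedes the attribution line
msg=$(cat <<'EOF'
feat(parser): extract task headers from plan

Fixes: #42

🤖 Generated with Claude Code
EOF
)
# Confirm the footer sits above the attribution (grep -n shows its line number)
printf '%s\n' "$msg" | grep -n '^Fixes:'
```

GitHub only auto-closes the linked issue when the `Fixes:` line appears in the message body, so ordering relative to trailing attribution matters less than its presence, but keeping it first keeps footers scannable.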
MANDATORY: Use the `writing-for-token-optimized-and-ceo-scannable-content` skill when writing your results.
CRITICAL: Write your results to {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-dev-results.md with:
- Model used for implementation
- Task number and name
- What you implemented
- Tests written and test results
- Diagnostic verification results (build output or IDE diagnostics)
- Files changed
- Any issues encountered
- Commit SHA
Report: Summary + confirm results file written
model: {{haiku|sonnet}} # See Model Selection heuristics above
Subagent reports back with summary and results file location.
Dispatch code-reviewer subagent:
Use the code-reviewer agent type (defined in .claude/agents/code-reviewer.md).
Task tool (code-reviewer):
prompt: |
You are reviewing Task {{task-number}} implementation.
MANDATORY: Use the `writing-for-token-optimized-and-ceo-scannable-content` skill when writing your review.
CRITICAL: This is a task-level review. Be concise.
- Target 10-30 lines for approved tasks
- Target 30-80 lines for tasks with issues
**Self-gather context:**
1. Extract task from plan:
```bash
jact extract header {{plan-file-path}} "{{task-header-name}}"
```
2. Follow any additional context-gathering steps in the task (e.g., GH issues).
3. Fetch GitHub issue acceptance criteria (if task has issue number):
```bash
gh issue view {{issue-number}} --json title,body,labels --template '{{.title}}
{{.body}}'
```
Extract acceptance criteria — verify implementation against each one.
4. Read implementation results:
- Dev results: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-dev-results.md
Your job:
1. Read plan task to understand requirements
2. Read GH issue acceptance criteria (if issue number provided)
3. Read dev results to understand what was implemented
4. Review code changes (BASE_SHA to HEAD_SHA)
5. Verify implementation satisfies EACH acceptance criterion
6. Identify issues (BLOCKING/Critical/Important/Minor)
7. Clean up test processes (MANDATORY - see below)
8. Write concise review results
CRITICAL - Test Process Cleanup (Step 7):
Before writing results, you MUST clean up any test processes you spawned:
```bash
# Check for running vitest processes (pgrep works in sandbox; ps does not)
pgrep -fl vitest
# If any found, kill them
pkill -f "vitest" || true
# Verify cleanup succeeded (should return nothing)
pgrep -fl vitest
```
NEVER skip this step. Orphaned test processes consume ~14GB memory each.
CRITICAL: Write concise review to {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-review-results.md with:
- Model used for review
- Brief summary (1-2 sentences)
- Issues (only include categories with actual issues)
- Verdict: APPROVED or FIX REQUIRED
CRITICAL VERDICT RULE: If you found ANY issues (BLOCKING/Critical/Important/Minor), you MUST set verdict to FIX REQUIRED. NEVER approve tasks that have documented issues.
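The verdict rule reduces to a single predicate over issue counts. A minimal sketch (the counts below are illustrative):

```shell
# Sketch of the verdict rule: ANY issue at ANY severity forces FIX REQUIRED
blocking=0; critical=0; important=0; minor=1   # illustrative counts
total=$((blocking + critical + important + minor))
if [ "$total" -gt 0 ]; then
  verdict="FIX REQUIRED"   # even a single Minor issue blocks approval
else
  verdict="APPROVED"
fi
echo "$verdict"
```

There is deliberately no severity threshold: the fix subagent decides how to address Minor issues, but the reviewer never waves them through.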
CRITICAL - Acceptance Criteria Verification (if task has GH issue):
Include in review results:
**Acceptance Criteria Verification**
| Criterion | Status | Notes |
|-----------|--------|-------|
| AC1: [from issue] | ✅ PASS / ❌ FAIL | [brief] |
If ANY criterion fails → verdict MUST be FIX REQUIRED.
CRITICAL - New Issues from Review:
If you discover problems NOT in the original task scope, dispatch `github-assistant`
agent to create GH issues with proper labels:
Task tool (github-assistant):
description: "Create GH issue for [problem]"
prompt: |
Create a GitHub issue using the `managing-github-issues` skill.
Title: <type>(<scope>): <description>
Body: [Problem description]. Found during review of Task {{task-number}} (#{{issue-number}}).
Labels (REQUIRED):
- Type: bug / enhancement / tech-debt
- Component: component:<PascalCase>
- Priority: priority:low / priority:medium / priority:high
```bash
gh issue create \
  --title "<type>(<scope>): <description>" \
  --body "..." \
  --label "bug,component:MarkdownParser,priority:medium"
```
Record created issues in review results:
**New Issues Created:** #N — description
Keep it brief:
- Skip "Strengths" section for approved tasks (ZERO issues)
- Skip empty issue categories (don't write "Critical: None")
- No comprehensive analysis - this is task-level, not PR-level
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
Code reviewer returns: Summary + review results file location.
MANDATORY: After each task review, before proceeding to fixes.
Subagents may spawn vitest test processes (watch mode, UI mode) that don't cleanup automatically. Each orphaned process consumes ~14GB memory.
Check for orphaned processes:
```bash
# Check for running vitest processes (pgrep works in sandbox; ps does not)
pgrep -fl vitest
```
If vitest processes found:
```bash
# Kill vitest processes
pkill -f "vitest" || true
# Verify cleanup succeeded (should return nothing)
pgrep -fl vitest
```
VERIFICATION REQUIRED: Process list must be empty before proceeding to Step 4.
Common sources:
npm run test:watch instead of npm test
Red flags indicating you're about to skip this:
If BLOCKING issues found:
If Critical/Important/Minor issues found:
CRITICAL: BLOCKING issues MUST be resolved by app-tech-lead subagent. NEVER escalate to user.
Foundational principle: Architectural decisions require research, principle evaluation, and documentation. The app-tech-lead subagent provides this. Escalating to the user without context and recommendations breaks the workflow and wastes the CEO/user's attention on questions you could resolve through research yourself. Once app-tech-lead's research shows a MAJOR change is needed, then involve the user. Otherwise, pass the additional context to the fixing agent.
Code-reviewer may identify BLOCKING when:
When code-reviewer returns BLOCKING issues:
MANDATORY - Launch application-tech-lead subagent
You MUST launch application-tech-lead. This is NOT optional. This is NOT a suggestion.
DO NOT:
Why NO user escalation:
Launch application-tech-lead with this task:
Task tool (application-tech-lead):
description: "Resolve blocking architectural decision from Task N"
prompt: |
A code review identified a blocking architectural decision for Task {{task-number}}.
CRITICAL: Extract task context from plan using citation tool:
```bash
jact extract header {{plan-file-path}} "{{task-header-name}}"
```
Read context files:
- Dev results: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-dev-results.md
- Review results: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-review-results.md
BLOCKING ISSUE:
[paste full BLOCKING issue from code-reviewer]
Your job:
1. Read plan task to understand context
2. Read dev results to understand what was implemented
3. Read review results to understand the blocking issue
4. Read architecture documentation to understand context
5. Read PRD to understand requirements and scope
6. Research options using Perplexity with "{{query}} best practices 2025"
7. Evaluate options against architecture principles using evaluate-against-architecture-principles skill
8. Update implementation plan with specific choice
9. Write decision document
10. Report back
MANDATORY: Use the `writing-for-token-optimized-and-ceo-scannable-content` skill when writing your decision.
CRITICAL: Write your decision to {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-arch-decision.md with:
- Task number and name
- Decision date
- BLOCKING issue summary
- Options considered (2-3 options minimum)
- Evaluation against architecture principles
- Perplexity research findings
- Recommendation with clear rationale
- Plan update confirmation
Critical: Your decision must be grounded in architecture principles, not just "best practice" popularity.
After application-tech-lead returns:
MAJOR Change Criteria:
If MAJOR change:
If minor change (built-in APIs, internal refactoring, no new dependencies):
Then proceed to implementation:
Then proceed to section 4b for any remaining Critical/Important/Minor issues
If Critical/Important/Minor issues found:
Orchestrator Dynamic Decision (YOU):
Before dispatching fix agent, determine which files to include:
Include the arch decision file if task-{{task-number}}-arch-decision.md exists (BLOCKING was resolved).
Dispatch follow-up subagent with context files:
Task tool (general-purpose):
description: "Fix Task {{task-number}} issues from review"
prompt: |
You are fixing issues found in code review for Task {{task-number}}.
CRITICAL: Extract task context from plan using citation tool:
```bash
jact extract header {{plan-file-path}} "{{task-header-name}}"
```
Read context files:
- Plan task (via citation extraction above)
- Dev results: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-dev-results.md
- Review results: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-review-results.md
{{#if arch-decision-exists}}
- Arch decision: {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-arch-decision.md
{{/if}}
Issues to fix:
[paste issues from review-results.md]
Your job:
1. Read plan task to understand requirements
2. Read dev results to understand what was implemented
3. Read review results to understand issues
{{#if arch-decision-exists}}
4. Read arch decision to understand architectural choice
5. Implement fixes following architectural decision
{{else}}
4. Implement fixes for all issues
{{/if}}
Then:
- Verify fixes work (run tests)
- Commit your work
- Write fix results
CRITICAL: Write your results to {{epic-or-user-story-folder}}/tasks/task-{{task-number}}-fix-results.md with:
- Task number and name
- Issues addressed
- Changes made
- Test results
- Files changed
- Commit SHA
Report: Summary + confirm results file written
model: sonnet # Fix agents always use sonnet — haiku-model fixes have triggered repeat fix cycles
Close GitHub issue (if task has issue number):
Dispatch github-assistant agent:
Task tool (github-assistant):
description: "Close GH issue #{{issue-number}}"
prompt: |
Close GitHub issue #{{issue-number}} — Task {{task-number}} approved.
```bash
gh issue close {{issue-number}} --comment "Resolved via commit $(git rev-parse HEAD). Task {{task-number}} reviewed and approved."
```
Report: Confirm issue closed.
Mark task completed via TaskUpdate
Reset consecutive error counter to 0
Next task immediately — do NOT pause for user input
Error counter resets on each successful task. Increments on failed fix attempts:
- Task APPROVED on first review → consecutive_errors = 0, next task
- Fix approved on re-review → consecutive_errors = 0, next task
- Fix fails re-review → consecutive_errors += 1
- consecutive_errors >= 3 → STOP, report to user
| Excuse | Reality |
|---|---|
| "Let me check with user first" | You have the plan. Execute it. |
| "Ready to commit?" | Commits are subagent responsibility. Keep going. |
| "Should I proceed?" | Yes. Always. Until done or 3 errors. |
| "User might want to review" | Code reviewer subagent handles review. Keep going. |
| "This task was complex, pause" | Complexity is not a stop condition. Next task. |
| "Separation of concerns — issue closure is external" | Issue closure is Step 5 of the workflow. Not optional. |
| "Issue lifecycle is separate from task completion" | Step 5 explicitly closes issue BEFORE marking complete. |
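The consecutive-error rules in this section can be modeled as a small function; a sketch only, with illustrative event names:

```shell
# Sketch of the consecutive-error counter (event names are illustrative)
consecutive_errors=0
record_outcome() {
  case "$1" in
    approved)   consecutive_errors=0 ;;                           # reset on success
    fix_failed) consecutive_errors=$((consecutive_errors + 1)) ;; # failed fix attempt
  esac
  if [ "$consecutive_errors" -ge 3 ]; then echo STOP; else echo CONTINUE; fi
}
record_outcome fix_failed   # 1st failure
record_outcome fix_failed   # 2nd failure
record_outcome approved     # success resets the counter
record_outcome fix_failed
record_outcome fix_failed
record_outcome fix_failed   # 3rd CONSECUTIVE failure
```

The key property: failures must be consecutive. A single approved task between failures restarts the count, so long runs with occasional fix cycles never trip the stop condition.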
The orchestrator is a dispatcher. It reads TaskList and dispatches subagents. It does NOT gather context.
| Excuse | Reality |
|---|---|
| "Let me read the plan first" | Subagent extracts its own task via jact. |
| "Let me pull additional context" | Subagent self-gathers per plan instructions. |
| "Let me check what's been done" | TaskList status tells you. Subagent explores if needed. |
| "I need the BASE_SHA" | Subagent gets its own git state. |
| "Let me find the tasks folder" | Subagent discovers paths from plan context. |
| "I need to build context for the prompt" | The prompt template IS the context. Fill in placeholders only. |
| "Just a quick check" | Quick checks = 500+ tokens wasted. Subagent does it in its own context. |
What orchestrator knows (from TaskList + task description):
What orchestrator NEVER does:
All of these are subagent responsibilities. Orchestrator fills prompt template placeholders and dispatches.
After all tasks complete, dispatch final code-reviewer:
After final review passes:
You: I'm using Subagent-Driven Development to execute the task list.
[Read TaskList → Task 3 is next (in_progress)]
[TaskUpdate: Task 3 → in_progress]
[Dispatch implementation subagent with: plan path, task header, GH issue #1]
← Subagent self-gathers: extracts task from plan, pulls GH issue, implements, commits
→ Returns: "Implemented, wrote task-3-dev-results.md, SHA abc123"
[Dispatch code-reviewer with: plan path, task header, GH issue #1, results folder]
← Reviewer self-gathers: extracts task, pulls issue, reads dev results, reviews diff
→ Returns: "APPROVED, wrote task-3-review-results.md"
[Cleanup vitest processes]
[TaskUpdate: Task 3 → completed]
[Read TaskList → Task 4 is next (pending, now unblocked)]
[TaskUpdate: Task 4 → in_progress]
[Dispatch implementation subagent with: plan path, task header, GH issue #28]
→ Returns: "Implemented, wrote task-4-dev-results.md, SHA def456"
[Dispatch code-reviewer]
→ Returns: "FIX REQUIRED — Critical: missing characterization test baseline"
[Dispatch fix subagent with: plan path, task header, GH issue, review issues]
→ Returns: "Fixed, wrote task-4-fix-results.md, SHA ghi789"
[Dispatch code-reviewer for re-review]
→ Returns: "APPROVED"
[Cleanup processes]
[TaskUpdate: Task 4 → completed]
... [loop continues until all tasks complete] ...
[Dispatch final code-reviewer for entire implementation]
[Use finishing-a-development-branch skill]
Done!
Key pattern: Orchestrator only uses TaskList, TaskUpdate, Task (dispatch), and cleanup Bash. All context gathering happens inside subagents.
vs. Manual execution:
vs. Executing Plans:
Cost:
Never:
Process cleanup rationalizations to reject:
If subagent fails task:
Required workflow skills:
code-reviewer (.claude/agents/code-reviewer.md) - REQUIRED: Review after each task (see Step 3)
Subagents must use:
Alternative workflow: