Use when executing implementation plans with independent tasks in the current session
Execute a plan by giving each task a fresh context and following each task with a two-stage review: spec compliance first, then code quality.
Core principle: Fresh context per task + two-stage review (spec then quality) = high quality, fast iteration.
```dot
digraph decide {
  rankdir=TB;
  node [shape=diamond];
  Q1 [label="Have an\nimplementation plan?"];
  Q2 [label="Tasks mostly\nindependent?"];
  Q3 [label="Want review\nafter EACH task?"];
  node [shape=box, style=rounded];
  SDD [label="Use Subagent-Driven Development", color=green];
  EP [label="Use /executing-plans", color=blue];
  OTHER [label="Plan first with /writing-plans", color=orange];
  Q1 -> OTHER [label="no"];
  Q1 -> Q2 [label="yes"];
  Q2 -> EP [label="no\n(sequential)"];
  Q2 -> Q3 [label="yes"];
  Q3 -> EP [label="no\n(batch review ok)"];
  Q3 -> SDD [label="yes"];
}
```
Use when:
vs. /executing-plans:
```dot
digraph process {
  rankdir=TB;
  node [shape=box, style=rounded];
  setup [label="1. Set Up\n(branch, read plan)"];
  dispatch [label="2. Dispatch Implementer\n(fresh context, single task)"];
  spec [label="3. Spec Compliance Review\n(matches plan?)"];
  quality [label="4. Code Quality Review\n(tests, clean code?)"];
  next [label="5. Next Task"];
  final [label="6. Final Review\n(full suite, integration)"];
  setup -> dispatch;
  dispatch -> spec;
  spec -> dispatch [label="spec gaps\n(fix first)"];
  spec -> quality [label="pass"];
  quality -> dispatch [label="quality issues\n(fix first)"];
  quality -> next [label="pass"];
  next -> dispatch [label="more tasks"];
  next -> final [label="all done"];
}
```
```bash
# Ensure you're on a feature branch, not main
git checkout -b feature/<name>
```
Read the plan once. Extract all tasks and their full text. Note any shared context that tasks depend on.
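Task extraction can be as mechanical as splitting the plan on its task headings. A minimal sketch, assuming a hypothetical `## Task N:` heading convention (this skill does not mandate any particular plan format):

```typescript
// Split a plan document into per-task chunks so each implementer
// receives only its own task text (plus any shared context you add).
interface PlanTask {
  title: string;
  body: string;
}

function extractTasks(plan: string): PlanTask[] {
  // Assumes tasks are introduced by "## Task N:" headings (illustrative only).
  const sections = plan.split(/^## Task \d+: /m).slice(1);
  return sections.map((s) => {
    const [firstLine, ...rest] = s.split("\n");
    return { title: firstLine.trim(), body: rest.join("\n").trim() };
  });
}

const plan = [
  "# Auth plan",
  "## Task 1: Create User model",
  "Use bcrypt for hashing.",
  "## Task 2: JWT generation",
  "Tokens expire after 1 hour.",
].join("\n");

console.log(extractTasks(plan).map((t) => t.title));
// → [ 'Create User model', 'JWT generation' ]
```

The point is to do this once up front: the orchestrator keeps the full plan, while each dispatched implementer sees only its own task.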
a. Dispatch implementer with fresh context
Start a new Copilot chat (or use /new). Provide:
b. Spec compliance review
Before moving on, check: does the implementation match what the plan specified?
Use /requesting-code-review and check specifically:
If spec gaps found → fix them before continuing.
c. Code quality review
After spec compliance passes:
If quality issues found → fix them, re-review, then continue.
d. Mark complete and move on
Only when both reviews pass → move to next task.
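The per-task loop in steps a–d can be sketched as follows. The dispatch and review functions here are stubs standing in for the real agent dispatch and review steps; the first spec review deliberately fails once to show the redispatch path:

```typescript
type ReviewResult = { pass: boolean; issues: string[] };

// Stubs for illustration only — not a real dispatch or review implementation.
let attempts = 0;
function dispatchImplementer(task: string, feedback: string[]): void {
  attempts++; // a real orchestrator would start a fresh-context agent here
}
function specReview(task: string): ReviewResult {
  return attempts < 2
    ? { pass: false, issues: ["missing field from plan"] } // spec gap → redispatch
    : { pass: true, issues: [] };
}
function qualityReview(task: string): ReviewResult {
  return { pass: true, issues: [] };
}

// The two-stage gate: spec compliance first, then code quality.
function runTask(task: string): number {
  let feedback: string[] = [];
  for (;;) {
    dispatchImplementer(task, feedback);
    const spec = specReview(task);
    if (!spec.pass) { feedback = spec.issues; continue; }    // fix spec gaps first
    const quality = qualityReview(task);
    if (!quality.pass) { feedback = quality.issues; continue; }
    return attempts; // both reviews passed → move on
  }
}

console.log(runTask("Create User model")); // → 2 (one redispatch after a spec gap)
```

Note the ordering: quality review never runs until spec compliance passes, so reviewers are never polishing code that implements the wrong thing.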
When dispatching implementer agents, consider the task complexity:
Match the context you provide to the task complexity. Over-constraining complex tasks leads to poor solutions. Under-constraining simple tasks leads to scope creep.
When an implementer completes (or gets stuck), they should report one of:

- **DONE** — all requirements met, tests pass. Proceed to spec compliance review.
- **DONE_WITH_CONCERNS** — implementation complete, but the implementer flagged potential issues:
- Implementer couldn't proceed — missing information:
- Implementer hit an obstacle that prevents completion:
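These outcomes can be modeled as an explicit status the orchestrator branches on. `DONE` and `DONE_WITH_CONCERNS` appear in the worked example below; `BLOCKED` and `FAILED` are hypothetical names for the other two cases, chosen here for illustration:

```typescript
// Implementer report statuses. DONE and DONE_WITH_CONCERNS come from the
// worked example; BLOCKED and FAILED are assumed names for the other cases.
type ImplementerStatus = "DONE" | "DONE_WITH_CONCERNS" | "BLOCKED" | "FAILED";

interface ImplementerReport {
  status: ImplementerStatus;
  notes?: string; // concerns raised, info missing, or the obstacle hit
}

// Decide the orchestrator's next action from a report.
function nextAction(report: ImplementerReport): string {
  switch (report.status) {
    case "DONE":
      return "spec-review"; // proceed straight to spec compliance review
    case "DONE_WITH_CONCERNS":
      return `spec-review (flag: ${report.notes})`; // review, weighing the concern
    case "BLOCKED":
      return "provide-missing-info"; // answer the questions, then redispatch
    case "FAILED":
      return "debug-or-revise-plan"; // investigate before redispatching
  }
}

console.log(nextAction({ status: "DONE" })); // → "spec-review"
```

Making the status explicit keeps the orchestrator's decision deterministic instead of relying on a free-form prose report.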
After all tasks complete:
/finishing-a-development-branch to merge, create PR, or clean up

```
Plan: "Add user authentication with JWT"

Tasks:
1. Create User model with password hashing
2. Add JWT token generation/validation
3. Create login/register endpoints
4. Add auth middleware

--- Task 1 ---
Dispatch: "Implement User model with password hashing. Use bcrypt.
Follow TDD. Only modify src/models/ and tests/models/."
Result: DONE
Spec review: ✅ All fields present, password hashed, tests pass
Quality review: ✅ Clean code, real tests, no dead code
→ Move to Task 2

--- Task 2 ---
Dispatch: "Implement JWT generation/validation. User model is complete
(see src/models/user.ts for interface). Follow TDD."
Result: DONE_WITH_CONCERNS — "Token expiry is hardcoded to 1hr"
Spec review: ✅ Plan said 1hr, so hardcoded is correct for now
Quality review: ✅
→ Move to Task 3
...
```
Never:
If implementation fails:
If task reveals the plan is wrong:
Requires:
/writing-plans — Creates the plan this skill executes
/requesting-code-review — Review checklist after each task
/finishing-a-development-branch — Complete development after all tasks

Implementer tasks should use:

/test-driven-development — TDD for each task
/verification-before-completion — Evidence before claims
/systematic-debugging — When implementation hits unexpected failures