Implement a coding task end-to-end — decompose, delegate to subagents, produce documentation deliverables scaled by proportionality tier, and create a PR. Use when the user says "build this feature", "code this task", "work on PROJ-123", "implement this", or needs to go from a work item to a shipped PR with full documentation.
You are the orchestrator. Follow this workflow to implement a coding task end-to-end by delegating to subagents.
$ARGUMENTS
Check the repository state:

```bash
git branch --show-current
git status --short
git log --oneline -5
```

Update `.harness/working-context.md` during implementation to maintain cross-session continuity.
For full format details, see references/working-context.md in the _shared skill directory.
If .harness/working-context.md exists, add the current task to the "## Active Work" section:
- [WORK-ITEM-ID]: [title] — branch: [branch-name]
If the file does not exist, create it with the standard template (see shared reference) and add the active work entry.
Update `.harness/working-context.md`: this skill writes the "Active Work" and "Recent Decisions" sections.
Before proceeding, verify this task is ready for implementation.
1. Read enforcement config: check whether process enforcement is configured.
2. Role check: If `process_enforcement.roles` is configured, resolve the current user's role per enforcement.md (RBAC → Role Resolution). Check the permission matrix for the planned → in-progress transition. If the role is not permitted, include `G2_ROLE_FORBIDDEN` in the rejection. If `process_enforcement.roles` is absent, skip this check (all roles permitted).
3. Validate work item state: If `$ARGUMENTS` contains a work item ID (matches pattern `^[A-Z][A-Z0-9]+-\d+$`):
   a. Fetch the work item using the `fetch_item` operation (see `_shared/references/tracker-operations.md`)
   b. Verify the work item state is the "To Do" equivalent (per `status_map.todo` in config)
   c. If the item is already "In Progress" or beyond, note this and proceed (may be resuming work)
   d. If the item is "Done", warn: "This item is already completed."
4. Check dependencies: use the `check_dependencies` operation to query blocking items.
5. Verify input traceability: search the work item description for requirement references (pattern: `REQ-` followed by an identifier).
6. Structured rejection: when any condition in steps 2–5 fails, produce a rejection object per enforcement.md (Structured Rejection Schema). Each failed condition includes a rejection code (e.g., `G2_ROLE_FORBIDDEN`, `G2_DEPS_UNRESOLVED`, `G2_NO_REQ_LINK`), a remediation instruction, and a retry policy. Store the complete rejection object in session-state.json at `orchestrator.itemStates[id].lastGateResult`. Act on the result per enforcement.md (Enforcement Evaluation Algorithm, Step 10): in block mode, halt the workflow; in warn mode, prompt the user to continue or abort.
If all checks pass (or the user confirms past warnings), proceed to Step 1.
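A minimal sketch of the gate evaluation above. The ID pattern and the rejection codes come from this document; the field names and the retry value are illustrative, and the real schema lives in enforcement.md:

```python
import re

# Strict work item ID pattern used by the gate's state check.
WORK_ITEM_RE = re.compile(r"^[A-Z][A-Z0-9]+-\d+$")

def evaluate_gate(blocking_items, has_req_link):
    """Collect structured rejection entries; an empty list means the gate passes."""
    rejections = []
    if blocking_items:  # step 4: unresolved dependencies
        rejections.append({
            "code": "G2_DEPS_UNRESOLVED",
            "remediation": "Resolve blocking items first: " + ", ".join(blocking_items),
            "retry": "after-remediation",  # illustrative retry policy value
        })
    if not has_req_link:  # step 5: no requirement traceability
        rejections.append({
            "code": "G2_NO_REQ_LINK",
            "remediation": "Link a REQ- requirement reference in the item description",
            "retry": "after-remediation",
        })
    return rejections
```

In block mode a non-empty result halts the workflow; in warn mode it becomes a prompt to the user.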
Examine $ARGUMENTS to determine the task source:
If no arguments provided (empty `$ARGUMENTS`): ask the user what to implement, e.g. `/implement TH-123` or `/implement add input validation to the login form`.

If it looks like a work item ID (matches pattern `[A-Z]+-\d+`, e.g., TH-123):
- Validate it against `^[A-Z][A-Z0-9]+-\d+$` and fetch the item (see `_shared/references/tracker-operations.md`)
- Update `.harness/working-context.md` — add this task to "Active Work" (see Working Context section above)

If natural language (e.g., "add input validation to the login form"):
State transition: If process enforcement is enabled with auto_state_transitions:
- Transition the tracker item to in-progress (see `_shared/references/tracker-operations.md`)
- In session-state.json, set `orchestrator.itemStates.[ID].abstractState` to "in-progress", `trackerState` to the platform's in-progress state, `branch` to the current branch, and `lastTransition` to now

Before delegating any work:
Based on the task complexity (see Effort Scaling in CLAUDE.md), spawn subagents:
For each implementation subagent, include in the Task Brief: the base branch (`main`) and the commit format `TH-NNN: <type>: description`.

For each test subagent, include in the Task Brief:
After all subagents complete:
Before review, generate all required documentation artifacts based on the proportionality tier (see below):
- Software Design Document (SDD): write to `doc/design/` or update the existing SDD. Use the template in `doc/templates/sdd-template.md`.
- V&V Audit Report: run `/vv` after implementation to produce this. Ensure it is attached before marking Done.
- Diagrams: store in `doc/diagrams/` using Mermaid (preferred), PlantUML, or ASCII art.
- Engineering Journal: write to `doc/journal/` as `YYYY-MM-DD-<work-item-id>.md`. Use the template in `doc/templates/engineering-journal-template.md`.

The work item is not Done until every required deliverable for the determined tier is produced and attached. This gate exists because documentation debt compounds — missing an SDD now means the next engineer has no design context, and missing test results means nobody can verify quality after the fact.
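The journal filename convention can be sketched as follows (the helper name is illustrative):

```python
from datetime import date

def journal_path(work_item_id, on=None):
    """Build the journal path doc/journal/YYYY-MM-DD-<work-item-id>.md."""
    day = on or date.today()
    return f"doc/journal/{day.isoformat()}-{work_item_id}.md"
```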
Not all work items require all 8 artifacts. Scale requirements to scope:
| Tier | Required Artifacts | When |
|---|---|---|
| Minimal | Linter report, unit test results, regression test results | Bug fixes, small refactors, config changes (<50 lines changed) |
| Standard | Minimal + SDD + code coverage + V&V audit | Features, medium tasks, integrations (50-300 lines changed) |
| Comprehensive | All 8 artifacts | Architecture changes, security-sensitive work, new skills (>300 lines changed) |
Scope determination:
The reason for tiered requirements: a one-line config fix doesn't need an SDD and architecture diagrams, but a new authentication flow absolutely does. Proportionality keeps the documentation gate from becoming a tax on small changes while ensuring critical work gets thorough documentation.
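A sketch of how the tier could be derived from the thresholds in the table above. The function and flag names are illustrative, and a real determination may also weigh risk and novelty beyond diff size:

```python
def proportionality_tier(lines_changed, security_sensitive=False, architecture_change=False):
    """Map scope to a documentation tier using the table's thresholds."""
    # Security-sensitive and architecture work is always comprehensive,
    # regardless of diff size.
    if security_sensitive or architecture_change or lines_changed > 300:
        return "comprehensive"
    if lines_changed >= 50:
        return "standard"
    return "minimal"
```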
Output tracking: Update session-state.json orchestrator.itemStates.[ID].outputArtifacts with the list of produced deliverable file paths. This enables the /orchestrate complete gate to verify completeness.
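The output-tracking update could look like this. The `orchestrator.itemStates[<id>].outputArtifacts` path comes from this document; the helper name and merge behavior are illustrative:

```python
import json
from pathlib import Path

def record_artifacts(state_path, item_id, artifact_paths):
    """Merge produced deliverable paths into orchestrator.itemStates[<id>].outputArtifacts."""
    p = Path(state_path)
    state = json.loads(p.read_text()) if p.exists() else {}
    item = (state.setdefault("orchestrator", {})
                 .setdefault("itemStates", {})
                 .setdefault(item_id, {}))
    # Deduplicate so re-runs are idempotent.
    merged = sorted(set(item.get("outputArtifacts", [])) | set(artifact_paths))
    item["outputArtifacts"] = merged
    p.write_text(json.dumps(state, indent=2))
    return merged
```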
Invoke `/pr` to handle the review and PR creation. The `/pr` skill handles all review logic — do not duplicate it here.

State transition: If process enforcement is enabled with auto_state_transitions, set `orchestrator.itemStates.[ID].abstractState` to "in-review" and `prNumber` to the PR number in session-state.json.

Report completion using this template:

## Task Complete
- **Branch**: [feature-branch-name]
- **PR**: [PR link or number from /pr output]
- **Tests**: [pass/fail count]
- **Work Item**: [work item ID] updated with implementation notes
### Changes Made
- [file1]: [what changed]
- [file2]: [what changed]
### Review & PR
- See `/pr` output above for full review summary and PR link
### Documentation Deliverables
| Deliverable | Status | Notes |
|-------------|--------|-------|
| Software Design Document | PRODUCED/SKIPPED | [location or "N/A — minimal tier"] |
| Linter Report | PRODUCED/SKIPPED | [zero violations / N exceptions documented] |
| Code Coverage | PRODUCED/SKIPPED | [N% — meets/below threshold] |
| Unit Test Results | PRODUCED | [N passed, N failed] |
| Regression Test Results | PRODUCED | [N passed, N regressions] |
| V&V Audit Report | PRODUCED/SKIPPED | [attached / "N/A — minimal tier"] |
| Diagrams | PRODUCED/SKIPPED | [list each or "N/A — minimal/standard tier"] |
| Engineering Journal | PRODUCED/SKIPPED | [doc/journal/YYYY-MM-DD-PROJ-NNN.md or "N/A"] |
| **Proportionality Tier** | [MINIMAL/STANDARD/COMPREHENSIVE] | [rationale] |
### Working Context
- Active Work: [updated — item removed from active, decisions recorded]
For a feature:
1. [Explore] Analyze codebase for existing patterns and affected files
2. [Plan] Define implementation approach and file changes
3. [Implement] Write the feature code (subagent)
4. [Test] Write unit tests with 100% coverage (subagent)
5. [Document] Produce documentation deliverables per proportionality tier
6. [/pr] Review, fix blocking issues, create PR
For a bug fix:
1. [Explore] Reproduce the bug and identify root cause
2. [Test] Write a failing test that demonstrates the bug (subagent)
3. [Implement] Fix the bug — test should now pass (subagent)
4. [Verify] Run full test suite for regressions (subagent)
5. [Document] Produce documentation deliverables (minimal tier unless security-sensitive)
6. [/pr] Review and create PR
For a refactor:
1. [Explore] Map all usages of the code being refactored
2. [Test] Ensure existing tests cover current behavior (subagent)
3. [Implement] Refactor with tests continuously passing (subagent)
4. [Verify] All existing tests still pass (subagent)
5. [Document] Produce documentation deliverables per proportionality tier
6. [/pr] Review and create PR
Tracker MCP unavailable: If a work item ID was provided but the tracker MCP server is not configured or unavailable, inform the user and suggest providing the task as natural language instead. See _shared/references/tracker-operations.md Error Handling for degraded-mode behavior.
Subagent failure: If a subagent fails or gets stuck, spawn a replacement with more specific instructions or report the blocker to the user.
No test framework detected: The implementer subagent should identify the appropriate test framework from the project. If none exists, it sets one up as part of the implementation.