Final gate before PR. Runs the full test suite to catch regressions, then reviews code against the plan and project standards. Use this skill when: (1) TDD is complete and the branch needs validation before PR, (2) the user says "review" or "ready for PR", (3) multiple TDD runs have completed and the branch needs a final check.
Final gate between /tdd completion and PR creation. Phase 1 ensures the suite is
healthy. Phase 2 reviews code against the plan and standards. If Phase 2 fixes break
tests, loop back to Phase 1.
Handles output from one or more /tdd runs on the same branch.
Prerequisites:
- /tdd has completed on this branch.
- `.plans/issue-{issue-number}/plan.md` exists (every issue gets a plan).

Verify infrastructure before running tests:
```shell
# Verify backend venv exists
make test-backend ARGS="--version"

# Verify frontend node_modules
ls frontend/node_modules/.bin/vitest
```
If the venv or node_modules is missing, run `make setup` first.
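The two checks above can be folded into a single guard; a minimal sketch, where `backend/.venv` is an assumed location for the backend virtualenv (the real path may differ in this repo):

```shell
# Sketch: gate the test run on infrastructure being present.
# backend/.venv and frontend/node_modules/.bin/vitest are assumed paths.
check_infra() {
  root="${1:-.}"
  [ -d "$root/backend/.venv" ] && [ -x "$root/frontend/node_modules/.bin/vitest" ]
}

if check_infra; then
  echo "infra ok"
else
  echo "infra missing: run make setup"
fi
```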
This is the single authoritative cross-layer test run. TDD sub-agents only verify their own layer — this is where cross-layer regressions surface.
```shell
make test-backend
make test-frontend
make test-e2e
```
Collect all failures.
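The run-and-collect step can be sketched as a loop over the three suites; here `run_suite` is a self-contained stand-in for `make <target>`, with one suite simulated as failing:

```shell
# Sketch: run each suite and collect the failing ones.
# run_suite simulates `make <target>`; test-frontend is made to fail for the demo.
run_suite() {
  case "$1" in
    test-frontend) return 1 ;;
    *) return 0 ;;
  esac
}

failures=""
for target in test-backend test-frontend test-e2e; do
  run_suite "$target" || failures="$failures $target"
done
echo "failed suites:$failures"
```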
Separate failures into:
| Category | Meaning | Action |
|---|---|---|
| New feature tests failing | Bug in the feature | Flag — TDD didn't complete cleanly |
| Existing tests failing | Cascade from new behavior | Present to user for decision |
All failures are caused by this branch — main is always green.
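One way to make the split mechanical is to key a failing test on whether its file was touched on this branch; a sketch with illustrative file names (the real list comes from `git diff main...HEAD --name-only`):

```shell
# Sketch: split failing tests into new-feature vs pre-existing failures.
# File names are illustrative, not from any real repo.
changed_files="tests/test_recipe_rating.py src/recipes.py"
failing_tests="tests/test_recipe_rating.py tests/test_recipe_list.py"

for t in $failing_tests; do
  case " $changed_files " in
    *" $t "*) echo "NEW-FEATURE FAILURE: $t" ;;
    *)        echo "EXISTING FAILURE:    $t" ;;
  esac
done
```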
For each existing test failure caused by the new feature, use AskUserQuestion:
Present:
Options:
Group related failures by topic when possible (e.g., "These 4 tests all assert the old response format for GET /recipes").
Context management: Spawn a general-purpose sub-agent (via Task tool) when a fix
requires reading multiple files or making coordinated changes across files. Do simple
single-file edits (update one assertion, delete one test) inline.
When spawning a sub-agent, provide:
For each decision:
Evaluate code changes for database impact:
If database changes are detected, present to user via AskUserQuestion:
If no database changes detected, skip this step.
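A cheap first pass for the detection step is to scan the changed paths for database-related directories; the `migrations/` and `models/` patterns below are assumptions about the repo layout, and the file list is illustrative:

```shell
# Sketch: flag database impact from the changed-file list.
# Path patterns (migrations/, models/) are assumptions about this project's layout.
changed="backend/models/recipe.py backend/migrations/0042_add_rating.py frontend/src/App.tsx"

db_changes=""
for f in $changed; do
  case "$f" in
    *migrations/*|*models/*) db_changes="$db_changes $f" ;;
  esac
done

if [ -n "$db_changes" ]; then
  echo "DB impact detected:$db_changes"
else
  echo "no DB impact"
fi
```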
After all decisions are executed:
```shell
make test-backend
make test-frontend
make test-e2e
```
Gather the review inputs:
- The plan: `.plans/issue-{issue-number}/plan.md`
- The full diff: `git diff main...HEAD`
- The changed-file list: `git diff main...HEAD --name-only`

Spawn 4 independent review sub-agents (Task tool, `subagent_type="general-purpose"`):
Plan conformance reviewer:
Compare this implementation against its plan.
Plan:
[plan.md content]
Changed files:
[file list]
For each feature in the plan, evaluate:
- Is it fully implemented? (all acceptance criteria met)
- Is anything missing?
- Was anything added that isn't in the plan?
- Does the approach match what was specified?
Report:
- IMPLEMENTED: criteria met as planned
- DEVIATED: implemented differently than planned (explain how)
- MISSING: planned but not implemented
- EXTRA: implemented but not in plan
Standards conformance reviewer:
Review these code changes against project standards.
Changed files:
[file list]
Read the project's CLAUDE.md and relevant subdirectory CLAUDE.md files.
Check each changed file against the standards defined there.
Evaluate:
- Naming conventions followed?
- File organization correct?
- Patterns match established codebase conventions?
- Error handling consistent with project style?
- Test structure follows testing standards?
Report each violation with:
- File and line
- What standard is violated
- What it should be instead
Quality reviewer:
Review these code changes for quality issues.
Changed files:
[file list]
Read the changed files and evaluate:
- Dead code or unused imports introduced?
- Overly complex logic that could be simplified?
- Missing error handling at system boundaries?
- Hardcoded values that should be configurable?
- Security concerns (injection, XSS, exposed secrets)?
- Performance concerns (N+1 queries, unnecessary re-renders)?
Report only concrete issues, not style preferences.
Docs-update reviewer:
Review these code changes for documentation impact.
Changed files:
[file list]
Read ALL durable context files:
- CLAUDE.md (root)
- backend/CLAUDE.md
- frontend/CLAUDE.md
- e2e/CLAUDE.md
- All files in docs/
For each changed file, evaluate:
- Does it introduce a new pattern, convention, or architectural decision
that should be recorded in a CLAUDE.md or docs/ file?
- Does it change existing behavior that is currently documented?
- Does it add a new dependency, command, or configuration that should
be mentioned in setup/quick-start docs?
Report:
- STALE: existing documentation that no longer matches the code
- MISSING: new patterns or conventions that should be documented
- OK: no documentation impact
Only flag concrete gaps. Do not suggest adding docs for trivial changes.
Summarize all reviewer findings grouped by severity:
Must fix (blocks PR):
Should fix (improves quality):
Note (informational):
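The severity grouping can be sketched as a simple filter pass; the `severity|message` line format here is an assumption for the example, not a format the reviewers are required to emit:

```shell
# Sketch: group reviewer findings by severity for the summary.
# "severity|message" is an assumed finding format; messages are illustrative.
findings="must|hardcoded API key in config.py
should|unused import in utils.py
note|consider extracting a helper"

for level in must should note; do
  echo "== $level =="
  printf '%s\n' "$findings" | grep "^$level|" | cut -d'|' -f2-
done
```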
For each finding, use AskUserQuestion:
Context management: Spawn a general-purpose sub-agent (via Task tool) when a fix
touches multiple files or requires reading surrounding code. Do simple single-file
cosmetic edits inline.
For "fix now" decisions:
Check for .plans/issue-{issue-number}/friction.md. If it exists, /tdd recorded
friction during execution that needs resolution here.
For each friction entry:
- A task left in_progress after 3 failures: diagnose the root cause, fix the
  underlying issue, and complete the task.
- Otherwise: present AskUserQuestion with resolution options.

After all friction is resolved, delete the friction file.
```shell
make lint
make typecheck
```
Present final state:
Tell the user: "Review complete. Ready for PR."