Verifies implementation completion by running acceptance tests and triggers a retry loop on failure.
```bash
npm test -- --testPathPattern="{test files}"
```

```yaml
completionStatus:
  total: 5
  passed: 4
  failed: 1
  allPassed: false
  failedTests:
    - id: T2
      type: Unit # or Integration
      file: ErrorHandler.test.tsx
      error: "Expected error message not shown"
      failedPhase: "Phase 1" # Determines where to retry
      recommendation: "Fix ErrorHandler.tsx, then re-run Phase 1"
```
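A minimal sketch of how the summary fields above could be derived from raw test results. The types and the `summarize` helper are illustrative assumptions, not part of the spec:

```typescript
// Hypothetical types mirroring the completionStatus schema above.
interface FailedTest {
  id: string;
  type: "Unit" | "Integration";
  file: string;
  error: string;
  failedPhase: string;
  recommendation: string;
}

interface CompletionStatus {
  total: number;
  passed: number;
  failed: number;
  allPassed: boolean;
  failedTests: FailedTest[];
}

// Build a completionStatus summary from raw results (illustrative only).
function summarize(
  results: { passed: boolean; detail?: FailedTest }[]
): CompletionStatus {
  const failedTests = results.filter((r) => !r.passed).map((r) => r.detail!);
  return {
    total: results.length,
    passed: results.filter((r) => r.passed).length,
    failed: failedTests.length,
    allPassed: failedTests.length === 0,
    failedTests,
  };
}
```

Note that `allPassed` is derived, never set independently, so it cannot drift out of sync with the counts.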
When `allPassed: false`:

1. Identify the failed phase based on the test type
2. Return to the failed phase (NOT test writing)
3. Pass the `failedTests` info to implementation-agent

Retry limits:
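The phase-routing step above can be sketched as follows. The phase names and their ordering are assumptions for illustration; each failed test already carries its `failedPhase`, so routing reduces to picking the earliest affected phase:

```typescript
// Assumed phase ordering -- adjust to the project's actual phases.
const PHASE_ORDER = ["Phase 1", "Phase 2", "Phase 3"];

// Given the failedTests from completionStatus, return the earliest
// phase to retry, or null when everything passed.
function phaseToRetry(failedTests: { failedPhase: string }[]): string | null {
  if (failedTests.length === 0) return null; // allPassed: proceed
  const indices = failedTests.map((t) => PHASE_ORDER.indexOf(t.failedPhase));
  return PHASE_ORDER[Math.min(...indices)];
}
```

Retrying from the earliest failed phase ensures later phases are rebuilt on top of the fix rather than patched around it.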
```bash
# Run specific tests
npm test -- --testPathPattern="batch.test|ErrorHandler.test"

# Check coverage (optional)
npm test -- --coverage --testPathPattern="..."
```
After implementation, compare results against context.md requirements to verify fulfillment.
According to the "How to write a good spec for AI agents" guide, having the AI verify its own work against the spec is a powerful pattern: it catches missing items before the tests run.
At completion, verify the following:
"After implementation, compare your results against the requirements in context.md and verify each item is fulfilled. If any requirements are not met, list them."
```yaml
selfAuditResult:
  # Requirements fulfillment
  requirementsMet:
    - "[REQ-1] User query API ✅"
    - "[REQ-2] Error handling ✅"
  requirementsNotMet:
    - "[REQ-3] Pagination ❌ (not implemented)"

  # 3-tier boundary check
  boundaryCheck:
    neverDoViolations: [] # Critical violations (halt if any)
    askFirstItems: []     # Items needing approval
    alwaysDoCompleted:    # Required actions
      - "lint executed"
      - "tests passed"

  # Overall judgment
  readyForTest: true | false
  blockers: # Blocking reasons if false
    - "REQ-3 not implemented"
```
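The overall judgment can be sketched as a small gate over this schema. The `evaluate` function and its return labels are illustrative assumptions; the precedence (violations halt first, unmet requirements block next) follows the rules in this section:

```typescript
// Subset of the selfAuditResult schema needed for the gate (assumed shape).
interface SelfAuditResult {
  requirementsNotMet: string[];
  boundaryCheck: {
    neverDoViolations: string[];
    askFirstItems: string[];
  };
}

// Decide the next step: halt on any never-do violation, fix the
// implementation while requirements remain unmet, otherwise run tests.
function evaluate(audit: SelfAuditResult): "halt" | "fix" | "test" {
  if (audit.boundaryCheck.neverDoViolations.length > 0) return "halt"; // report to user
  if (audit.requirementsNotMet.length > 0) return "fix"; // readyForTest: false
  return "test"; // readyForTest: true
}
```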
```
Implementation Phase Complete
            ↓
[Self-Audit] Compare against context.md requirements
            ↓
      readyForTest?
       ↓        ↓
     true     false
       ↓        ↓
  Run tests   Fix implementation and retry
```
If any `neverDoViolations` exist, halt immediately and report to the user.