Fast verification of already-completed implementations by counting deliverables, running tests, and spot-checking risk mitigations.
Lightweight, fast (~3 min) verification skill for tasks where implementation is already complete. Triggered when a task description includes an "Implementation Complete" section listing deliverable counts, test counts, and risk mitigations.
This is a read-only skill. It never modifies code — only confirms that claimed deliverables exist and pass.
The task description must contain (explicitly stated or inferable):
| Field | Example |
|---|---|
| Deliverable file patterns | `skills/evolution/*.py`, `tests/test_evolution_*.py` |
| Expected deliverable count | 14 files |
| Expected test count | 539 tests |
| Risk register | List of mitigations to spot-check |
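Extracting these fields can be sketched with a few regexes. The "Implementation Complete" excerpt below is hypothetical — real task descriptions may phrase the counts differently, so the patterns are assumptions, not a fixed format:

```python
import re

# Hypothetical "Implementation Complete" excerpt; the exact wording
# ("(14 files)", "539 tests") is an assumption for illustration.
task = """
Implementation Complete
Deliverables: skills/evolution/*.py, tests/test_evolution_*.py (14 files)
Tests: 539 tests
"""

# Glob patterns are the only tokens containing a literal '*'.
patterns = re.findall(r"[\w/]+\*[\w./*]*", task)
file_count = int(re.search(r"\((\d+) files\)", task).group(1))
test_count = int(re.search(r"(\d+) tests", task).group(1))
```

If a field cannot be parsed, fall back to reading the task description directly rather than guessing.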
1. Extract the expected values from the task description (deliverable patterns, file count, test count, risk register).
2. Glob for each deliverable pattern → count unique files → compare to the expected count. (Tool: Glob to find all files matching claimed patterns.)
3. Run `pytest <test_path> -q --tb=short` → parse the collected count and failures. (Tool: pytest targeting the relevant test directory/files.)
4. For each item in the risk register: Grep or Read to confirm the mitigation exists in source.
5. Produce the structured output contract (see below).
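Steps 2–4 above can be sketched as three small helpers. The function names and the `"N passed"` summary parsing are assumptions for illustration, not part of the skill's contract:

```python
import glob
import re
import subprocess

def count_deliverables(patterns):
    """Step 2: glob each pattern and count unique matching files."""
    return len({f for p in patterns for f in glob.glob(p)})

def run_tests(test_path, timeout=120):
    """Step 3: run pytest quietly; parse the passed count from the summary.
    Returns (passed_count, returncode)."""
    run = subprocess.run(
        ["pytest", test_path, "-q", "--tb=short"],
        capture_output=True, text=True, timeout=timeout,
    )
    m = re.search(r"(\d+) passed", run.stdout)
    return (int(m.group(1)) if m else 0), run.returncode

def mitigation_present(regex, source_files):
    """Step 4: spot-check that a claimed mitigation appears in source."""
    return any(re.search(regex, open(f).read()) for f in source_files)
```

Each helper is read-only: globbing, running tests, and grepping never modify the code under verification.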
| Scenario | Action |
|---|---|
| Glob returns 0 files | Check pattern is correct; report FAIL with pattern used |
| Pytest import error | Report the import error; do not mask as test failure |
| Pytest timeout (>120s) | Report timeout; re-run with -x for first-failure info |
| Risk pattern not found | Expand search to alternate file paths before marking FAIL |
| Task has no risk register | Skip Step 4; note in contract; lower confidence to medium |
| Ambiguous deliverable count | Use the higher count from task description; flag discrepancy |
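The timeout row can be handled with a simple retry wrapper. `run_with_retry` and its command-list interface are illustrative assumptions; the timeout value comes from the table above:

```python
import subprocess

def run_with_retry(cmd, timeout=120):
    """Per the edge-case table: if the first pytest run exceeds the
    timeout, re-run with -x so the first failure surfaces quickly
    instead of the timeout masking it."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return subprocess.run(cmd + ["-x"], capture_output=True, text=True, timeout=timeout)
```

A second `TimeoutExpired` from the retry should be reported as-is rather than suppressed, since a hung test suite is itself a FAIL-worthy finding.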
Every execution must end with:
## CONTRACT