Pre-completion verification — everything to check before calling fleet_task_complete. Prevents incomplete submissions.
Go through this checklist. Every item must be verified — not assumed. If an item fails, fix it before completing.
Before completing, verify each of the following:

- Re-read the `requirement_verbatim` field from `fleet_task_accept` and confirm every criterion has code addressing it.
- Consume all required contributions (`eng_contribution_check`); use `fleet_request_input` for any that are missing.
- All tests pass: `.venv/bin/python -m pytest fleet/tests/ -v --tb=short`
- Commit messages follow `type(scope): description [task:XXXXXXXX]`.

Check your task's `delivery_phase` against `config/phases.yaml`:
| Phase | Tests | Docs | Security |
|---|---|---|---|
| poc | Happy path | README | No secrets in code |
| mvp | Main flows + edges | Setup + usage | Auth + validation |
| staging | Comprehensive | Full docs | Dep audit, pen-test mindset |
| production | Complete | Everything | Certified, compliance-verified |
If your work doesn't meet the phase standard: either improve it or flag to PM that the phase may need reassessment.
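As a sketch, the phase standard above can be checked programmatically. The `PHASES` mapping here is a hypothetical stand-in mirroring the table, not the real schema of `config/phases.yaml`:

```python
# Hypothetical mirror of config/phases.yaml -- the real file may use
# different keys; this only encodes the table above.
PHASES = {
    "poc":        {"tests": "happy path", "docs": "README", "security": "no secrets in code"},
    "mvp":        {"tests": "main flows + edges", "docs": "setup + usage", "security": "auth + validation"},
    "staging":    {"tests": "comprehensive", "docs": "full docs", "security": "dep audit"},
    "production": {"tests": "complete", "docs": "everything", "security": "certified, compliance-verified"},
}

def phase_standard(phase: str) -> dict:
    """Return the expectations for a delivery phase, failing loudly on unknown phases."""
    if phase not in PHASES:
        raise ValueError(f"unknown delivery_phase: {phase!r}")
    return PHASES[phase]
```

Failing loudly on an unrecognized phase matches the guidance: an unknown or mismatched phase is a signal to flag the PM, not to guess.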
Your fleet_task_complete(summary=...) summary should answer three questions: what changed, why, and how to verify it.
Bad: "Implemented the feature"

Good: "Added JWT auth middleware in fleet/core/auth.py with RS256 signing per architect's design input. 5 tests in test_auth.py cover valid/invalid/expired tokens. Run: pytest fleet/tests/core/test_auth.py"
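One way to keep summaries honest is to force the three answers before composing the string. This helper is purely illustrative; `fleet_task_complete` takes a plain string, and these field names are not part of any fleet API:

```python
def build_summary(what: str, why: str, verify: str) -> str:
    """Compose a completion summary that answers what changed, why,
    and how to verify it. Illustrative only -- not a fleet API."""
    return f"{what} {why} Run: {verify}"

summary = build_summary(
    what="Added JWT auth middleware in fleet/core/auth.py with RS256 signing.",
    why="Implements the architect's design input; 5 tests cover valid/invalid/expired tokens.",
    verify="pytest fleet/tests/core/test_auth.py",
)
```

If you cannot fill all three arguments, the summary is not ready.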
Calling the tool fires a tree of 12+ operations.
You do NOT need to do any of these manually. The tree handles it. But fleet-ops WILL review your work against this checklist. Submitting incomplete work wastes a review cycle and gets rejected.
```
Did I read the verbatim requirement?
├── No  → read it first
└── Yes → Does every criterion have code addressing it?
    ├── No  → implement the missing parts
    └── Yes → Did I consume all required contributions?
        ├── No  → fleet_request_input for missing ones
        └── Yes → Do all tests pass?
            ├── No  → fix the tests
            └── Yes → Does the summary explain what/why/how?
                ├── No  → write a proper summary
                └── Yes → fleet_task_complete(summary)
```
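The tree above flattens into sequential gates: the first failing check names the next action. A minimal sketch, with predicate names that are hypothetical stand-ins for the real checks:

```python
def ready_to_complete(
    read_verbatim_requirement: bool,
    every_criterion_addressed: bool,
    contributions_consumed: bool,
    tests_pass: bool,
    summary_explains_what_why_how: bool,
) -> tuple:
    """Walk the checklist in order; return (ok, next action).

    The arguments mirror the decision tree's questions; the caller is
    responsible for actually performing each check.
    """
    gates = [
        (read_verbatim_requirement, "read the verbatim requirement first"),
        (every_criterion_addressed, "implement the missing parts"),
        (contributions_consumed, "fleet_request_input for missing contributions"),
        (tests_pass, "fix the tests"),
        (summary_explains_what_why_how, "write a proper summary"),
    ]
    for ok, action in gates:
        if not ok:
            return (False, action)
    return (True, "fleet_task_complete(summary)")
```

Order matters: there is no point fixing tests for criteria you have not implemented, which is why the gates short-circuit at the first failure.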