How fleet-ops verifies audit trails during review — what to check, what constitutes a complete trail, and how to distinguish real work from paperwork
The trail is how the fleet proves work was done properly. Without trail verification, reviews become rubber stamps. A task approved on the strength of "code looks fine" has not been reviewed; it has been waved through.
The trail tells you: who did what, when, in what order, with what inputs. If the trail is incomplete, the review cannot be thorough.
Use the trail-reconstructor sub-agent:
Launch trail-reconstructor with task_id
This returns a chronological event list without bloating your context.
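A minimal sketch of what the reconstructor's output handling might look like. The `TrailEvent` fields and the `reconstruct_trail` helper are assumptions for illustration; the actual sub-agent's event schema may differ.

```python
from dataclasses import dataclass

@dataclass
class TrailEvent:
    timestamp: str  # ISO 8601, so lexical sort == chronological sort
    actor: str      # who did it
    action: str     # what kind of event: plan, progress, commit, completion
    detail: str     # the event's content

def reconstruct_trail(events: list[TrailEvent]) -> list[TrailEvent]:
    """Return the events in chronological order for review."""
    return sorted(events, key=lambda e: e.timestamp)

# Example: events arrive unordered, the reviewer needs them in sequence.
events = [
    TrailEvent("2024-01-02T10:05:00Z", "builder-1", "commit",
               "feat(auth): add auth middleware"),
    TrailEvent("2024-01-02T09:00:00Z", "architect", "plan",
               "repository pattern design"),
]
trail = reconstruct_trail(events)
```

Sorting by timestamp answers the "in what order" question directly; who/what/inputs come from the event fields themselves.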
Check each reconstructed event against the checklist above. Mark each event as:
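The marking step can be sketched as a map from checklist items to a status. The item names and the "present"/"missing" labels here are hypothetical placeholders; use whatever categories the checklist actually defines.

```python
def mark_events(checklist: list[str], events: list[dict]) -> dict[str, str]:
    """Map each required checklist item to 'present' or 'missing'
    based on which event kinds appear in the trail."""
    seen = {e["action"] for e in events}
    return {item: ("present" if item in seen else "missing")
            for item in checklist}

# Hypothetical checklist: a trail needs a plan, progress updates,
# commits, and a completion report.
checklist = ["plan", "progress", "commit", "completion"]
events = [{"action": "plan"}, {"action": "commit"}]
marks = mark_events(checklist, events)
```

A "missing" mark feeds directly into the gap lists below: some missing items block approval, others only warrant a note.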
A complete trail with garbage content is still a bad trail:
| Quality Signal | Good | Bad |
|---|---|---|
| Plan | References verbatim, maps to contributions | "Will implement the feature" |
| Progress | Specific: "auth middleware done, JWT signing with RS256" | "Making progress" |
| Commits | Conventional format, meaningful messages | "WIP", "fix", "update" |
| Completion | Explains what, why, how to verify | "Done" |
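The commit-message row of the table is mechanically checkable. A sketch, assuming the Conventional Commits format as the "good" signal; the `commit_quality` name and the low-signal word list are illustrative, not part of the fleet's tooling.

```python
import re

# Conventional Commits shape: type(optional-scope): description
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+")

# Messages that carry no information about the work done.
LOW_SIGNAL = {"wip", "fix", "update", "done", "making progress"}

def commit_quality(message: str) -> str:
    """Classify a commit message as 'good', 'bad', or 'review'
    (neither clearly good nor a known low-signal message)."""
    msg = message.strip()
    if msg.lower() in LOW_SIGNAL:
        return "bad"
    if CONVENTIONAL.match(msg):
        return "good"
    return "review"
```

Automation like this only screens form; the reviewer still has to check that a well-formed message matches the actual diff.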
These gaps mean you CANNOT approve:
These gaps warrant a note but not rejection:
A trail exists to prove real work happened. If you find yourself checking boxes without examining content, you're doing paperwork, not verification.
Real verification: "The plan references architect's design for repository pattern. The commits show a RepositoryBase class. The test files test the repository. The security contribution said 'validate inputs at boundary' — the commit adds input validation at the API layer."
Paperwork: "Plan exists. Commits exist. Tests exist. Approved."
ops_real_review(task_id, trail_data, criteria)
The review group call includes trail verification as step 3 of the 10-step review protocol.
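How the gate might work, sketched around the `ops_real_review` signature from the call above. The gap structure (`severity` of `"blocking"` or `"minor"`) and the verdict shape are assumptions; the real protocol may represent these differently.

```python
def ops_real_review(task_id: str, trail_data: dict, criteria: dict) -> dict:
    """Sketch of the review gate: any blocking gap rejects the task;
    minor gaps pass through as notes on the approval."""
    gaps = trail_data.get("gaps", [])
    blocking = [g for g in gaps if g["severity"] == "blocking"]
    if blocking:
        return {"task_id": task_id, "verdict": "rejected",
                "reasons": blocking}
    notes = [g for g in gaps if g["severity"] == "minor"]
    return {"task_id": task_id, "verdict": "approved", "notes": notes}

# Usage: a trail with a blocking gap cannot be approved.
result = ops_real_review(
    "task-42",
    {"gaps": [{"severity": "blocking", "item": "no plan event"}]},
    {},
)
```

The key property is that approval is the absence of blocking gaps, not the presence of paperwork: the reviewer cannot approve past a blocking gap by filing it as a note.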