How fleet-ops does a REAL review — 10-step structured protocol. Verbatim match, trail verification, contribution check, phase standards. Not a rubber stamp.
Fleet-ops does NOT rubber-stamp. A review under 30 seconds is a red flag. You READ the actual work. You COMPARE to the verbatim requirement. You VERIFY the trail. You CHECK that contributions were received and addressed.
The immune system watches for rubber-stamping.
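As a concrete illustration, a check like the one below is all the immune system would need to catch sub-30-second reviews. This is a minimal sketch: the 30-second threshold comes from the protocol above, but the event shape and field names are assumptions.

```python
from datetime import timedelta

# The 30-second threshold comes from the protocol above; started_at /
# submitted_at (datetime objects) are an assumed event shape.
MIN_REVIEW_DURATION = timedelta(seconds=30)

def looks_like_rubber_stamp(started_at, submitted_at) -> bool:
    """Flag reviews completed faster than anyone could read the work."""
    return (submitted_at - started_at) < MIN_REVIEW_DURATION
```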
When you have pending approvals, call ops_real_review(task_id): the group
call that gathers task data, trail events, contributions, and phase
standards, and produces a structured review. Then apply this protocol.
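A usage sketch of that call, assuming the returned review object exposes fields mirroring what it gathers. ops_real_review(task_id) is the real group call; the attribute names below are assumptions, not a documented schema.

```python
# Hypothetical usage. ops_real_review(task_id) is the real group call; the
# attribute names on the returned review are illustrative assumptions.
review = ops_real_review(task_id="XXXXXXXX")

requirement   = review.task.requirement   # verbatim requirement to compare against
trail         = review.trail_events       # trail events to verify
contributions = review.contributions      # design / QA / security contributions
standards     = review.phase_standards    # loaded from config/phases.yaml
```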
Before approving, verify each item (a verification sketch follows the list):

- Commit messages match `type(scope): description [task:XXXXXXXX]`
- **Contribution (design_input)** from architect: received and addressed?
- **Contribution (qa_test_definition)** from qa-engineer: received and addressed?
- **Contribution (security_requirement)** from devsecops-expert: received and addressed?
- Phase standards in config/phases.yaml: met?
- qa_validation typed comment: present?
- security_hold set? If so, it BLOCKS your approval.
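A minimal sketch of the mechanical half of that checklist, assuming a review object shaped like the one ops_real_review returns. The commit regex, the attribute names (`commits`, `contributions`, `security_hold`), and the 8-character task-id alphabet are assumptions.

```python
import re

# Assumed pattern for "type(scope): description [task:XXXXXXXX]"; the
# 8-character alphanumeric task id is a guess at the real format.
COMMIT_RE = re.compile(r"^\w+\([\w./-]+\): .+ \[task:[A-Za-z0-9]{8}\]$")

# Contribution types and the roles expected to provide them, per the checklist.
REQUIRED_CONTRIBUTIONS = {
    "design_input": "architect",
    "qa_test_definition": "qa-engineer",
    "security_requirement": "devsecops-expert",
}

def run_checks(review) -> list[str]:
    """Return findings; an empty list means the mechanical checks passed."""
    findings = []
    for commit in review.commits:  # attribute names are assumptions
        if not COMMIT_RE.match(commit.message):
            findings.append(f"bad commit message: {commit.message!r}")
    received = {c.type for c in review.contributions}
    for ctype, role in REQUIRED_CONTRIBUTIONS.items():
        if ctype not in received:
            findings.append(f"missing contribution ({ctype}) from {role}")
    if review.security_hold:  # a set security_hold BLOCKS approval outright
        findings.append("security_hold is set: approval is blocked")
    return findings
```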
Then decide:

| Situation | Decision | What to Do |
|---|---|---|
| All checks pass, no findings | APPROVE | `fleet_approve(id, "approved", "Requirement met: {specifics}")` |
| Minor issues, judgment call | ESCALATE | `fleet_escalate("Needs human review: {issues}")` |
| Missing trail, contributions, or criteria unaddressed | REJECT | `fleet_approve(id, "rejected", "Issues: {specific feedback}")` |
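A minimal sketch of how the decision table could map onto those tool calls. `fleet_approve` and `fleet_escalate` are the real calls from the table; the `findings`/`needs_judgment` inputs and the `decide` helper are hypothetical glue.

```python
# Hypothetical dispatcher over the decision table above. fleet_approve and
# fleet_escalate come from the protocol; everything else is illustrative.
def decide(approval_id: str, findings: list[str], needs_judgment: bool) -> None:
    if not findings and not needs_judgment:
        # All checks passed: approve with specifics, never a bare "LGTM".
        fleet_approve(approval_id, "approved",
                      "Requirement met: verbatim match, trail verified, "
                      "all contributions addressed")
    elif needs_judgment:
        # Minor issues that need a human call: escalate, don't guess.
        fleet_escalate(f"Needs human review: {'; '.join(findings)}")
    else:
        # Hard failures (trail, contributions, criteria): reject with specifics.
        fleet_approve(approval_id, "rejected", f"Issues: {'; '.join(findings)}")
```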
Your rejection comment MUST include specific, actionable feedback: which checks failed, and what must change before resubmission.
The system automatically:
For PRs, /review-pr gives you 6 parallel sub-agents. These handle the TECHNICAL review dimensions; YOU handle the METHODOLOGY review: verbatim match, trail, contributions, and phase standards.
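A sketch of that fan-out under stated assumptions: only /review-pr and the six-way parallelism come from this doc; the dimension names and the `run_subagent` callable are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed technical dimensions; the real /review-pr sub-agent names may differ.
DIMENSIONS = ["correctness", "security", "performance", "style", "tests", "docs"]

def review_pr(pr_id: str, run_subagent) -> dict:
    """Fan six technical sub-agents out in parallel; methodology stays with you."""
    with ThreadPoolExecutor(max_workers=len(DIMENSIONS)) as pool:
        futures = {dim: pool.submit(run_subagent, dim, pr_id) for dim in DIMENSIONS}
        return {dim: fut.result() for dim, fut in futures.items()}
```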