Parallel peer review gate for TECH sub-agent deliveries. Spawns 3-5 specialized reviewers (security, correctness, integration) after BUILDER/FIXER completes. Gates on majority PASS. Blocking CRITICAL from any reviewer stops merge. Invoked with /multi-review or auto-triggered by TECH verify-after-write pipeline (IL-11/12/13 augmentation). Works standalone on any file/directory (degrades gracefully without NexusOS workspace).
You are orchestrating a peer review of a TECH sub-agent delivery using multiple specialized reviewers in parallel.
Usage:
- `/multi-review`: reviews the latest delivery under `~/.nexus/workspace/active/` (latest by mtime)
- `/multi-review <task_id>`: reviews a specific delivery at `~/.nexus/workspace/active/{task_id}/`
- `/multi-review <file_path>`: reviews a specific file or directory
- `--light`: 3 reviewers only (security, correctness, integration)
- `--full`: 5 reviewers (adds style, performance)

Determine what to review:
- Read `~/.nexus/workspace/active/{task_id}/PROGRESS.md` for output files
- Otherwise run `git diff HEAD~1 --name-only` for recent changes

List all target files with line counts.
Launch 3-5 parallel review agents using the Agent tool. Each reviewer operates independently.
Core reviewers (always, --light or default):
| Reviewer | Focus | Check For |
|---|---|---|
| security-reviewer | Injection, auth bypass, secrets exposure, unsafe eval, OWASP top 10 | Command injection in bash, hardcoded secrets, path traversal, unvalidated input |
| correctness-reviewer | Logic errors, edge cases, off-by-one, unreachable code, wrong conditions | Does the code do what DISPATCH.md says? Are acceptance criteria met? |
| integration-reviewer | Does this break existing NexusOS components? Import paths, state files, hook wiring | Check consumers: who calls this? What changes for them? |
Extended reviewers (--full only):
| Reviewer | Focus |
|---|---|
| style-reviewer | NexusOS conventions, naming, file organization, FORGE compliance |
| performance-reviewer | Unnecessary loops, blocking I/O, large file reads, O(n^2) patterns |
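Roster selection for the two modes reduces to a small lookup; the reviewer names mirror the tables above, and the function name is illustrative:

```python
CORE = ["security-reviewer", "correctness-reviewer", "integration-reviewer"]
EXTENDED = CORE + ["style-reviewer", "performance-reviewer"]

def roster(mode: str = "light") -> list[str]:
    """--light (the default) launches the 3 core reviewers; --full adds style
    and performance for 5 total."""
    return EXTENDED if mode == "full" else CORE
```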
Each reviewer agent receives:
Standalone mode (no DISPATCH.md/PROGRESS.md): When context files are absent, reviewers operate in file-only mode. Skip acceptance-criteria validation (correctness-reviewer checks internal logic only). Security and integration reviewers function normally.
Timeout: Each reviewer agent has a 120-second soft limit. If a reviewer has not returned after 120s, mark it SKIPPED and proceed with available verdicts. Use the Agent tool's natural completion; do not add artificial waits.
Wait for all reviewers to complete. Build verdict table:
| Reviewer | Verdict | Findings |
|----------|---------|----------|
| security | PASS | 0 |
| correctness | PASS | 1 (LOW) |
| integration | FAIL | 1 (HIGH) |
PASS conditions (ALL must be true):
FAIL conditions (ANY triggers FAIL):
CONDITIONAL:
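One plausible reading of the gate, inferred from "gates on majority PASS" and "blocking CRITICAL from any reviewer stops merge"; the exact rules here are an assumption, as is the treatment of SKIPPED reviewers:

```python
def gate(verdicts: dict[str, str], criticals: int) -> str:
    """verdicts maps reviewer name to PASS/FAIL/SKIPPED; criticals counts
    CRITICAL findings across all reviewers. Rules inferred, not spec."""
    counted = [v for v in verdicts.values() if v != "SKIPPED"]
    if criticals > 0 or not counted:
        return "FAIL"                      # any CRITICAL blocks the merge
    if all(v == "PASS" for v in counted):
        return "PASS"
    if counted.count("PASS") * 2 > len(counted):
        return "CONDITIONAL"               # majority PASS, findings remain
    return "FAIL"

# Gate verdict -> recommendation line in the summary.
RECOMMENDATION = {"PASS": "MERGE", "CONDITIONAL": "FIX-THEN-MERGE", "FAIL": "BLOCK"}
```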
Output the review summary:
## Multi-Review: {task_id or file}
| Reviewer | Verdict | Findings |
|----------|---------|----------|
| ... | ... | ... |
### Gate: PASS / CONDITIONAL / FAIL
### Findings (if any)
{numbered list of all findings across all reviewers, sorted by severity}
### Recommendation
{MERGE / FIX-THEN-MERGE / BLOCK}
When no mode flag is given, default to `/multi-review --light`. Save review results to `~/.nexus/workspace/active/{task_id}/REVIEW.md` if a task_id exists.
This skill augments the existing verify-after-write chain:
- `/multi-review`: parallel peer review (this skill)

TECH should invoke `/multi-review` after IL-13 passes. If multi-review returns FAIL, TECH does NOT mark the delivery as complete.
Edge cases:
- A reviewer was SKIPPED: mark the result `[PARTIAL-REVIEW]` and base the gate decision on available verdicts only
- No review target found: emit a `[NO-TARGET]` error and exit without gating
- The `git diff` fallback also yields nothing: ask the user for an explicit file path via AskUserQuestion

After every completed review (PASS, CONDITIONAL, or FAIL), store a summary to Cortex:
cortex_store(collection="technical", text="Multi-Review {task_id}: {GATE_VERDICT}. {count} findings ({severity_breakdown}). Reviewers: {reviewer_verdicts}.", metadata={type: "review", task_id: "{task_id}", verdict: "{verdict}", findings_count: N})
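A minimal sketch of rendering the `text` field of that call; the helper name and the severity-list input shape are hypothetical:

```python
from collections import Counter

def review_summary(task_id: str, verdict: str, severities: list[str]) -> str:
    """Render the summary text for Cortex storage. `severities` lists one
    entry per finding, e.g. ["HIGH", "LOW"]; reviewer verdicts omitted here."""
    counts = Counter(severities)
    breakdown = ", ".join(f"{n} {sev}" for sev, n in sorted(counts.items()))
    return (f"Multi-Review {task_id}: {verdict}. "
            f"{len(severities)} findings ({breakdown or 'none'}).")
```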
This enables cross-session trend analysis (e.g., "which components fail security review most often").