Validate an ad-hoc implementation plan through architect and quality-guard (and optionally security-auditor), then output a revised plan with adjustments applied.
Arguments (if provided): $ARGUMENTS
```bash
# Source resolve-config: marketplace installs get ${CLAUDE_PLUGIN_ROOT} substituted
# inline before bash runs; ./install.sh users fall back to ~/.claude. If neither
# path resolves, fail loudly rather than letting resolve_artifact be undefined.
if [ -f "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh" ]; then
  source "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh"
elif [ -f "$HOME/.claude/shared/resolve-config.sh" ]; then
  source "$HOME/.claude/shared/resolve-config.sh"
else
  echo "ERROR: resolve-config.sh not found. Install via marketplace or run ./install.sh" >&2
  exit 1
fi

REVIEW_EXEC_MODE=$(resolve_exec_mode review_plan team)
```
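The real `resolve_exec_mode` is defined in resolve-config.sh; as an illustration of its assumed contract only (command key in, configured override or the supplied default out), a hypothetical stand-in might look like this. The `NEXUS_EXEC_MODE_*` variable name is an assumption, not the actual configuration mechanism:

```shell
# Hypothetical stand-in for resolve_exec_mode from resolve-config.sh.
# Assumed contract: $1 = command key, $2 = default mode. Prints an override
# from a NEXUS_EXEC_MODE_<KEY> variable if set, otherwise the default.
resolve_exec_mode() {
  key=$(echo "$1" | tr 'a-z-' 'A-Z_')                # review_plan -> REVIEW_PLAN
  override=$(eval "echo \${NEXUS_EXEC_MODE_${key}:-}")
  if [ -n "$override" ]; then
    echo "$override"
  else
    echo "$2"
  fi
}
```

With no override set, `resolve_exec_mode review_plan team` prints `team`, which is why team mode is the default below.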
Use $REVIEW_EXEC_MODE to determine team vs sub-agent behavior in Step 3.
Take an ad-hoc implementation plan, run it through design-review agents (architect, quality-guard, and optionally security-auditor), and return a revised plan with their adjustments applied. Fills the gap between /brainstorm (ideation) and /implement (execution) — lightweight, stateless, no requirements file needed.
Parse $ARGUMENTS into two parts:
- --security flag (anywhere in arguments) → SECURITY_OPT_IN=1, strip the flag
- Remaining text → PLAN_TEXT
If PLAN_TEXT is empty after stripping flags, use AskUserQuestion to prompt:
- Header: "Plan"
- Question: "What plan would you like reviewed?"
- Options: "Enter plan" / "I'll type the plan in the text field below" and "Cancel" / "Never mind, don't run the review"
If the user cancels, stop with: No plan provided. Review cancelled.
The user's response via the text input becomes PLAN_TEXT. If they enter nothing twice, stop with: Cannot review an empty plan.
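The flag-stripping step can be sketched as a small function, assuming $ARGUMENTS arrives as a single string (`parse_args` is an illustrative name, not part of the command):

```shell
# Sketch of the argument-parsing step: detect --security anywhere in the
# arguments, strip it, and treat the remainder as the plan text.
parse_args() {
  ARGUMENTS=$1
  SECURITY_OPT_IN=0
  case " $ARGUMENTS " in
    *" --security "*) SECURITY_OPT_IN=1 ;;
  esac
  # Remove the flag wherever it appears, squeeze runs of spaces, trim ends.
  PLAN_TEXT=$(echo "$ARGUMENTS" | sed 's/--security//g' | tr -s ' ' | sed 's/^ *//; s/ *$//')
}
```

For example, `parse_args "--security switch the session store to Redis"` leaves SECURITY_OPT_IN=1 and PLAN_TEXT="switch the session store to Redis".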
Always run: architect, quality-guard
Run security-auditor if any of the following:
- SECURITY_OPT_IN=1 (user passed --security)
- PLAN_TEXT matches the security heuristic — check with grep, case-insensitive, for any of: auth, authn, authz, authentic, authoriz, password, credential, token, secret, permission, role, session, cookie, encrypt, decrypt, PII, sensitive, personal data, payment, card number, social security, SSN

```bash
# "auth" alone covers authn, authz, authentic, and authoriz as prefixes.
if [ "$SECURITY_OPT_IN" = "1" ] || echo "$PLAN_TEXT" | grep -qiE "auth|password|credential|token|secret|permission|role|session|cookie|encrypt|decrypt|pii|sensitive|personal data|payment|card number|social security|ssn"; then
  INCLUDE_SECURITY=1
else
  INCLUDE_SECURITY=0
fi
```
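To sanity-check the heuristic, the same pattern can be wrapped in a throwaway helper and exercised directly (`matches_security` is an illustrative name used only here):

```shell
# Same case-insensitive keyword pattern as the INCLUDE_SECURITY check above,
# wrapped as a predicate for quick testing.
matches_security() {
  echo "$1" | grep -qiE "auth|password|credential|token|secret|permission|role|session|cookie|encrypt|decrypt|pii|sensitive|personal data|payment|card number|social security|ssn"
}
```

`matches_security "Extract the auth middleware"` succeeds (the "auth" keyword), while `matches_security "Rename the logging module"` fails, leaving security-auditor out of the default scope.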
Report the decision to the user:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Review Scope
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Agents: architect, quality-guard{, security-auditor if included}
Trigger: {--security flag | security heuristic matched on "{matched keyword}" | default scope}
Mode: $REVIEW_EXEC_MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If $REVIEW_EXEC_MODE = "subagent":
Run agents in parallel via a single message with multiple Task tool calls.
Task 1 — Use Task tool with subagent_type: "architect":
Prompt: Validate the following ad-hoc implementation plan against architecture patterns, design soundness, and structural concerns. This is a pre-implementation design review — the plan has NOT been implemented yet.
Plan:
{PLAN_TEXT}
Evaluate:
- Does the plan respect module boundaries and separation of concerns?
- Are there architectural anti-patterns or coupling issues?
- Does the approach align with existing patterns in the codebase? (Use Explore/Grep to verify)
- Are there missing steps, hidden dependencies, or unstated prerequisites?
- Is the scope coherent — does it do one thing well, or does it sprawl?
- Are there simpler alternatives that achieve the same outcome?
Return structured findings:
- CRITICAL: architectural flaws that would require rework
- IMPORTANT: design concerns the plan should address
- SUGGESTIONS: improvements that would strengthen the plan
Task 2 — Use Task tool with subagent_type: "quality-guard":
Prompt: Challenge the following ad-hoc implementation plan (Level 1 — Plan Validation). Be adversarial. Push back on unverified assumptions.
Plan:
{PLAN_TEXT}
Verify:
- Does the plan address the actual problem, or a tangential one?
- Which claims in the plan are assumed vs verified against the code?
- Are success criteria concrete and measurable, or vague?
- What edge cases, failure modes, or interactions is the plan silent on?
- Is the plan's scope right — too narrow (misses root cause) or too broad (scope creep)?
- What would the plan break if executed as written?
Return structured findings:
- CRITICAL: claims that appear wrong, missing pieces that would cause the plan to fail
- IMPORTANT: assumptions that need verification before proceeding
- SUGGESTIONS: gaps worth addressing even if not blocking
Task 3 (only if INCLUDE_SECURITY=1) — Use Task tool with subagent_type: "security-auditor":
Prompt: Review the following ad-hoc implementation plan for security concerns. This is pre-implementation — no code exists yet.
Plan:
{PLAN_TEXT}
Evaluate:
- Does the plan introduce authentication, authorization, or session-handling changes? Are they sound?
- Input validation, output encoding, injection surfaces
- Sensitive data handling (PII, credentials, tokens)
- Secret storage, key management
- Audit logging, access trails
- OWASP-relevant concerns for the described change
Return structured findings:
- CRITICAL: security flaws that must be fixed before implementation
- IMPORTANT: security concerns the plan should address
- SUGGESTIONS: defensive improvements
If $REVIEW_EXEC_MODE = "team" (default):
Create a review team for cross-pollination:
TeamCreate(team_name="review-plan-{short_hash_of_plan}")
TaskCreate: "Validate architecture" (T1)
description: |
Plan: {PLAN_TEXT}
Review for architectural soundness, pattern alignment, scope coherence.
Share findings with teammates — quality-guard will challenge claims.
TaskCreate: "Challenge plan assumptions" (T2)
description: |
Plan: {PLAN_TEXT}
Adversarial Level-1 plan validation. Verify claims, surface assumptions,
identify gaps. Use SendMessage to challenge architect's findings or push
back on security-auditor if their scope bleeds into design.
[If INCLUDE_SECURITY=1]
TaskCreate: "Security review" (T3)
description: |
Plan: {PLAN_TEXT}
Evaluate auth, data handling, injection surfaces, secrets, logging.
Share findings with teammates.
[PARALLEL - Single message with multiple Task calls]
Task tool: name: "arch-review", subagent_type: "architect", team_name: "review-plan-{hash}"
Task tool: name: "plan-skeptic", subagent_type: "quality-guard", team_name: "review-plan-{hash}"
[If INCLUDE_SECURITY=1]
Task tool: name: "sec-review", subagent_type: "security-auditor", team_name: "review-plan-{hash}"
Assign tasks. Agents cross-pollinate findings via SendMessage. Collect results and TeamDelete.
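The {short_hash_of_plan} placeholder in the team name can be derived along these lines — a sketch only; cksum is chosen here because it is POSIX-portable, but any stable short digest works:

```shell
# Derive a short, stable identifier for the team name from the plan text,
# so re-running the review on the same plan yields the same team name.
PLAN_TEXT="${PLAN_TEXT:-example plan}"
PLAN_HASH=$(printf '%s' "$PLAN_TEXT" | cksum | cut -d' ' -f1)
TEAM_NAME="review-plan-${PLAN_HASH}"
```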
Combine agent outputs into a single structured report:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Plan Review — Findings
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Original Plan
{PLAN_TEXT}
---
## 🔴 Critical
[Concerns that would cause the plan to fail or require rework if ignored]
- **[agent-name]** {finding}
- ...
## 🟡 Important
[Concerns the revised plan should address]
- **[agent-name]** {finding}
- ...
## 🔵 Suggestions
[Improvements worth considering]
- **[agent-name]** {finding}
- ...
{If security-auditor ran:}
## 🔒 Security
[Security-specific findings — may overlap with critical/important above, kept here for visibility]
---
## Verdict
**{One of: Plan is sound | Plan needs adjustments | Plan needs rework}**
{1-2 sentence summary of overall assessment}
Verdict rubric:
- Plan is sound — no critical findings, ≤ 1 important finding
- Plan needs adjustments — no critical findings, but multiple important findings to apply
- Plan needs rework — one or more critical findings
Incorporate agent feedback into a revised plan. Apply every CRITICAL finding, every IMPORTANT finding, and SUGGESTIONS where they clearly strengthen the plan without bloating scope.
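The rubric reduces to a simple decision over the merged finding counts. A sketch, using hypothetical tallies (first argument = critical count, second = important count):

```shell
# Map finding counts to a verdict per the rubric: any critical finding forces
# rework; otherwise two or more important findings mean adjustments.
verdict() {
  if [ "$1" -ge 1 ]; then
    echo "Plan needs rework"
  elif [ "$2" -ge 2 ]; then
    echo "Plan needs adjustments"
  else
    echo "Plan is sound"
  fi
}
```

So `verdict 0 1` yields "Plan is sound", matching the ≤ 1 important-finding threshold.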
Render below the findings report:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Revised Plan
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{Full revised plan — self-contained, ready to paste into /nexus:implement or use as a working spec. Preserve the intent of the original plan; integrate adjustments inline rather than tacking them on at the end.}
---
### Changes from Original
- {bullet per significant change, citing the agent whose finding drove it}
- ...
Rules for the revised plan:
If the verdict is Plan needs rework and a critical finding requires a design decision the skill cannot make alone, use AskUserQuestion to surface the decision before producing the revised plan. Give the user the option to defer (skill emits an "unresolved" version) or pick an answer that the skill then incorporates.
Display a one-line next-step hint:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Done. Hand the Revised Plan to /nexus:implement, or iterate by re-running /nexus:review-plan with the updated version.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Error handling:
- resolve-config.sh missing — handled in the Configuration block; hard-stop with install instructions.
- The architect path is strictly required; if it fails, stop with a clear error.
- /nexus:feedback.
Relationship to other commands:
- The skeptic (quality-guard) challenges the other agents' findings in team mode.
- /implement QA — /implement still runs its own code-level review phase. /nexus:review-plan catches design problems before they become code.
- /brainstorm — /brainstorm generates options; /nexus:review-plan validates a chosen approach. Use them in sequence if the plan is still half-formed.
Examples:
/nexus:review-plan Extract the auth middleware into its own package so we can share it with the admin app
Security-auditor auto-included (heuristic matched auth). Output: findings report + revised plan that likely calls out shared-state concerns, versioning of the extracted package, and test coverage gaps.
/nexus:review-plan --security Switch our session store from in-memory to Redis so horizontal scaling works
Security-auditor included via flag (even though the heuristic would have matched session anyway). Findings will cover at-rest encryption, credential handling for the Redis connection, key rotation, and failure modes.
/nexus:review-plan
Prompts for plan text, then proceeds.