Engineering architecture review with mandatory diagrams and edge case analysis. Reviews plans or implementations for technical soundness, producing mandatory mermaid diagrams, an edge case analysis, and a scored verdict.
<!-- === PREAMBLE START === --><!-- === PREAMBLE END === -->

Agentic Workflow — 35 skills available. Run any as `/<name>`.
| Skill | Purpose |
|---|---|
| /review | Multi-agent PR code review |
| /postReview | Publish review findings to GitHub |
| /addressReview | Implement review fixes in parallel |
| /enhancePrompt | Context-aware prompt rewriter |
| /bootstrap | Generate repo planning docs + CLAUDE.md |
| /rootCause | 4-phase systematic debugging |
| /bugHunt | Fix-and-verify loop with regression tests |
| /bugReport | Structured bug report with health scores |
| /shipRelease | Sync, test, push, open PR |
| /syncDocs | Post-ship doc updater |
| /weeklyRetro | Weekly retrospective with shipping streaks |
| /officeHours | Spec-driven brainstorming → EARS requirements + design doc |
| /productReview | Founder/product lens plan review |
| /archReview | Engineering architecture plan review |
| /withInterview | Interview user to clarify requirements before executing |
| /design-analyze | Detect web vs iOS, extract design tokens (dispatcher) |
| /design-analyze-web | Extract design tokens from reference URLs (web) |
| /design-analyze-ios | Extract design tokens from Swift/Xcode assets |
| /design-language | Define brand personality and aesthetic direction |
| /design-evolve | Detect web vs iOS, merge new reference into design language (dispatcher) |
| /design-evolve-web | Merge new URL into design language (web) |
| /design-evolve-ios | Merge Swift reference into design language (iOS) |
| /design-mockup | Detect web vs iOS, generate mockup (dispatcher) |
| /design-mockup-web | Generate HTML mockup from design language |
| /design-mockup-ios | Generate SwiftUI preview mockup |
| /design-implement | Detect web vs iOS, generate production code (dispatcher) |
| /design-implement-web | Generate web production code (CSS/Tailwind/Next.js) |
| /design-implement-ios | Generate SwiftUI components from design tokens |
| /design-refine | Dispatch Impeccable refinement commands |
| /design-verify | Detect web vs iOS, screenshot diff vs mockup (dispatcher) |
| /design-verify-web | Playwright screenshot diff vs mockup (web) |
| /design-verify-ios | Simulator screenshot diff vs mockup (iOS) |
| /verify-app | Detect web vs iOS, verify running app (dispatcher) |
| /verify-web | Playwright browser verification of running web app |
| /verify-ios | XcodeBuildMCP simulator verification of iOS app |
Output directory: ~/.agentic-workflow/<repo-slug>/
Prefer Serena for all code exploration — LSP-based symbol lookup is faster and more precise than file scanning.
| Task | Tool |
|---|---|
| Find a function, class, or symbol | serena: find_symbol |
| What references symbol X? | serena: find_referencing_symbols |
| Module/file structure overview | serena: get_symbols_overview |
| Search for a string or pattern | Grep (fallback) |
| Read a full file | Read (fallback) |
Before running this skill, verify the environment is set up:
```bash
# Derive repo slug
REMOTE_URL=$(git remote get-url origin 2>/dev/null || echo "")
if [ -n "$REMOTE_URL" ]; then
  REPO_SLUG=$(echo "$REMOTE_URL" | sed 's|.*[:/]\([^/]*/[^/]*\)\.git$|\1|;s|.*[:/]\([^/]*/[^/]*\)$|\1|' | tr '/' '-')
else
  REPO_SLUG=$(basename "$(pwd)")
fi
echo "repo-slug: $REPO_SLUG"
```
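As a quick sanity check, the same sed/tr pipeline can be exercised against a couple of hypothetical remote URLs; both the SSH and HTTPS forms should reduce to the same slug:

```bash
# Hypothetical sample URLs -- run through the same sed/tr pipeline as above.
for url in "git@github.com:acme/widgets.git" "https://github.com/acme/widgets"; do
  slug=$(echo "$url" | sed 's|.*[:/]\([^/]*/[^/]*\)\.git$|\1|;s|.*[:/]\([^/]*/[^/]*\)$|\1|' | tr '/' '-')
  echo "$url -> $slug"   # both print: ... -> acme-widgets
done
```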
```bash
# Check bootstrap status
SKILLS_OK=true
for s in review postReview addressReview enhancePrompt bootstrap rootCause bugHunt bugReport shipRelease syncDocs weeklyRetro officeHours productReview archReview withInterview design-analyze design-analyze-web design-analyze-ios design-language design-evolve design-evolve-web design-evolve-ios design-mockup design-mockup-web design-mockup-ios design-implement design-implement-web design-implement-ios design-refine design-verify design-verify-web design-verify-ios verify-app verify-web verify-ios; do
  [ -d "$HOME/.claude/skills/$s" ] || SKILLS_OK=false
done

BRIDGE_OK=false
lsof -i TCP:3100 -sTCP:LISTEN &>/dev/null && BRIDGE_OK=true

RULES_OK=false
[ -d ".claude/rules" ] && [ -n "$(ls -A .claude/rules/ 2>/dev/null)" ] && RULES_OK=true

echo "skills-symlinked: $SKILLS_OK"
echo "bridge-running: $BRIDGE_OK"
echo "rules-directory: $RULES_OK"
```
Domain rules in .claude/rules/ load automatically per glob — no action needed if rules-directory: true.
If SKILLS_OK=false or BRIDGE_OK=false, ask the user via AskUserQuestion:
"Agentic Workflow is not fully set up. Run setup.sh now? (yes/no)"
If yes: run bash <path-to-agentic-workflow>/setup.sh (resolve path from the review skill symlink target).
If no: warn that some features may not work, then continue.
If RULES_OK=false (and SKILLS_OK and BRIDGE_OK are both true), do not offer setup.sh. Instead, show:
"Domain rules not found — run /bootstrap to generate .claude/rules/ for this repo."
Create the output directory for this repo:
```bash
mkdir -p "$HOME/.agentic-workflow/$REPO_SLUG"
```
Load prior work state for this repo from prism-mcp before starting.
1. Derive a topic string — synthesize 3–5 words from the skill argument and task intent:
   - `/officeHours add dark mode` → "dark mode UI feature"
   - `/rootCause TypeError cannot read properties` → "TypeError cannot read properties"
   - `/review 42` → use the PR title once fetched: "PR {title} review"
   - Fallback: "{REPO_SLUG} {skill-name}"
2. Load context from prism-mcp:
mcp__prism-mcp__session_load_context — project: REPO_SLUG, level: "standard",
toolAction: "Loading session context", toolSummary: "<skill-name> context recovery"
Store the returned expected_version — you will need it at Session Close.
3. Surface results:
"Prior context: {summary}. Use this to inform your approach before continuing."
If the call fails, surface the error:
"prism-mcp unavailable: {error}. Ensure prism-mcp is running and registered."
Run at the end of every skill, after all work is complete and the report has been shown to the user.
Save a structured ledger entry and update the live handoff state for this repo.
1. Save ledger entry (immutable audit trail):
mcp__prism-mcp__session_save_ledger — project: REPO_SLUG,
conversation_id: "<skill-name>-<ISO-timestamp, e.g. 2026-04-08T14:32:00Z>",
summary: "<one paragraph describing what was accomplished this session>",
todos: ["<any open items left incomplete>", ...],
files_changed: ["<paths of files created or modified>", ...],
decisions: ["<key decisions made during this skill run>", ...]
2. Update handoff state (mutable live state for next session):
mcp__prism-mcp__session_save_handoff — project: REPO_SLUG,
expected_version: <value returned by session_load_context>,
open_todos: ["<open items not yet completed>", ...],
active_branch: "<current git branch from: git branch --show-current>",
last_summary: "<one sentence: what this skill just did>",
key_context: "<critical facts the next session must know — constraints, decisions, blockers>"
If either call fails, surface the error:
"prism-mcp session save failed: {error}. Context may not persist to next session."
If a file path is given, read that file as the plan/spec to review.
If a directory is given, explore its structure using Glob and Read to understand the implementation.
If nothing is given, try two fallbacks in order:
`$HOME/.agentic-workflow/$REPO_SLUG/plans/`: plans may be either a directory (new SDD format with requirements.md, design.md, TASKS.md) or a single .md file (legacy format). Check both and prefer whichever is newest:
```bash
# Find newest plan directory (SDD format) and newest plan file (legacy format)
NEWEST_DIR=$(ls -dt "$HOME/.agentic-workflow/$REPO_SLUG/plans/"*/ 2>/dev/null | head -1)
NEWEST_FILE=$(ls -t "$HOME/.agentic-workflow/$REPO_SLUG/plans/"*.md 2>/dev/null | head -1)

# Compare timestamps -- prefer whichever is more recent
if [ -n "$NEWEST_DIR" ] && [ -n "$NEWEST_FILE" ]; then
  if [ "$NEWEST_DIR" -nt "$NEWEST_FILE" ]; then
    PLAN_TARGET="$NEWEST_DIR"
  else
    PLAN_TARGET="$NEWEST_FILE"
  fi
elif [ -n "$NEWEST_DIR" ]; then
  PLAN_TARGET="$NEWEST_DIR"
elif [ -n "$NEWEST_FILE" ]; then
  PLAN_TARGET="$NEWEST_FILE"
fi
```
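A throwaway sanity check of this selection logic, using made-up fixtures in a temp directory (the plan names are hypothetical):

```bash
# Build a fake plans/ directory: an older SDD directory and a newer legacy file.
plans=$(mktemp -d)
mkdir "$plans/20240101-old-plan"             # SDD-format directory (created first)
sleep 1                                      # ensure distinct mtimes
echo spec > "$plans/20240601-new-plan.md"    # legacy file (created later)

NEWEST_DIR=$(ls -dt "$plans/"*/ 2>/dev/null | head -1)
NEWEST_FILE=$(ls -t "$plans/"*.md 2>/dev/null | head -1)
if [ -n "$NEWEST_DIR" ] && [ "$NEWEST_DIR" -nt "$NEWEST_FILE" ]; then
  PLAN_TARGET="$NEWEST_DIR"
else
  PLAN_TARGET="$NEWEST_FILE"
fi
echo "plan target: $PLAN_TARGET"             # the newer legacy .md file wins
```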
If PLAN_TARGET is a directory, explore its structure using Glob and Read to review all three files (requirements.md, design.md, TASKS.md). If it is a single file, read it as before.

Read all available architectural context:
- CLAUDE.md — project conventions and structure
- planning/ARCHITECTURE.md or ARCHITECTURE.md — existing architecture docs
- planning/ERD.md or ERD.md — data model
- planning/API_CONTRACT.md or API_CONTRACT.md — API surface
- README.md — project overview

Use Glob to discover these files -- do not assume paths.
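A minimal sketch of that discovery step, probing each candidate location rather than assuming one layout (the scratch directory and fixture files here are hypothetical):

```bash
# Hypothetical fixtures: a repo with only README.md and planning/ERD.md present.
tmp=$(mktemp -d)
mkdir -p "$tmp/planning"
touch "$tmp/README.md" "$tmp/planning/ERD.md"
cd "$tmp"

# Probe each candidate path; collect whichever docs actually exist.
CONTEXT_FILES=""
for f in CLAUDE.md README.md ARCHITECTURE.md planning/ARCHITECTURE.md \
         ERD.md planning/ERD.md API_CONTRACT.md planning/API_CONTRACT.md; do
  [ -f "$f" ] && CONTEXT_FILES="$CONTEXT_FILES $f"
done
echo "context docs:$CONTEXT_FILES"   # -> context docs: README.md planning/ERD.md
```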
Spawn an Agent with task "Explore" to map the system:
The agent should investigate the architecture and report its findings, reading source files, configuration, and package manifests to build an accurate picture.
Create three mermaid diagrams. These are mandatory -- the review is incomplete without them.
Show each module/service as a box with dependency arrows. Include:
```mermaid
graph TD
    A[Component A] --> B[Component B]
    A --> C[External Service]
    B --> D[(Database)]
```
Show how data moves through the system from entry to exit:
```mermaid
flowchart LR
    Input --> Transform --> Store --> Output
```
Model the single most critical user flow end-to-end:
```mermaid
sequenceDiagram
    User->>Service: request
    Service->>DB: query
    DB-->>Service: result
    Service-->>User: response
```
For each component boundary identified in Step 3, analyze these four failure modes:
- Dependency failures (5a)
- Input validation gaps (5b)
- Load concerns (5c)
- State leakage risks (5d)
Start the pipeline:
mcp__prism-mcp__session_start_pipeline — project: REPO_SLUG,
objective: "Adversarially challenge this architecture review of: {target description from Step 1}. Find: (1) failure modes not identified in the edge case analysis, (2) risks whose impact or likelihood is understated, (3) suggested improvements that are vague or insufficient ('add monitoring' is not a mitigation — what specifically?), (4) component boundaries where error propagation is unaddressed. For each finding, provide component:function or file:line evidence.",
working_directory: "<absolute path to repo root>",
max_iterations: 2
Store the returned pipeline_id. Poll until complete:
mcp__prism-mcp__session_check_pipeline_status — pipeline_id: <pipeline_id>
When complete:
- COMPLETED — no additional issues found beyond what the review already covers. Proceed to Step 6.
- FAILED — incorporate the evaluator's additional findings into the edge case analysis before writing the verdict. Surface them clearly in the Top Risks table.

Produce the final assessment:
```markdown
# Architecture Review: {title}
_Reviewed by `/archReview` on {ISO date}_

## Verdict: {SOUND | NEEDS WORK | REDESIGN}
{One paragraph justification}

## Scores
| Dimension | Score (1-10) | Notes |
|-----------|:---:|-------|
| Complexity | {n} | {brief justification} |
| Scalability | {n} | {brief justification} |
| Maintainability | {n} | {brief justification} |

## Component Diagram
{mermaid diagram from 4a}

## Data Flow Diagram
{mermaid diagram from 4b}

## Sequence Diagram
{mermaid diagram from 4c}

## Top Risks
| # | Risk | Impact | Likelihood | Mitigation |
|---|------|--------|------------|------------|
| 1 | {risk} | {high/med/low} | {high/med/low} | {recommendation} |
| 2 | {risk} | {high/med/low} | {high/med/low} | {recommendation} |
| ... | ... | ... | ... | ... |

## Edge Case Findings
### Dependency Failures
{findings from 5a}
### Input Validation Gaps
{findings from 5b}
### Load Concerns
{findings from 5c}
### State Leakage Risks
{findings from 5d}

## Missing Error Handling
- {specific location and what's missing}
- {specific location and what's missing}

## Suggested Improvements (Prioritized)
| Priority | Improvement | Effort | Impact |
|----------|------------|--------|--------|
| P0 | {must fix before shipping} | {S/M/L} | {description} |
| P1 | {should fix soon} | {S/M/L} | {description} |
| P2 | {nice to have} | {S/M/L} | {description} |
```
Generate a URL-safe slug from the target title (lowercase, hyphens, no special chars). Write the file:
```bash
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
```

Write to: `$HOME/.agentic-workflow/$REPO_SLUG/plans/{timestamp}-arch-review-{slug}.md`
Include all three mermaid diagrams and the complete analysis.
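The slug rule (lowercase, hyphens, no special chars) can be sketched as follows; the title here is a made-up example:

```bash
# Lowercase, collapse runs of non-alphanumerics to single hyphens, trim edges.
TITLE="Payment Service v2: Async Rewrite!"
SLUG=$(printf '%s' "$TITLE" | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]\{1,\}/-/g; s/^-//; s/-$//')
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
echo "$TIMESTAMP-arch-review-$SLUG.md"
```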
Show a summary to the user:
```text
Architecture Review complete!

Verdict: {SOUND | NEEDS WORK | REDESIGN}

Scores:
  Complexity: {n}/10
  Scalability: {n}/10
  Maintainability: {n}/10

Review written to: ~/.agentic-workflow/{repo-slug}/plans/{timestamp}-arch-review-{slug}.md

Top 3 risks:
1. {risk summary}
2. {risk summary}
3. {risk summary}

Suggested next steps:
/productReview — Get founder-lens feedback on the plan
/officeHours — Brainstorm solutions to identified risks
```