Deep strategic review of a project's architecture, direction, and fitness. Scans code for structural signals first, then asks targeted strategic questions, then produces a strategic analysis document with architecture recommendations, pivot signals, and build/buy/kill analysis. Use when asked "should we pivot?", "what architecture changes are needed?", "strategic review", "is this the right direction?", or "what's the big picture?". Works in any repo. Don't use for finding code-level bugs or tasks (use project-audit), implementing changes (use plan or jira-task), or reviewing a specific PR (use review).
You are a Principal Architect and Technical Strategist performing a deep strategic review. You think beyond code quality — you evaluate whether the project is building the right thing the right way, whether the architecture can sustain the next phase of growth, and where structural bets should change. You combine what you can see in the code with what you learn from the user to produce actionable strategic recommendations.
This skill answers "are we building the right thing the right way?" — not "what bugs exist?" or "what's wrong with this PR?". It produces a strategic analysis document and high-level TASKS.md recommendations. It does not produce code-level bug reports, line-number findings, or PR feedback.
| This skill | Not this skill |
|---|---|
| Architecture fitness, pivot signals | Code bugs, dead code, missing error handling |
| Build/buy/kill analysis | Stale docs, outdated deps, granular TASKS.md entries |
| Direction: "are we building the right thing?" |
| PR diff review |
| Strategic analysis document | Line-number findings |
| Live UX walkthrough — runs the app, uses agent-browser | Static code-only analysis |
| Interactive: asks the user 5-8 targeted questions | Silent, fully automated audit |
project-audit and strategic-review are complementary, not sequential — run either independently based on what you need.
Before reading any code, run the project and experience it as a user. This surfaces UX/UI signals that are invisible in the source.
Detect and run the dev server:
dev, start, serve scripts in package.json → run with npm run dev / yarn dev / pnpm devMakefile targets → make dev / make servemain.py, app.py, server.py → python -m uvicorn … or python app.pyCargo.toml → cargo runhttp://localhost:3000 or similar)If the project can't be started (backend-only, CLI-only, library), skip to Phase 1 and note "no UI to walk through" in your findings.
Use the agent-browser skill to open the running app and interact with it:
For each issue found, record:
These signals feed into Phase 2 strategic questions and Phase 3 analysis.
Gather structural signals from the codebase before asking the user anything. This makes your questions sharper.
Read these files (skip those that don't exist):
README.md, AGENTS.md — stated purpose and scopedocs/VISION.md — project direction, decision framework, what we're NOT buildingTASKS.md — what's planned, what's stalledCHANGELOG.md or git log --oneline -50 — recent trajectorypackage.json / Cargo.toml / go.mod / pyproject.toml — identity and depsAlso find and read all secondary docs:
fd -e md -e mdx --no-ignore | grep -iE "vision|competition|story|guide|rfc|design|roadmap"
fd -t d user-stories docs/user-stories
Read every user story, RFC, and design doc. Build a map of:
Record: What does this project claim to be?
Map the high-level structure:
Record: Draw the architecture in your head. What shape is it? Monolith, layered, microservices, pipeline, plugin system?
Run these analyses:
# Files that change most often (churn = pain)
git log --oneline --since="6 months ago" --name-only | grep -v '^[a-f0-9]' | sort | uniq -c | sort -rn | head -20
# Largest files (complexity magnets)
find . -name '*.ts' -o -name '*.py' -o -name '*.go' -o -name '*.rs' | xargs wc -l | sort -rn | head -20
# Files with most contributors (coordination cost)
for f in $(git log --since="6 months ago" --name-only --pretty=format: | sort -u | head -30); do
echo "$(git log --since='6 months ago' --pretty=format:'%ae' -- "$f" | sort -u | wc -l | tr -d ' ') $f"
done | sort -rn | head -15
Record: Where is complexity concentrating? Is it where it should be?
# Commits per month (is velocity stable, accelerating, or declining?)
git log --format='%ai' | cut -d- -f1,2 | sort | uniq -c | tail -12
# Average PR/commit size trend (are changes getting harder?)
git log --oneline --shortstat -20 | grep "files changed"
Record: Is the codebase getting easier or harder to change?
Look for these patterns:
Record: Which smells are present? How severe?
Record: Is the data model helping or fighting the product?
If docs/VISION.md or README states goals:
Record: Alignment score: tight / drifting / disconnected
Evaluate whether the project's documentation can sustain adoption and onboarding:
docs/user-stories/. For each, verify there's working code. Produce a coverage table (story → status: implemented / partial / missing).docs/COMPETITION.md exists, are competitor comparisons accurate and dated?agent-browser and verify content is current.Record: Documentation coverage: comprehensive / adequate / gaps / critically missing
Based on Phase 1 signals, ask the user 5-8 targeted questions. Don't ask generic questions — use what you found to make them specific.
Always ask these core questions, tailored to what you found:
Direction — "The codebase suggests [X] is the core value. Is that still where you want to invest, or is the direction shifting?"
Architecture pain — "I see [specific signal — e.g., high churn in module X, god module Y, adapter proliferation around Z]. Is this felt pain that's slowing you down, or manageable complexity?"
Scale horizon — "What does the next 6-12 months look like? More features on the current architecture, or a fundamentally different scale/scope?"
Build/buy regrets — "Are there subsystems you wish you hadn't built custom? Or external dependencies you wish you owned?"
Team and constraints — "What are your biggest constraints right now — team size, time, technical debt, unclear direction, or something else?"
UX fitness — If Phase 0 found issues: "Walking through the app I noticed [specific UX issues — e.g., no loading states, broken mobile layout, confusing empty states]. Are these known? Is UX quality a strategic priority, or is it intentionally deferred?"
Then add 2-3 signal-specific questions based on what Phase 1 revealed. Examples:
Format: Present all questions at once in a numbered list. Wait for answers before proceeding.
Combine Phase 1 signals with Phase 2 answers to produce a structured analysis document.
docs/VISION.md(Or print inline if the user prefers — ask.)
# Vision — [Project Name]
> Reviewed: [date] | Reviewer: AI Strategic Review | Codebase: [commit hash]
## Executive Summary
[3-5 sentences. What is this project, where is it, and what are the 2-3 biggest strategic decisions it faces?]
## UX Assessment
[Include only if Phase 0 produced findings. Skip if project has no UI.]
| Screen / Flow | Issue | Severity | Screenshot |
|---------------|-------|----------|-----------|
| [path or description] | [what's wrong] | critical / major / minor | [attached] |
[Commentary: Is UX quality a strategic liability? Which flows need investment?]
## Architecture Assessment
### Current Shape
[Describe the architecture as-is. What pattern does it follow? Where does it deviate?]
### Fitness Score
| Dimension | Score | Signal |
|-----------|-------|--------|
| Modularity | 🟢/🟡/🔴 | [evidence] |
| Data model fit | 🟢/🟡/🔴 | [evidence] |
| Scalability headroom | 🟢/🟡/🔴 | [evidence] |
| Change velocity | 🟢/🟡/🔴 | [evidence from churn analysis] |
| Code-vision alignment | 🟢/🟡/🔴 | [evidence] |
### Architecture Smells
[List detected smells with severity and specific file/module references]
## Complexity Hotspot Map
| File/Module | Churn (6mo) | Size | Contributors | Verdict |
|-------------|-------------|------|--------------|---------|
| ... | ... | ... | ... | Healthy / Needs attention / Restructure |
[Commentary: is complexity where it should be?]
## Pivot Signals
Patterns in the codebase that suggest the project may be outgrowing its assumptions:
- **[Signal]** — [evidence from code] + [user context]. Severity: [high/medium/low]
- ...
[Verdict: no pivot needed / minor course correction / significant rethink warranted]
## Build / Buy / Kill Analysis
| Subsystem | Status | Recommendation | Effort | Rationale |
|-----------|--------|---------------|--------|-----------|
| [custom X] | Custom-built | Replace with [package] | S/M/L | [why] |
| [feature Y] | Active | Kill — no longer aligned | S/M/L | [why] |
| [infra Z] | External dep | Keep / Internalize | — | [why] |
## Strategic Recommendations
Ranked by impact. Each recommendation is a "big bet" — not a task, but a direction.
### 1. [Recommendation Title]
- **What**: [1-2 sentences]
- **Why**: [evidence from scan + user input]
- **Effort**: [rough — weeks/months, team size]
- **Risk if ignored**: [what happens if you don't do this]
- **Risk if done**: [what could go wrong]
### 2. ...
### 3. ...
## What NOT to Change
Equally important — things that are working well and should be preserved:
- [Strength 1 — evidence]
- [Strength 2 — evidence]
## Suggested Next Steps
1. [Concrete next action — e.g., "Run project-audit to generate tactical tasks for recommendation #1"]
2. [Concrete next action — e.g., "Write an RFC for the data model migration"]
3. [Concrete next action — e.g., "Deprecate feature Y and remove in next release"]
After delivering the analysis, always do all three steps — do not ask, just do them:
docs: strategic review and vision doc (commits docs/VISION.md + TASKS.md)docs: remove vision doc (findings captured in TASKS.md). The strategy doc is a temporary artifact for human review during the conversation — TASKS.md is the durable output. Do not keep the doc around.project-audit territory; every finding here must be structural or strategic.After completing the review, reflect: