Query-driven codebase exploration engine — ask any question about code and get structural answers through automatic multi-analysis. Internally composes code-journey, dependency-graph, bug-investigation, and other skills. Use when onboarding to unfamiliar code, investigating 'how does this work?', or mapping impact before changes. This is the single entry point for all code understanding tasks.
Ask any question about code and get structural answers. Answers surface new questions, and the tool automatically digs deeper until understanding is complete.
/code-explorer <question or target>
No mode selection needed — the intent is auto-detected and multiple analyses are combined.
/code-explorer "Who calls this API?"
/code-explorer "What breaks if I delete this service?"
/code-explorer "Uncover implicit specs around authentication"
/code-explorer "Understand the payment service"
/code-explorer src/services/payment.ts
The heart of this skill is an exploration loop that does not stop at a single answer.
┌─────────────────────────────────────────────────────────┐
│ EXPLORE LOOP │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Observe │──→│ Orient │──→│ Decide │──→ Act │
│ │ read code│ │ analyze │ │ dig deep │ ↓ │
│ └──────────┘ └──────────┘ └──────────┘ │ │
│ ↑ │ │
│ └─────────────────────────────────────────┘ │
│ │
│ Stop conditions: │
│ - Question has a complete answer │
│ - User says "OK here" │
│ - 3 loops elapsed (auto-stop → confirm to continue) │
│ │
└─────────────────────────────────────────────────────────┘
Iteration 1: Directly answer the question (structural output)
Iteration 2: Automatically dig into risks/unknowns found in the answer
Iteration 3: Check implicit specs and blind spots in surrounding code
After 3 loops, ask the user whether to continue.
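The loop above can be sketched as a simple control loop. This is an illustrative sketch only — `runAnalysis` and `confirmContinue` are hypothetical stand-ins for the skill's real analysis and user-confirmation steps:

```javascript
// Hypothetical sketch of the explore loop: answer the question, queue the
// follow-ups it surfaces, and auto-stop after 3 iterations unless the user
// confirms continuation. All helper functions are stand-ins, not real APIs.
function exploreLoop(question, runAnalysis, confirmContinue, maxLoops = 3) {
  const findings = [];
  let queue = [question];
  let loops = 0;
  while (queue.length > 0) {
    if (loops >= maxLoops && !confirmContinue(loops)) break; // auto-stop gate
    const current = queue.shift();
    const result = runAnalysis(current); // one Observe→Orient→Decide→Act pass
    findings.push(result.answer);
    queue.push(...result.followUps);     // answers surface new questions
    loops += 1;
  }
  return findings;
}
```

With an analysis that returns no follow-ups, the loop completes in a single pass; with open follow-ups it keeps digging until the 3-loop cap.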
Analyze $ARGUMENTS and determine the combination of analyses to run.
| Question Intent | Auto-run Analyses | Internal Skills |
|---|---|---|
| "Who calls X?" "Where does X come from?" | call tree + dependency trace | dependency-graph |
| "What happens if I change X?" "Impact?" | impact analysis + breakage scan | — |
| "When does X break?" | breakage scenarios + implicit spec check | bug-investigation |
| "I want to understand X" "Overview" | execution flow + viewpoint rotation + blind spot | code-journey + dependency-graph |
| File path only | full scan (all analyses) | all |
Existing skills are auto-invoked based on the question. Users do not need to know individual skill names.
Important: run independent analyses simultaneously as parallel agents rather than invoking each skill sequentially.
When this skill is invoked, execute the following pipeline autonomously. No mid-process confirmation needed (only ask after final output whether to go deeper).
Determine investigation scope from $ARGUMENTS.
Input: "Understand the payment service"
→ Scope: src/services/payment/ and its dependencies
→ target name: payment-service (used in filenames)
Input: src/auth/
→ Scope: all files under src/auth/
→ target name: auth
If scope is too broad (100+ files), split into sub-scopes by directory.
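One way the split could work — a sketch only; the 100-file threshold comes from the text above, but the grouping-by-top-level-directory strategy is an assumption:

```javascript
// Hypothetical sketch: when a scope contains 100+ files, group them into
// sub-scopes keyed by top-level directory under the scope root.
function splitScope(files, maxFiles = 100) {
  if (files.length < maxFiles) return { "": files }; // small enough: one scope
  const subScopes = {};
  for (const file of files) {
    const dir = file.split("/")[0];                  // top-level directory
    (subScopes[dir] ??= []).push(file);
  }
  return subScopes;
}
```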
Run 3 parallel investigations using Agent tool, save results as JSON files.
First, create .code-explorer/ directory and meta file:
mkdir -p .code-explorer
Write to .code-explorer/{target}-meta.json:
{ "target": "{display name}", "question": "{user question}", "scope": "{scope}" }
Then launch 3 Agents in parallel. Each Agent is instructed to output JSON only.
Important: Explore agents cannot use Write tool. After receiving each Agent's JSON output, use Write tool yourself (main Claude) to save to file.
Agent 1 (structure) — receive output, Write to .code-explorer/{target}-structure.json:
Investigate dependencies of the following scope and output JSON only (no explanation).
Result format: {"nodes": [...], "edges": [...], "paths": [...]}
Schema: harness-engineering/skills/code-explorer/graph-ir-schema.json
nodes elements:
- id: kebab-case unique identifier
- type: function|event|guard|action|data (no custom types; skills use action)
- label: display name (60 chars max)
- detail: one-line description
- file: relative path from project root
- line: main line number
- layer: architecture layer (see rules below)
- risk: CRITICAL|HIGH|MEDIUM|LOW (only when applicable)
- metadata: { implicitSpecs: [...], breakageScenarios: [...] } (only when applicable)
edges elements:
- source → target is execution order (caller → callee). Do not reverse.
- type: calls|reads|writes|triggers|references|side-effect
- reads = actual import/require/file reads only. Conceptual dependencies use references
paths elements:
- id, name, nodes (array of node ids), type (execution|guard|breakage|error|side-effect)
- risk: paths with unguarded breakage are HIGH or above
Layer classification: routes/api→presentation, services→service, domain/models→domain,
repositories/db→infrastructure, config→data, test→test, scripts→script,
.github→ci, hooks/middleware→hook. If no match, use directory name as-is.
Scope: {scope}
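A minimal example of the shape Agent 1 is expected to return. The ids, file paths, labels, and line numbers here are invented for illustration; the authoritative schema is graph-ir-schema.json:

```javascript
// Hypothetical structure output: two nodes, one call edge, one execution path.
const structure = {
  nodes: [
    { id: "charge-card", type: "function", label: "chargeCard",
      detail: "Charges the stored payment method",
      file: "src/services/payment.ts", line: 42, layer: "service",
      risk: "HIGH",
      metadata: { breakageScenarios: ["NULL: paymentMethod undefined"] } },
    { id: "save-receipt", type: "action", label: "saveReceipt",
      detail: "Persists the receipt row",
      file: "src/repositories/receipt.ts", line: 10, layer: "infrastructure" }
  ],
  edges: [
    // source → target is caller → callee (execution order), never reversed
    { source: "charge-card", target: "save-receipt", type: "calls" }
  ],
  paths: [
    { id: "happy-path", name: "Successful charge",
      nodes: ["charge-card", "save-receipt"], type: "execution" }
  ]
};

// Referential-integrity check: every edge endpoint must be an existing node id.
const ids = new Set(structure.nodes.map(n => n.id));
const danglingEdges = structure.edges.filter(
  e => !ids.has(e.source) || !ids.has(e.target)
);
```

This is the same integrity property the CLI's validate step checks later in the pipeline.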
Agent 2 (behavior) — receive output, Write to .code-explorer/{target}-behavior.json:
Read code in the following scope and output implicit specs and breakage scenarios as JSON (no explanation).
Result format: {"implicitSpecs": [...], "breakagePerNode": {...}}
implicitSpecs elements:
- rule: implicit spec (string)
- source: file:line
- type: convention|design-decision|assumption|platform-behavior|tuning|gap
- risk: risk description (only when applicable)
breakagePerNode: map of node id → array of breakage scenario strings
Categories: NULL/RACE/TIMEOUT/STATE/ORDER
Scope: {scope}
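A minimal example of Agent 2's expected shape. The rule, file locations, and node ids are invented; `breakagePerNode` keys must match node ids from the structure output:

```javascript
// Hypothetical behavior output: one implicit spec plus per-node breakage
// scenarios keyed by node id.
const behavior = {
  implicitSpecs: [
    { rule: "Refunds are only issued within 30 days",
      source: "src/services/payment.ts:88",
      type: "assumption",
      risk: "Hard-coded window; silently diverges if policy changes" }
  ],
  breakagePerNode: {
    "charge-card": [
      "NULL: paymentMethod undefined on guest checkout",
      "RACE: double-submit causes duplicate charge"
    ]
  }
};

// Each breakage scenario should lead with one of the allowed categories.
const categories = ["NULL", "RACE", "TIMEOUT", "STATE", "ORDER"];
const allCategorized = Object.values(behavior.breakagePerNode)
  .flat()
  .every(s => categories.includes(s.split(":")[0]));
```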
Agent 3 (gaps) — receive output, Write to .code-explorer/{target}-gaps.json:
Output untested, unreachable, and undocumented code in the following scope as JSON (no explanation).
Result format: {"blindSpots": [...]}
blindSpots elements:
- type: untested|no-test|dead-code|undocumented|no-owner|single-point|gap|role-unclear|stale-docs
- target: target name
- file: relative path
- description: description
Scope: {scope}
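A minimal example of Agent 3's expected shape, with invented targets and descriptions:

```javascript
// Hypothetical gaps output: two blind spots with names invented for illustration.
const gaps = {
  blindSpots: [
    { type: "untested", target: "processRefund",
      file: "src/services/payment.ts", description: "0% test coverage" },
    { type: "single-point", target: "secrets",
      file: "config/secrets.ts", description: "23 dependents, 1 contributor" }
  ]
};

// Every blind spot type must come from the allowed list above.
const allowedTypes = ["untested", "no-test", "dead-code", "undocumented",
  "no-owner", "single-point", "gap", "role-unclear", "stale-docs"];
const typesValid = gaps.blindSpots.every(b => allowedTypes.includes(b.type));
```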
After all 3 Agents complete, use the CLI merge to integrate into a single Graph IR:
node harness-engineering/skills/code-explorer/explorer-cli.js merge \
.code-explorer/{target}-meta.json \
.code-explorer/{target}-structure.json \
.code-explorer/{target}-behavior.json \
.code-explorer/{target}-gaps.json \
-o .code-explorer/{target}-graph.json
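A sketch of what the merge step does with the four files. This is assumed behavior inferred from the description of merge — the real logic lives in explorer-cli.js:

```javascript
// Hypothetical sketch of merge: structure is the base graph, behavior's
// breakagePerNode folds into each node's metadata, and the implicit-spec and
// blind-spot lists land at the top level alongside the meta fields.
function mergeGraph(meta, structure, behavior, gaps) {
  const nodes = structure.nodes.map(node => ({
    ...node,
    metadata: {
      ...node.metadata,
      breakageScenarios: behavior.breakagePerNode[node.id]
        ?? node.metadata?.breakageScenarios ?? []
    }
  }));
  return {
    ...meta,                               // target / question / scope
    nodes,
    edges: structure.edges,
    paths: structure.paths,
    implicitSpecs: behavior.implicitSpecs, // top level
    blindSpots: gaps.blindSpots            // top level
  };
}
```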
merge automatically:
- uses structure nodes/edges/paths as the base
- folds behavior breakagePerNode into each node's metadata.breakageScenarios
- places behavior implicitSpecs at the top level
- places gaps blindSpots at the top level
Then run the full pipeline:
node harness-engineering/skills/code-explorer/explorer-cli.js pipeline \
  .code-explorer/{target}-graph.json --open
This executes validate → generate → wcag → open in order. If a step fails:
| Failure | Response |
|---|---|
| validate: schema error | Read error message, fix JSON file, re-run from merge |
| validate: referential integrity | orphan node → add edges or remove node; missing node ref → fix id typo |
| wcag: contrast | Fix color definition in template |
| wcag: keyboard/ARIA | Fix ARIA attributes in template |
CLI=harness-engineering/skills/code-explorer/explorer-cli.js
# Merge Agent outputs (filenames abbreviated; actual files live under .code-explorer/)
node $CLI merge meta.json structure.json behavior.json gaps.json -o graph.json
# Full pipeline (validate → generate → wcag → open)
node $CLI pipeline .code-explorer/{target}-graph.json --open
# Individual steps
node $CLI validate .code-explorer/{target}-graph.json
node $CLI generate .code-explorer/{target}-graph.json
node $CLI wcag .code-explorer/{target}-explorer.html --fix
After opening the HTML in browser, display a text summary in terminal:
## Understanding Map: {target}
Nodes: {n} | Edges: {e} | Paths: {p} | Risks: {r_high} HIGH, {r_med} MED
### Top Risks
1. [HIGH] {description} ({file}:{line})
...
### Implicit Specs
1. {rule} — {source}
...
### Follow-up Questions
- [ ] {question}
...
Interactive UI: .code-explorer/{target}-explorer.html
Type 'continue' to explore follow-ups, or ask a new question.
When the user says continue, auto-investigate Follow-up Questions:
Auto-stop after 3 loops and ask user to confirm continuation.
getUserById (src/repository/user.ts:42)
├── caller: UserService.fetchProfile (src/service/user.ts:18)
│ └── caller: ProfileController.get (src/api/profile.ts:25)
└── caller: AdminService.lookupUser (src/service/admin.ts:67)
[Direct] src/service/user.ts:updateUserEmail
├─→ [DB] users.email — unique constraint, may conflict
├─→ [Side Effect] isVerified → false
│ ├─→ [UI] verified badge disappears
│ └─→ [API] verified-only endpoints → 403
└─→ [Event] UserEmailChanged → NotificationService
1. [NULL] user.paymentMethod is undefined
Trigger: Guest checkout without payment setup
Guard: No null check — TypeError at payment.ts:45
2. [RACE] Concurrent payment for same order
Trigger: Double-click submit
Guard: No idempotency key — duplicate charge
Rule 1: Session expires after 30min inactivity
Source: session.ts:23
Contradicts: config.ts SESSION_TTL=3600 (1hr)
Rule 2: Account locks after 5 failed logins
Source: login.ts:67
Lock duration: permanent (no auto-unlock)
UNTESTED: payment.ts:processRefund — 0 coverage
DEAD CODE: legacy_export.ts — no imports
UNDOCUMENTED: src/jobs/ — 4 files, 0 docs
SINGLE POINT: config/secrets.ts — 23 dependents, 1 contributor
| Finding | Auto-run next analysis |
|---|---|
| More callers than expected | Escalate to impact analysis |
| Unguarded breakage | implicit-spec check: is it intentional? |
| Contradicting implicit specs | Trace both code paths with code-journey |
| Untested critical path | Breakage scan for risk assessment |
| Circular dependency | dependency-graph for detailed visualization |
Deep-dive runs automatically but always stops after 3 iterations to confirm with user.
Graph IR accumulates in .code-explorer/. Follow-up questions in the same session add nodes/edges to existing Graph IR (no re-analysis).
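The accumulation step could look like the following sketch — an assumption about how follow-up findings are appended, not the actual implementation:

```javascript
// Hypothetical sketch of follow-up accumulation: new findings are appended to
// the existing Graph IR, skipping nodes whose ids are already present so
// earlier analysis is never redone or duplicated.
function accumulate(graph, newNodes, newEdges) {
  const known = new Set(graph.nodes.map(n => n.id));
  return {
    ...graph,
    nodes: [...graph.nodes, ...newNodes.filter(n => !known.has(n.id))],
    edges: [...graph.edges, ...newEdges]
  };
}
```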
Important implicit specs or design decisions found during exploration are saved to MEMORY.md after user confirmation. Never saved automatically.
Before generating the Interactive HTML, optionally ask the user's hypothesis:
Target: PaymentService
Your prediction first (skippable):
- Where do you think this function's callers are?
- What edge cases do you expect?
(Press Enter to skip and show results directly)
If the user enters a prediction: add to Graph IR as prediction field, overlay "prediction vs reality" in Interactive HTML, highlight nodes/edges that differ from prediction.
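The prediction-vs-reality diff could be computed as in this sketch (function and field names are illustrative, not part of the Graph IR schema):

```javascript
// Hypothetical sketch of the overlay diff: edges the user predicted that do
// not exist in the graph ("missed") and real edges they did not predict
// ("surprising") — the latter are the ones worth highlighting.
function diffPrediction(predictedEdges, actualEdges) {
  const key = e => `${e.source}->${e.target}`;
  const actual = new Set(actualEdges.map(key));
  const predicted = new Set(predictedEdges.map(key));
  return {
    missed: predictedEdges.filter(e => !actual.has(key(e))),
    surprising: actualEdges.filter(e => !predicted.has(key(e)))
  };
}
```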
code-journey: Step-by-step execution flow visualization (used internally)
bug-investigation: Known bug root cause analysis (used in deep-dive)

See gotchas.md in this directory for known pitfalls and recurring mistakes when using this skill.