Task-typed codebase exploration. Classifies the task and returns tuned subagent prompts instead of generic ones. Use when exploring a codebase before implementation — standalone or from any workflow.
Used by: claude-flow Phase 2 (direct), /research skill (task classification for researcher selection)
This skill is used internally by claude-flow during Phase 2 (Exploration). Rather than dispatching generic explorer subagents, it first classifies the task type, then selects targeted prompts from the prompt library that are tuned for that category of work.
The result is exploration that surfaces the exact context each task type needs — rather than broadly scanning the codebase and hoping the right patterns turn up.
claude-flow dispatches 2–3 explorer subagents in parallel during Phase 2. Before constructing those subagent prompts, consult this skill to:
1. Classify the task into one of the categories below.
2. Select the matching explorer prompts for that category from prompt-library.md directly.
3. Fill in the [FEATURE], [AREA], and [TARGET] placeholders from the user's request, then dispatch.

The prompt optimization system A/B tests explorer prompts to find which wording produces the best exploration results. Before dispatching:
# Select variant for Explorer A
python3 ~/.claude/scripts/prompt-tracker.py select explorer <category> A
# Select variant for Explorer B
python3 ~/.claude/scripts/prompt-tracker.py select explorer <category> B
Each returns {"variant_id": "...", "prompt": "..."}. Use the returned prompt instead of the one in prompt-library.md. Record the variant_id so outcomes can be attributed later.
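A minimal sketch of calling the selector and handling failure — the wrapper function and its error handling are illustrative, not part of the actual workflow:

```python
import json
import os
import subprocess

def select_variant(category: str, role: str) -> dict:
    """Call prompt-tracker.py and parse its JSON output.

    Returns {} on any failure so the caller can fall back to
    prompt-library.md (returning an empty dict is this sketch's
    convention, not the script's).
    """
    script = os.path.expanduser("~/.claude/scripts/prompt-tracker.py")
    try:
        out = subprocess.run(
            ["python3", script, "select", "explorer", category, role],
            capture_output=True, text=True, check=True,
        ).stdout
        # Expected shape: {"variant_id": "...", "prompt": "..."}
        return json.loads(out)
    except (OSError, subprocess.CalledProcessError, json.JSONDecodeError):
        return {}
```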
After Phase 2 completes: Record which files each explorer found. Store this list — it will be compared against files actually used in Phase 5 to compute the exploration quality score.
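The scoring formula is not specified here; one plausible sketch, assuming quality means the fraction of implementation files that exploration surfaced in advance:

```python
def exploration_quality(files_found, files_used_in_impl):
    """One possible exploration quality score (the real formula may differ):
    of the files actually used in Phase 5, what fraction did exploration find?"""
    used = set(files_used_in_impl)
    if not used:
        return 0.0
    return len(set(files_found) & used) / len(used)
```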
After Phase 5 completes: Record outcome via:
python3 ~/.claude/scripts/prompt-tracker.py record '{
"session_id": "<session>",
"task_category": "<category>",
"variant_id": "<variant_id>",
"explorer_role": "<A or B>",
"files_found": ["file1.py", "file2.py"],
"files_used_in_impl": ["file1.py", "file3.py", "file4.py"],
"phase5_retries": 2,
"plan_steps": 8
}'
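If you assemble the record payload programmatically, a small helper keeps the field names consistent with the schema above (the helper itself is illustrative, not part of the skill):

```python
import json

def build_record_payload(session_id, category, variant_id, role,
                         files_found, files_used, retries, plan_steps):
    """Serialize a record payload using the field names documented above."""
    return json.dumps({
        "session_id": session_id,
        "task_category": category,
        "variant_id": variant_id,
        "explorer_role": role,
        "files_found": files_found,
        "files_used_in_impl": files_used,
        "phase5_retries": retries,
        "plan_steps": plan_steps,
    })
```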
Fallback: If prompt-tracker.py is not available or errors, use the prompts from prompt-library.md directly. The optimization system is additive — the workflow works without it.
If 2 prompts are listed for a category, dispatch both in parallel. If only 1 is listed, dispatch it as a single explorer.
When the swarm registry exists at .claude/swarm/agent-registry.json, rank prompt variants by findings_used_rate before selecting which to dispatch:

1. Look up each candidate's variant_id in the registry (variant IDs are tagged in prompt-library.md under each explorer heading).
2. Sort by findings_used_rate descending. Dispatch the top-ranked variant as Explorer A and the second-ranked as Explorer B.
3. If the registry is missing or has no data for these variants, fall back to the order in prompt-library.md (Explorer A first, Explorer B second).

Registry lookups are read-only during variant selection. Record dispatched events after dispatch using the variant_id returned — this is what feeds the registry data used in future selections.
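A sketch of the read-only ranking step, assuming the registry maps each variant_id to an object containing findings_used_rate (the real registry schema may differ):

```python
import json
from pathlib import Path

def rank_variants(candidates, registry_path=".claude/swarm/agent-registry.json"):
    """Rank candidate variant IDs by findings_used_rate, best first.

    Read-only: never writes the registry during selection. Falls back to
    the given (prompt-library.md) order when the registry is missing;
    variants absent from the registry score 0.0 and keep their order.
    """
    path = Path(registry_path)
    if not path.exists():
        return list(candidates)
    registry = json.loads(path.read_text())
    return sorted(
        candidates,
        key=lambda v: registry.get(v, {}).get("findings_used_rate", 0.0),
        reverse=True,
    )
```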
| Category | When to Use |
|---|---|
| endpoint | Adding or modifying API routes, controllers, handlers, or REST/GraphQL endpoints |
| ui | Frontend or template changes — components, pages, CSS, state management |
| data | Models, database migrations, queries, schema changes, ORM relationships |
| integration | Connecting to or modifying external APIs, webhooks, third-party services |
| refactor | Restructuring existing code without changing behavior — moving, renaming, abstracting |
| bugfix | Debugging a defect — tracing an error, unexpected behavior, or regression |
| config | Configuration, environment variables, infrastructure, deployment changes |
| exploration | Spike, prototype, or proof-of-concept — validate a hypothesis with minimal overhead. Lighter prompts focused on feasibility and key risks rather than full production patterns |
| general | Fallback when the task doesn't fit a specific category |
Use both signals together: mentions of routes, handlers, or API paths suggest endpoint; templates/components suggest ui; models/migrations suggest data; external service calls suggest integration. When both signals agree, classification is confident. When they conflict or the request is ambiguous, default to general.
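As an illustration only, the keyword signal can be sketched as a first-match lookup — the keyword lists here are examples, not the skill's actual heuristics:

```python
def classify(request: str) -> str:
    """Illustrative keyword-based classifier; the real classification
    combines request wording with the files the task would touch."""
    text = request.lower()
    signals = [
        ("endpoint", ["endpoint", "route", "controller", "handler"]),
        ("ui", ["component", "template", "css", "page", "frontend"]),
        ("data", ["model", "migration", "schema", "column", "query"]),
        ("integration", ["webhook", "third-party", "external api"]),
        ("bugfix", ["bug", "error", "regression", "500", "fails"]),
        ("refactor", ["refactor", "rename", "extract", "restructure"]),
        ("config", ["env", "config", "deploy", "infrastructure"]),
        ("exploration", ["spike", "prototype", "proof-of-concept"]),
    ]
    for category, keywords in signals:
        if any(k in text for k in keywords):
            return category
    return "general"  # ambiguous request: default to general
```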
Examples:

- "Add a /payments/refund endpoint" → endpoint
- "Users get a 500 when retrying a refund" → bugfix
- "Add a refunded_at column to invoices" → data
- "Extract the duplicated refund validation into a shared helper" → refactor
- "Send refund events to the payment provider's webhook" → integration
- "Restyle the refund confirmation dialog" → ui
- "Move the refund API key into an environment variable" → config
- "Spike: can refund status stream over WebSockets?" → exploration
- "Prototype a bulk-refund flow to validate feasibility" → exploration
- "Improve the payments code" → general

Each prompt in the library contains one or more placeholders:
- [FEATURE] — the specific feature or capability being added/changed
- [AREA] — the area of the codebase (e.g., "billing", "dashboard", "auth")
- [TARGET] — the specific function, class, or module being refactored or debugged

Always fill these in from the user's request before dispatching. If the request is vague, use the most specific description available (e.g., "the payments module" rather than "some code").
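Placeholder filling can be a plain string substitution; leaving an unfilled placeholder visible makes a vague dispatch easy to spot (this helper is a sketch, not part of the skill):

```python
def fill_prompt(template: str, feature: str = "", area: str = "", target: str = "") -> str:
    """Substitute library placeholders with specifics from the user's request.

    Unfilled placeholders are deliberately left in place so they stand out
    during review instead of silently disappearing.
    """
    for placeholder, value in (("[FEATURE]", feature), ("[AREA]", area), ("[TARGET]", target)):
        if value:
            template = template.replace(placeholder, value)
    return template
```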
See prompt-library.md in this skill directory for the complete set of subagent dispatch prompts organized by category.
If the category is unclear after reviewing both signals, use the general prompts. Do not guess aggressively — a well-scoped general exploration is more useful than a narrowly-scoped exploration aimed at the wrong category.