Comprehensive research orchestrator that classifies topics, dispatches parallel research-angle workers, and produces structured artifacts using domain-specific schemas.
Comprehensive research orchestrator that classifies a topic, selects a domain-specific schema, plans independent research angles, dispatches parallel workers (or executes sequentially), and produces a structured artifact. Output is always a written artifact -- never inline-only.
Use when:
- The user wants in-depth, multi-angle research on a topic, with a written artifact as the result.

Don't use when:
- A quick factual lookup or a single search would answer the question.
Parse from $ARGUMENTS:
- `--depth`: `surface` | `standard` | `exhaustive`
- `--focus`: e.g. `--focus security` (optional)

Print this banner once at start:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
/deep-research
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Then print step indicators before beginning work:
[1/10] Parsing arguments…
[2/10] Reading context… (if applicable)
[3/10] Classifying topic…
[4/10] Planning research angles…
[5/10] Checking sub-agent availability…
[6/10] Executing research angles…
[7/10] Checking for comparisons… (if applicable)
[8/10] Synthesizing findings…
[9/10] Resolving output target…
[10/10] Writing artifact…

For long-running operations (web fetches, multi-angle research), print a start line and a completion line:
→ Researching: {angle}…
→ Complete.
Keep it concise; don't print a line for every shell command.
[1/10] Parsing arguments…
Parse from $ARGUMENTS:
- `--depth`: `surface` | `standard` (default) | `exhaustive`. Controls breadth and depth of research.
- `--focus`: an angle to prioritize (e.g. `--focus security`).

If the topic is unclear, ask the user for clarification:
Use AskUserQuestion. The prompt should ask: "What topic would you like me to research?"
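A minimal sketch of step 1, assuming $ARGUMENTS arrives as a single string. The flag names match this skill's options, but the regex strategy and the quoted-topic convention are illustrative assumptions, not a prescribed implementation:

```python
import re

def parse_arguments(raw: str) -> dict:
    """Sketch: pull --depth/--focus/--context flags, a quoted topic,
    and an optional trailing output path out of a raw $ARGUMENTS string."""
    args = {"depth": "standard", "focus": None, "context": None,
            "topic": None, "output": None}

    # Extract each flag and strip it from the remaining string.
    for flag in ("depth", "focus", "context"):
        match = re.search(rf"--{flag}\s+(\S+)", raw)
        if match:
            args[flag] = match.group(1)
            raw = raw.replace(match.group(0), "")

    # Whatever remains: a quoted topic plus an optional output path.
    quoted = re.search(r'"([^"]+)"', raw)
    if quoted:
        args["topic"] = quoted.group(1)
        raw = raw.replace(quoted.group(0), "")
    leftover = raw.strip()
    if leftover:
        args["output"] = leftover

    # Unknown depth values fall back to the documented default.
    if args["depth"] not in ("surface", "standard", "exhaustive"):
        args["depth"] = "standard"
    return args
```

For example, `parse_arguments('"Rust vs Go for CLI tools" ~/research/ --depth exhaustive')` yields the topic, the output directory, and `depth == "exhaustive"`.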
[2/10] Reading context…
If --context is specified:
- Read the referenced `.md` file(s) and incorporate them.

Context informs topic classification and angle planning but does not replace the skill's methodology.
If --context is not specified, skip this step.
[3/10] Classifying topic…
Classify the topic to select the appropriate extended schema:
| Classification | Extended Schema | Typical Topics |
|---|---|---|
| Technical | schema-technical.md | packages, libraries, frameworks, tools, languages |
| Comparative | schema-comparative.md | "X vs Y", choosing between options |
| Conceptual | schema-conceptual.md | design patterns, methodologies, concepts, theories |
| Architectural | schema-architectural.md | system design, infrastructure, deployment patterns |
Context (if provided) informs classification -- e.g., context with performance constraints might push a general topic toward technical.
Present classification to the user briefly:
→ Classified as: {type} → using {schema} template
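As a sketch, the classification step could be approximated with keyword cues. In practice it is model judgment; the cue lists below are illustrative assumptions, not part of the skill:

```python
SCHEMAS = {
    "technical": "schema-technical.md",
    "comparative": "schema-comparative.md",
    "conceptual": "schema-conceptual.md",
    "architectural": "schema-architectural.md",
}

def classify_topic(topic: str) -> str:
    """Keyword heuristic mirroring the classification table."""
    t = topic.lower()
    # Comparative cues win first: "X vs Y" topics.
    if " vs " in t or " versus " in t:
        return "comparative"
    if any(cue in t for cue in ("architecture", "system design",
                                "infrastructure", "deployment")):
        return "architectural"
    if any(cue in t for cue in ("pattern", "methodology", "concept",
                                "theory", "principle")):
        return "conceptual"
    # Default bucket: packages, libraries, frameworks, tools, languages.
    return "technical"
```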
[4/10] Planning research angles…
Based on the topic and classification, plan 3-6 research angles. Each angle is an independent research question or perspective that can be explored in isolation.
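A sketch of the angle-count choice by depth and focus; the exact counts picked within each stated range, and the effect of a focus, are assumptions:

```python
def plan_angle_count(depth, focus=None):
    """Pick how many research angles to plan for a given depth setting."""
    counts = {"surface": 3, "standard": 4, "exhaustive": 6}
    n = counts.get(depth, 4)
    # A --focus narrows the spread: one prioritized angle, fewer others.
    if focus is not None:
        n = max(2, n - 1)
    return n
```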
Example angles for "event-driven architecture" (illustrative):
- Core concepts and terminology (events, producers, consumers, brokers)
- Tooling and ecosystem (message brokers, cloud event buses)
- Trade-offs and failure modes (delivery guarantees, ordering, debugging)
- Adoption guidance (when it fits, when simpler patterns suffice)
Angle planning rules:
- If `--focus` is specified, prioritize that angle and reduce others.
- If `--context` is provided, let context shape which angles are prioritized and what each angle looks for.
- At `--depth surface`, use 2-3 angles with lighter research per angle.
- At `--depth exhaustive`, use 5-6 angles with thorough research per angle.

[5/10] Checking sub-agent availability…
→ research workers: {available | not resolved} ({reason})
→ Selected: Execution Tier {1|2|3} — {description}
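The tier fallback chain can be sketched as follows. The two boolean detection inputs are hypothetical names standing in for the provider-specific checks described below:

```python
def select_execution_tier(workers_resolved, can_self_sequence):
    """Map detection results onto the three execution tiers."""
    if workers_resolved:
        return 1, "parallel worker dispatch"   # preferred
    if can_self_sequence:
        return 2, "sequential self-execution"
    return 3, "inline execution"
```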
Execution Tier 1 -- Parallel worker dispatch (preferred):
Detection logic (provider split):
- Agent tool available -- dispatch multiple concurrent Agent tool calls, each with `subagent_type: "general-purpose"` and a structured research prompt. Parallelism comes from concurrent tool invocations.
- `multi_agent = true`

Execution Tier 2 -- Sequential self-execution:
→ research workers: not resolved — falling back to Execution Tier 2 (sequential)

Execution Tier 3 -- Inline execution:
[6/10] Executing research angles…
For each angle, the worker (sub-agent or self) receives:
- relevant context (if `--context` was provided)

Each worker returns structured findings inline to the orchestrator (not written to disk).
Print progress per angle:
→ Researching: {angle}…
→ Complete.
[7/10] Checking for comparisons…
If competing options emerge during research (e.g. "which library for X?"):
- Dispatch `/compare` as a sub-agent with the competing options.
- `/compare` returns its output inline to the orchestrator (no intermediate file, no model-tagged filename).

If no competing options emerged, skip this step.
[8/10] Synthesizing findings…
[9/10] Resolving output target…
If an explicit output path was provided in $ARGUMENTS, use it directly — no prompt.
Otherwise, determine a default suggestion using OAT-aware detection:
- `.oat/` at repo root (project-level OAT) → suggest `.oat/repo/research/`
- `~/.oat/` (user-level OAT) → suggest `~/.oat/research/`

Then ask the user via AskUserQuestion (Claude Code), structured user-input tooling (Codex), or equivalent:
"Where would you like to write the artifact? (default: {suggested path})"
[10/10] Writing artifact…
Write the structured research artifact using:
- Base schema structure from references/schema-base.md (Executive Summary, Methodology, Findings, Sources & References)
- Extended schema sections, based on classification, from references/schema-{type}.md
Artifact frontmatter contract:
---
skill: deep-research
schema: { selected schema type }
topic: '{topic}'
model: { self-detected model identifier }
generated_at: { today's date }
---
Plus optional keys when applicable: context, depth, focus
Model-tagged filename: {topic-slug}-{model-id}.md (e.g., event-driven-architecture-opus-4-6.md)
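A sketch of the filename construction; the slug rule (lowercase, runs of non-alphanumerics collapsed to hyphens) is an assumption consistent with the example above:

```python
import re

def artifact_filename(topic: str, model_id: str) -> str:
    """Build the model-tagged filename {topic-slug}-{model-id}.md."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"{slug}-{model_id}.md"
```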
Reference schemas live in: references/schema-base.md and references/schema-{type}.md
/deep-research "event-driven architecture"
/deep-research "React state management in 2026" --depth exhaustive --focus performance
/deep-research "authentication patterns for microservices" --context docs/security-requirements.md
/deep-research "Rust vs Go for CLI tools" ~/research/
Topic too broad:
- Suggest `--focus` or ask the user to scope down.

No web search available:
Schema classification unclear:
Sub-agent dispatch fails:
→ research workers: not resolved (dispatch failed) — falling back to Execution Tier 2

Output path not writable:
- `--depth`, `--focus`, and `--context` correctly influence research scope and priorities