Multi-phase deep research pipeline that discovers, validates, and synthesizes information from many sources into a comprehensive report. Uses parallel Researcher subagents for maximum coverage and depth. Use this skill whenever the user says "deep research" followed by a topic, when they explicitly invoke it, or when you determine that a question requires thorough, multi-source investigation rather than a quick lookup. This is different from a simple documentation search or a single web query — use this when the user needs a comprehensive understanding of a topic drawn from many sources, cross-validated and synthesized into a structured report. Triggers include: "deep research on X", "research X thoroughly", "I need a full report on X", "dive deep into X", "investigate X comprehensively", or any context where surface-level answers won't cut it and the user (or you) needs authoritative, multi-source coverage of a topic.
A four-phase research pipeline that goes wide first (discover everything), filters for quality (validate and rank sources), goes deep second (parallel agents extract detail from the best sources), and then synthesizes everything into a single structured report.
Before doing any research, ask the user where they want the report delivered (for example, saved to a file or summarized inline) using the AskUserQuestion tool. Wait for their answer before proceeding. Store their choice for Phase 4.
The goal here is breadth — cast the widest possible net before filtering. If the user gave a vague topic, spend a moment refining the research question first. A well-framed question produces dramatically better results.
Spawn a single Researcher subagent (using the Agent tool with subagent_type: "Researcher") with a prompt like:
Research the following topic as broadly as possible: {topic}
Your job is pure discovery — find as many distinct, relevant sources as you can. Use at least 5-6 different search queries to cover different angles. Adapt your query strategy to the type of research topic:
Single technology (e.g., "WebTransport API maturity"):
- Direct name: "WebTransport API"
- Comparison with alternatives: "WebTransport vs WebSockets"
- Production readiness: "WebTransport production readiness 2026"
- Official specs: "W3C WebTransport specification status"
- Community sentiment: "WebTransport developer experience"
Comparison/evaluation (e.g., "Workers vs Lambda@Edge vs Deno Deploy"):
- Head-to-head comparisons: "Cloudflare Workers vs Lambda@Edge"
- Each option independently: "Deno Deploy edge computing review"
- Independent benchmarks: "edge computing platform benchmark 2026"
- Migration/switching experiences: "migrated from Lambda@Edge to Workers"
- Pricing/limits comparisons: "edge computing pricing comparison"
Landscape/survey (e.g., "AI code review tools"):
- Broad landscape: "AI code review tools comparison 2026"
- Specific tools by name (search for each major player individually)
- Integration patterns: "AI code review CI/CD integration"
- Developer experience reports: "AI code review developer experience"
- Critical/contrarian views: "AI code review limitations problems"
- Industry analysis: "AI code review market adoption"
Mix and match these strategies based on the actual topic. The goal is diverse coverage across different angles and source types.
For each source you find, return:
- URL — the full URL
- Title — the page or document title
- Type — what kind of source it is (official docs, blog post, academic paper, GitHub repo, forum thread, news article, vendor page, etc.)
- Publisher — who published this (e.g., "Mozilla/MDN", "Cloudflare blog", "personal blog by [name]", "Company X marketing page")
- Brief description — 1-2 sentences on what this source covers
- Apparent date — any publication or last-updated date you can find
Return the full list as a structured markdown table. Aim for at least 10 (ideally 15 or more) distinct sources. Prioritize diversity — different authors, different types of sources, different angles on the topic. Include at least some sources from each category: official/spec docs, independent analysis, community/developer experience, and (if applicable) vendor-published content.
Now you have a list of sources. Your job is to analytically assess and rank them. Do this yourself (no subagent needed) — you're the one with the full picture.
For each source, score it on four dimensions (1-5 scale each):
| Dimension | What it measures |
|---|---|
| Recency | How recent is the content? Published this year = 5, last year = 4, 2-3 years = 3, older = 2, unknown = 1 |
| Authority | Is this an official source, a respected publication, a known expert? Official docs/specs = 5, reputable tech blog = 4, personal blog with demonstrated expertise = 3, forum thread = 2, unknown = 1 |
| Relevance | How directly does this source address the research topic? Dead-on = 5, closely related = 4, tangential = 3, barely related = 2, off-topic = 1 |
| Independence | How free is this source from commercial bias? Independent research/analysis = 5, editorial tech publication = 4, sponsored but disclosed = 3, vendor blog about own product = 2, vendor marketing page = 1 |
Composite score = Recency + Authority + Relevance + Independence (max 20)
The Independence dimension matters because vendor-published content systematically overstates their own product's capabilities. A vendor's benchmark of their own tool will almost always show better numbers than an independent benchmark of the same tool. This doesn't mean vendor sources are useless — they're often the most detailed — but the report needs to account for that bias.
Sort by composite score descending. Select the top sources — use your judgment on how many, but typically the top 5-8 that scored 13 or above. If fewer than 5 sources score well, include everything above 10. Ensure you have a mix of independent and vendor sources — don't filter out all vendor content, but make sure independent sources outnumber vendor ones.
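The scoring and selection logic above can be sketched as a small program. This is an illustrative sketch only: the dimension names, the 13-point cutoff, the top-5-8 window, and the above-10 fallback come from this document, while the `Source` class and `select_sources` helper are hypothetical names introduced here.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    recency: int       # 1-5
    authority: int     # 1-5
    relevance: int     # 1-5
    independence: int  # 1-5

    @property
    def composite(self) -> int:
        # Composite score = Recency + Authority + Relevance + Independence (max 20)
        return self.recency + self.authority + self.relevance + self.independence

def select_sources(sources: list[Source]) -> list[Source]:
    """Rank by composite score and apply the Phase 2 selection rules."""
    ranked = sorted(sources, key=lambda s: s.composite, reverse=True)
    strong = [s for s in ranked if s.composite >= 13]
    if len(strong) >= 5:
        return strong[:8]  # typically the top 5-8
    # Fewer than 5 strong sources: include everything scoring above 10
    return [s for s in ranked if s.composite > 10]
```

A real pass would also enforce the independent-vs-vendor mix described above, which this sketch omits for brevity.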
Present this ranked table to the user briefly so they can see what you're working with, then proceed to Phase 3.
This is where the real depth comes from. Take your prioritized sources and distribute them across multiple Researcher subagents running in parallel.
How to split the work:
Adapt the grouping strategy to the research topic type:
Comparison topics (e.g., "X vs Y vs Z"): Assign one agent per product/option to go deep on its strengths, limitations, and ecosystem. Add one cross-cutting agent focused on independent benchmarks, migration experiences, and direct comparisons. Example for 3 options: 4 agents (3 per-product + 1 cross-cutting).
Landscape/survey topics (e.g., "how companies handle X"): Group by perspective rather than by tool. Example: one agent on tooling/products, one on real-world practices and case studies, one on developer sentiment and community perspectives.
Single technology topics (e.g., "is X ready for production?"): Group by aspect. Example: one agent on specification/standards status and browser/runtime support, one agent on real-world usage and production experiences, one agent on performance characteristics and limitations.
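When no thematic grouping suggests itself, a plain round-robin split keeps agent workloads balanced and non-overlapping. The helper below is a minimal sketch of that fallback; the function name and the round-robin choice are assumptions made here, and the thematic groupings described above should take precedence when the topic type fits one.

```python
def split_round_robin(urls: list[str], n_agents: int) -> list[list[str]]:
    """Distribute sources across n_agents in similar-sized,
    non-overlapping batches (a fallback when no thematic
    grouping fits the topic)."""
    batches: list[list[str]] = [[] for _ in range(n_agents)]
    for i, url in enumerate(urls):
        batches[i % n_agents].append(url)
    return batches
```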
General rules:
Prompt each Researcher subagent with:
You are conducting deep research on: {topic}
Your assigned sources to investigate thoroughly: {list of URLs with titles and publishers}
For each source:
- Fetch and read the full content using WebFetch
- Extract all key findings, data points, claims, and insights relevant to the topic
- Note any unique information this source provides that others might not
- Flag any claims that seem questionable or contradict other sources
- Check for vendor bias: If this source is published by a company with a commercial interest in the topic (e.g., a cloud provider writing about their own platform, a tool vendor publishing benchmarks of their own tool), explicitly note which claims serve the vendor's interests vs. which appear objective. For example, a vendor benchmark showing their product is "3x faster" should be flagged as vendor-claimed unless independently verified.
Then synthesize across your assigned sources:
- What are the key themes and findings?
- Where do your sources agree?
- Where do they disagree or contradict each other?
- What gaps remain — what questions does this set of sources NOT answer?
- Which claims are independently verified vs. single-source or vendor-only?
Return your findings as structured markdown with clear section headings.
Launch all agents in parallel — use a single message with multiple Agent tool calls so they run concurrently.
Once all Phase 3 agents have returned, you have deep findings from multiple angles. Now synthesize everything into a single coherent report.
Apply cross-referencing discipline when writing it; this is what makes the skill's output better than a single research pass. Structure the report as follows:
# Deep Research Report: {Topic}
*Generated: {date}*
*Sources analyzed: {count}*
## Executive Summary
{3-5 paragraphs covering the most important findings, key takeaways,
and any critical caveats. This should stand alone — someone reading
only this section should walk away informed.}
## Findings by Theme
### {Theme 1}
{Detailed findings, cross-referenced across sources. Include specific
data points, quotes, or examples where they add value. Note where
sources agree and where they diverge. When citing a specific data point,
indicate the source inline, e.g., "latency dropped 35% (Source: Vroble,
Nov 2025)" so readers can trace claims back to origins.}
### {Theme 2}
{...}
### {Theme N}
{...}
## Contradictions and Open Questions
{This section is not optional. Cover:
- Areas where sources directly disagreed (with specifics)
- Claims that could not be independently verified
- Questions that the research did not fully answer
- Vendor claims that lack independent corroboration
This tells the reader where the limits of this research are.}
## Sources
| # | Source | Type | Ind. | Score | Key Contribution |
|---|--------|------|------|-------|-----------------|
| 1 | [Title](URL) | Type | Yes/No | X/20 | What this source uniquely contributed |
| ... | ... | ... | ... | ... | ... |
The "Ind." column marks whether the source is independent (Yes) or vendor-published (No). This lets readers quickly assess the evidence base.
Deliver the report based on the user's Phase 0 choice. If they chose a file, save it to deep-research/{topic-slug}-{date}.md using the Write tool (create the deep-research/ directory in the current working directory if it doesn't exist), then tell the user where the file was saved and provide a brief summary inline.
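One way to build the deep-research/{topic-slug}-{date}.md path can be sketched as below. The slug rules (lowercase, non-alphanumeric runs collapsed to hyphens) and the `report_path` helper are assumptions made for illustration, not specified by this document.

```python
import re
from datetime import date

def report_path(topic: str, on: date) -> str:
    """Build deep-research/{topic-slug}-{date}.md from a raw topic string."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"deep-research/{slug}-{on.isoformat()}.md"
```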