Structured approaches for deep and wide research. Deep Research (5 phases) finds answers. Wide Research (4 phases) finds questions. Research Synthesis (5 phases) finds patterns across sources. Research builds understanding; it doesn't just collect information.
Version: 1.2
Author: Tres Pies Design
Purpose: Structured approaches for deep and wide research that produce actionable understanding, not just information.
Research isn't collecting information — it's building a mental model that enables decisions. If research doesn't change what you'd do next, it hasn't worked.
Three modes serve one goal: understanding that enables a decision.
They are complementary. Wide research identifies where to dig. Deep research does the digging. Synthesis reveals what no single source can show. See the research-synthesis skill for the synthesis workflow.
| Criteria | Deep Research | Wide Research | Research Synthesis |
|---|---|---|---|
| Scope | 1-3 related topics | 10-50+ topics | 3+ existing files |
| Sources per topic | 5-10+ | 1-3 | All provided files |
| Output | Detailed report with recommendation | Landscape map with opportunity matrix | Theme-based synthesis with cross-reference matrix |
| Use when | Decision depends on technical details | Exploring a new problem space | You have multiple files and need to find patterns across them |
Help the user narrow from a topic to a specific research question. A good research question is answerable, bounded, and actionable.
Bad: "What is the best database?" Good: "Which embedded database supports vector search with <100ms P95 latency for our Go backend?"
Questions to answer:
Output: Research brief
## Research Brief
**Question:** [The core question]
**Decision:** [What this research will inform]
**Scope:**
- In scope: [Topics to explore]
- Out of scope: [Topics to exclude]
**Success Criteria:**
- [ ] [Criterion 1]
- [ ] [Criterion 2]
**Timeline:** [Expected duration]
Timeline: 2-8 hours for a full deep research cycle.
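If you keep briefs as structured data rather than prose, the brief can double as a scope guard for the later phases. A minimal sketch, assuming a hypothetical `ResearchBrief` class whose fields simply mirror the template above:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchBrief:
    """Illustrative structure mirroring the Research Brief template above."""
    question: str                 # the core, answerable question
    decision: str                 # the decision this research informs
    in_scope: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)
    timeline: str = "2-8 hours"

    def allows(self, topic: str) -> bool:
        """Scope guard: should this topic be chased in the current cycle?"""
        t = topic.lower()
        if any(x.lower() in t for x in self.out_of_scope):
            return False
        return any(x.lower() in t for x in self.in_scope)

brief = ResearchBrief(
    question="Which embedded database supports vector search with <100ms P95 latency?",
    decision="Pick the vector store for the next backend release",
    in_scope=["embedded databases", "vector search"],
    out_of_scope=["cloud provider pricing"],
)
assert brief.allows("vector search benchmarks")
assert not brief.allows("cloud provider pricing comparison")
```

The `allows` check is the code version of the scope-creep rule in the anti-patterns below: note out-of-scope finds, don't chase them.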
Identify 5-10 high-quality sources. If a knowledge base is connected, search internal docs first.
Methods:
Quality Filters:
Output: Source list with relevance annotations
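A sketch of what the annotated source list can look like as data, with assumed field names and arbitrary quality thresholds:

```python
from dataclasses import dataclass

@dataclass
class SourceCandidate:
    title: str
    author: str
    year: int
    relevance: int    # 1 (tangential) to 5 (directly answers the question)
    annotation: str   # one line on why it made the list

def shortlist(candidates: list[SourceCandidate],
              min_relevance: int = 3, min_year: int = 2020,
              limit: int = 10) -> list[SourceCandidate]:
    """Apply the quality filters, then keep the 5-10 strongest sources."""
    kept = [c for c in candidates
            if c.relevance >= min_relevance and c.year >= min_year]
    kept.sort(key=lambda c: (c.relevance, c.year), reverse=True)
    return kept[:limit]
```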
For each source, extract structured notes:
Use web search and web fetch tools aggressively.
Note-Taking Template:
## Notes: [Source Title]
**Main Argument:** [1-2 sentences]
**Key Insights:**
- [Insight 1]
- [Insight 2]
**Evidence:**
- [Data point, study, or example]
**Disagreements:**
- [How this contradicts other sources]
**Open Questions:**
- [What this source doesn't address]
**Relevance:** [How this informs the research question]
Cross-reference findings across sources:
Build a recommendation grounded in evidence.
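As a rough sketch of the cross-referencing step (the data here is invented purely for illustration): group per-source claims by theme and flag themes where sources disagree, so the recommendation rests on the consistent themes and the stress-test targets the contested ones.

```python
from collections import defaultdict

# Per-source findings from Phase 3, reduced to (theme, source, stance) rows;
# stance marks whether the source supports or contradicts the emerging answer.
findings = [
    ("latency",    "Source A", "supports"),
    ("latency",    "Source B", "contradicts"),
    ("operations", "Source A", "supports"),
    ("operations", "Source C", "supports"),
]

stances: dict[str, set[str]] = defaultdict(set)
sources: dict[str, set[str]] = defaultdict(set)
for theme, source, stance in findings:
    stances[theme].add(stance)
    sources[theme].add(source)

for theme in stances:
    status = "CONTESTED" if len(stances[theme]) > 1 else "consistent"
    print(f"{theme}: {status} across {len(sources[theme])} sources")
```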
Stress-test the recommendation:
What domain, market, or technology space? What are the boundaries?
Questions to answer:
Output: Landscape brief
Survey 15-20 sources quickly. For each:
Prioritize breadth over depth. Spend 5-10 minutes per source max.
Output: Tagged source list
Identify:
Cluster findings into categories with maturity assessment (Emerging / Growing / Mature).
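One possible heuristic for the maturity label (the thresholds are arbitrary assumptions, not part of this skill) counts how many sources mention a theme and how recent those mentions are:

```python
def maturity(mention_years: list[int], current_year: int = 2026) -> str:
    """Rough maturity label from how many sources mention a theme and when."""
    if not mention_years:
        return "Unknown"
    recent = sum(1 for y in mention_years if current_year - y <= 2)
    if len(mention_years) >= 8 and recent <= len(mention_years) // 2:
        return "Mature"    # broad coverage, mostly older material
    if len(mention_years) >= 4:
        return "Growing"
    return "Emerging"      # only a handful of (typically recent) mentions

print(maturity([2024, 2025, 2026]))  # Emerging
```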
Map findings on a 2x2 grid:
                  High Impact
                       |
       Quick Wins      |      Big Bets
                       |
   ────────────────────┼────────────────────
                       |
       Fill Later      |      Avoid
                       |
                  Low Impact

   High Feasibility  ←──────→  Low Feasibility
Identify the high-impact, high-feasibility quadrant.
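The same grid can be expressed as a tiny classifier if each approach is scored, say on a 1-5 scale (the scale and midpoint are assumptions, not prescribed by this skill):

```python
def quadrant(impact: int, feasibility: int, midpoint: int = 3) -> str:
    """Place an approach on the 2x2 grid from 1-5 impact/feasibility scores."""
    if impact >= midpoint and feasibility >= midpoint:
        return "Quick Wins"   # high impact, high feasibility: do first
    if impact >= midpoint:
        return "Big Bets"     # high impact, low feasibility: sequence later
    if feasibility >= midpoint:
        return "Fill Later"   # low impact, easy: backlog
    return "Avoid"

assert quadrant(impact=5, feasibility=4) == "Quick Wins"
assert quadrant(impact=5, feasibility=2) == "Big Bets"
```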
Use when:
Process:
Timeline: 1-2 days
The Problem: Needed to explore the landscape of agent orchestration platforms, sandboxing approaches, and skill distribution systems to inform the Dojo v2.0 architecture direction.
The Process (Wide Research):
The Outcome: The wide research produced the 4-era roadmap for Dojo v2.0. The CAS "Quick Win" became Era 1 (shipped). The WASM "Big Bet" became Era 3 (planned). Without the opportunity matrix, the team would have started with WASM (the exciting bet) instead of CAS (the pragmatic foundation).
Key Insight: The opportunity matrix forced a sequencing decision that intuition alone wouldn't have produced. Building the boring foundation first (CAS) de-risked the exciting bet later (WASM).
Problem: Starting focused ("which database for vector search?") but ending scattered ("let me also compare all cloud providers and their pricing...").
Solution: Define boundaries in Phase 1 and enforce them. When something interesting but out-of-scope appears, note it for a future research cycle — don't chase it now.
Problem: Only seeking sources that support your initial intuition, ignoring contradicting evidence.
Solution: After forming a recommendation (Phase 4), actively search for the strongest counterargument (Phase 5). If you can't find one, you haven't looked hard enough.
Problem: Reading forever without synthesizing. 20 sources become 30, become 50, and you still haven't produced a recommendation.
Solution: Time-box each phase. When in doubt, synthesize what you have. A recommendation with 5 sources and caveats is more useful than 50 unsynthesized bookmarks.
Problem: Reading titles and abstracts but not engaging with the actual arguments and evidence.
Solution: For deep research, take structured notes per source (Phase 3). The note-taking process forces engagement.
Problem: Producing a beautiful research document that doesn't inform any specific decision.
Solution: Every research cycle must answer: "What will I do differently because of this research?" If the answer is "nothing," the research failed.
- research-synthesis — Dedicated skill for synthesizing 3+ research files into cross-referenced insights
- strategic-scout — Strategic scouting often triggers deep or wide research on specific tensions
- context-ingestion — Routes research files to the appropriate processing mode
- seed-extraction — Extract reusable patterns from research insights into persistent seeds
- retrospective — Review research effectiveness during sprint retrospectives

## Research: [Topic]
**Question:** [Core research question]
**Date:** [Date]
### Sources Reviewed
| # | Source | Author/Org | Year | Relevance | Key Contribution |
|---|--------|-----------|------|-----------|-----------------|
| 1 | [Title] | [Author] | [Year] | High/Med/Low | [One sentence] |
### Key Findings
#### Finding 1: [Theme]
**Evidence:** [What supports this]
**Confidence:** High/Medium/Low
**Implication:** [What this means for the decision]
### Synthesis
[Cross-referenced analysis of findings]
### Recommendation
[Action to take based on findings]
- **Rationale:** [Why this is the best choice]
- **Risk:** [What could go wrong]
- **Mitigation:** [How to address the risk]
### Confidence Level
**[High/Medium/Low]** — [Rationale for confidence assessment]
### Open Questions
- [Questions that need further research]
## Landscape: [Domain]
**Domain:** [Problem space]
**Date:** [Date]
### Source Scan
| # | Source | Summary | Relevance (1-5) | Key Takeaway |
|---|--------|---------|-----------------|--------------|
| 1 | [Title] | [One sentence] | [Rating] | [Insight] |
### Patterns & Themes
#### Theme 1: [Name]
**Evidence:** [Sources that support this]
**Maturity:** Emerging / Growing / Mature
**Implication:** [What this suggests]
### Opportunity Matrix
| Approach | Impact | Feasibility | Risk | Priority |
|----------|--------|-------------|------|----------|
| [Approach 1] | High/Med/Low | High/Med/Low | High/Med/Low | 1-5 |
### Recommended Focus Areas
1. **[Area 1]:** [Why this is promising] — **Next step:** [Action]
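If you prefer the Priority column in the opportunity matrix above to be derived rather than assigned by feel, one possible weighting (purely illustrative) folds the three ratings into a single score:

```python
SCORE = {"High": 3, "Med": 2, "Low": 1}

def priority(impact: str, feasibility: str, risk: str) -> int:
    """Fold High/Med/Low ratings into a 1-5 priority; 1 means pursue first."""
    raw = SCORE[impact] + SCORE[feasibility] + (4 - SCORE[risk])  # 3 (worst) to 9 (best)
    return max(1, min(5, 10 - raw))

print(priority("High", "High", "Low"))  # 1 -> strong candidate
print(priority("Low", "Low", "High"))   # 5 -> avoid
```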
Token Savings: ~2,000-4,000 tokens per research session (structured approach prevents re-reading and wandering)
Quality Impact: Ensures research is focused, comprehensive, and actionable
Maintenance: Update when new research patterns emerge
Last Updated: 2026-04-06
Status: Active