Unified research — auto-detects mode from input (general topics, PRD questions, best practices, framework docs, institutional learnings, codebase analysis). Single entry point replacing all prior research skills.
Note: The current year is 2026. Use this when searching for recent documentation and patterns.
One entry point for all research. Auto-detects what kind of research you need and runs the right strategy. Supports combining modes when a question spans multiple domains.
Tools:

- Web: WebSearch/WebFetch; Codex: web.run (search_query, then open)
- Docs: Ref first, then Context7, then web.run for gaps
- Files: Glob; Codex: use shell search (rg --files, find, ls)

Announce at start: "I'm using the dev-research skill to research this topic and present findings."
| Mode | Triggers | What It Does |
|---|---|---|
| General | Standalone topic, evaluation, feasibility, "is X ready for production?" | Web research with max 5 searches, cross-reference validation |
| PRD | File path to PRD, "resolve open questions", "investigate questions" | Categorize questions, parallel subagents per investigatable question, PRD update |
| Best Practices | "best practices", "patterns", "conventions", "how should we..." | Skills-first → Ref → Context7 → web search. Deprecation check mandatory. |
| Framework Docs | Named framework/library, API usage, "how does X work", version questions | Deprecation check FIRST → Ref → Context7 → version-specific docs → source code |
| Learnings | "check learnings", "what do we know about", --learnings, past solutions | qmd vault search + ~/docs rg for institutional knowledge |
| Codebase | "understand this repo", --codebase, "what patterns does this project use" | Architecture docs → templates → ast-grep/rg for patterns |
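The Learnings and Codebase rows both lean on `rg` over local files. A minimal sketch of that lookup, assuming `rg` may be absent — the keyword pattern and the `grep -rEil` fallback are illustrative choices, not part of this skill:

```shell
# Case-insensitive keyword search over a docs directory, listing matching files.
# Prefers rg; falls back to recursive extended grep when rg is not installed.
search_docs() {
  dir=$1
  pattern=$2
  if command -v rg >/dev/null 2>&1; then
    rg -il "$pattern" "$dir"
  else
    grep -rEil "$pattern" "$dir"
  fi
}

# Example (hypothetical keywords):
#   search_docs "$HOME/docs" '(retry|backoff|idempotency)'
```

Either branch prints one matching file path per line, so downstream steps can treat the output uniformly.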
Auto-detection heuristics (applied in order):
- `.md` file path → check whether it's a PRD (has an Open Questions section) → PRD mode
- `--mode=X` or `--mode=X,Y` → use the specified mode(s)
- `--learnings` → include Learnings mode
- `--codebase` → include Codebase mode

Multi-mode: modes can combine, e.g. `--mode=general,learnings`, or auto-detected when a topic spans domains.
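The detection heuristics above can be sketched as a small dispatcher, assuming a single argument; the mode names are the ones from the table, and the PRD branch is simplified (the real heuristic also inspects the file for an Open Questions section):

```shell
# Map one CLI argument to a research mode (illustrative sketch).
detect_modes() {
  arg=$1
  case "$arg" in
    *.md)        echo "prd" ;;                 # simplified: should also check for an Open Questions section
    --mode=*)    printf '%s\n' "${arg#--mode=}" ;;  # explicit mode(s), possibly comma-separated
    --learnings) echo "learnings" ;;
    --codebase)  echo "codebase" ;;
    *)           echo "general" ;;             # default for standalone topics
  esac
}
```

For example, `detect_modes --mode=general,learnings` yields the combined mode list, while a bare question falls through to `general`.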
Flags: `--mode`, `--learnings`, `--codebase`

| Output | When |
|---|---|
| Report file | Deep research, reference material → docs/research/YYYY-MM-DD-<topic>.md |
| PRD update | PRD mode — findings update the PRD directly |
| Quick answer | Simple question, conversation reply is enough |
If the user doesn't specify, default to quick answer for single-mode simple queries, report file for multi-mode or deep research.
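The report-file naming convention can be sketched in shell; the topic string here is a made-up example, and the slugging rule (lowercase, spaces to hyphens) is an assumption:

```shell
# Build the dated report path from a topic string.
topic="React Server Components"
slug=$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
report_path="docs/research/$(date +%Y-%m-%d)-${slug}.md"
echo "$report_path"  # e.g. docs/research/2026-02-03-react-server-components.md
```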
Plan the execution per mode. This phase is internal — don't present it to the user.
For each active mode, determine:
Run all modes in parallel using subagents. Each mode has a specific execution strategy:
General:

- Search (WebSearch; Codex: web.run with search_query)
- Fetch (WebFetch; Codex: web.run with open) and use docs tools (Ref preferred, then Context7) for deep analysis

PRD — categorize each open question:

| Category | Action |
|---|---|
| Scope / Requirements | Investigate now |
| External research | Investigate now |
| Prior art / Patterns | Investigate now |
| Design exploration | Flag — needs to be seen/experienced, not researched |
| Technical implementation | Defer to tech planning |
| User decision needed | Flag for user |
Spawn one general-purpose subagent per investigatable question, in parallel.

Best Practices:

- Scan `**/SKILL.md` files, match the topic to available skills, extract patterns
- Deprecation check (mandatory) — search:
  - "[API/service] deprecated 2026 sunset shutdown"
  - "[API/service] breaking changes migration"
- Authority order: [skill] > [official-docs] > [community]

Learnings: search institutional knowledge in ~/notes (Obsidian vault) and ~/docs for past solutions and patterns.
Use the `qmd` tool for semantic search over the Obsidian vault:

- `qmd_query` for hybrid search with reranking (best quality)
- `qmd_search` for fast keyword matches

`~/docs` — search the docs directory for relevant files:
- `rg -i "(keyword1|keyword2|synonym)" ~/docs`

Codebase:

- Architecture docs: ARCHITECTURE.md, README.md, CONTRIBUTING.md, AGENTS.md, CLAUDE.md
- Templates: .github/ISSUE_TEMPLATE/, PR templates, RFC templates
- Use ast-grep for syntax-aware patterns when available; fall back to rg
- Use rg for common patterns in the codebase

When any mode could benefit from personal notes/context, also run:
- `qmd_query "[topic]"` to search the Obsidian vault

Tag every finding with its source:

- [skill] — from local skill files (highest authority)
- [learnings] — from institutional knowledge / vault
- [official-docs] — from Ref, Context7, or official documentation
- [community] — from web search, blog posts, forums
- [codebase] — from repository analysis

Based on the output preference from Phase 1:
Save to docs/research/YYYY-MM-DD-<topic>.md with this lean format:
# Research: [Topic]
**Date**: YYYY-MM-DD
**Modes**: [General, Best Practices, etc.]
## Key Findings
[The meat — organized by theme, not by mode. Source tags inline.]
## Recommendations
[Specific, actionable next steps. Ordered by priority.]
## Trade-offs & Risks
[What could go wrong. Competing approaches. Deprecation warnings.]
## Open Questions
[What remains unresolved. What needs user decision.]
## Sources
[Links to docs, articles, skill files referenced.]
Sacrifice grammar for concision. No fluff.
Respond directly in conversation. Include source attribution tags inline. Keep it concise.
- Tag findings inline: [skill], [learnings], [official-docs], [community], [codebase]
- Use qmd for Obsidian vault search when institutional knowledge could help

After delivery, present options using the interactive question tool:
- /save-to-notes for Obsidian

When invoked from another skill's workflow: return findings to the calling skill instead of presenting handoff options.
| Anti-Pattern | Better Approach |
|---|---|
| Investigating technical implementation questions | Defer to tech planning — those need codebase context |
| Researching questions that need to be experienced | Flag for design exploration — research can't answer "how should this feel?" |
| Making PRD changes without approval | Present findings and proposed changes first |
| Sequential research when questions are independent | Spawn parallel subagents |
| Going online first | Check skills → learnings → Ref → Context7 → then web search |
| Recommending an API without checking deprecation | Always run deprecation check before recommending external APIs |
| Overwhelming with detail | Distill to actionable insights with source tags |