6-step multi-source research workflow with inline citations and synthesis
Structured multi-source research workflow. Produces synthesis reports with inline citations and explicit methodology.
Without structured research, Claude tends to rely on too few sources, summarize sources one by one instead of synthesizing them, and make claims without citations.
Deep research ensures comprehensive, verifiable, and transparent knowledge synthesis.
Define the research question explicitly.
Transform vague requests into focused questions: "tell me about React state management" becomes "should a new React 18 app use Context or Redux for moderate state complexity?"
Specify scope boundaries: Timeframe (last 12 months vs all history), domain (academic vs industry vs docs), depth (30 min survey vs 2-hour deep dive).
Identify success criteria: What decision does this inform? What confidence level is needed?
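The scope definition above can be captured as a simple record. This is a minimal sketch; the class and field names are illustrative, not part of any existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchScope:
    """Illustrative record for an explicit research question and its boundaries."""
    question: str                  # the focused question, not the vague request
    timeframe: str                 # e.g. "last 12 months" or "all history"
    domain: str                    # e.g. "academic", "industry", "official docs"
    depth_minutes: int             # 30 for a survey, 120 for a deep dive
    decision: str                  # what decision this research informs
    success_criteria: list[str] = field(default_factory=list)

scope = ResearchScope(
    question="Should we use React Context or Redux for moderate state complexity?",
    timeframe="last 12 months",
    domain="official docs + industry benchmarks",
    depth_minutes=120,
    decision="state-management choice for a new React 18 app",
    success_criteria=["clear performance guidance", "bundle-size comparison"],
)
```

Writing the scope down before searching makes it easy to notice mid-research when the question has silently drifted.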
Break the research question into focused sub-topics.
Example: React Context vs Redux → sub-questions: performance difference, bundle size, official React 18 guidance, community trends, maintainability tradeoffs.
Each sub-question: answerable from 3-6 sources, focuses on one dimension, contributes to overall question.
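The sub-question criteria above can be expressed as a small check. This is a sketch only; the function name and thresholds mirror the text, nothing more.

```python
def valid_subquestion(sources_available: int, dimensions: list[str]) -> bool:
    """A sub-question should be answerable from 3-6 sources and cover one dimension."""
    return 3 <= sources_available <= 6 and len(dimensions) == 1

# Example: the React Context vs Redux decomposition from the text.
ok = valid_subquestion(sources_available=4, dimensions=["performance difference"])
too_broad = valid_subquestion(sources_available=12, dimensions=["performance", "bundle size"])
```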
Source diversity is critical. Gather 15-30 sources from: official docs (APIs, RFCs, changelogs), academic papers (ArXiv, IEEE, ACM), industry blogs (Thoughtworks, company eng blogs), community signals (GitHub, Stack Overflow, Reddit), tools/data (npm trends, benchmarks).
Skim all, mark 3-5 for deep read, record metadata. Prefer sources <12 months old (tech moves fast).
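Recording metadata during the skim pass can look like the sketch below. The `Source` record and the 12-month freshness filter are illustrative, assuming a simple in-memory triage.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Source:
    title: str
    author: str
    url: str
    published: date
    kind: str                # "official", "academic", "industry", "community", "data"
    deep_read: bool = False  # mark 3-5 candidates for full reading

def recent(sources: list[Source], today: date, max_age_days: int = 365) -> list[Source]:
    """Prefer sources under ~12 months old, per the guideline above."""
    return [s for s in sources if (today - s.published).days <= max_age_days]

sources = [
    Source("React 18 RFC", "React team", "https://github.com/reactjs/rfcs",
           date(2021, 5, 1), "official"),
    Source("Redux Toolkit Docs", "Redux maintainers", "https://redux-toolkit.js.org",
           date(2023, 3, 1), "official"),
]
fresh = recent(sources, today=date(2023, 6, 1))
```

Older sources drop out of the freshness filter but stay in the list; as the pitfalls section notes, historical sources still matter for "why does X exist" questions.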
Read the top 3-5 sources in full, not just abstracts.
Selection criteria: authority (official docs, peer review), recency, methodological rigor, and direct relevance to the sub-questions.
While reading, extract: key claims with their supporting evidence, concrete data points, agreements and contradictions with other sources, and quotable passages.
Deep read time budget: divide the remaining time evenly across the selected sources, and stop early once new sources stop adding new information.
Synthesis is NOT summarization.
Summarization (wrong): restating each source in turn ("Source A says X, Source B says Y").
Synthesis (correct): integrating evidence across sources into conclusions organized around the research question.
Organize by themes, not by sources.
Identify: points of consensus, contradictions between sources, and gaps no source covers.
Every claim needs a citation.
Use inline citations: [Source Title, Date] or [Author, Date]
Example:
"React 18's automatic batching reduces re-renders by 30-50% in typical applications [React 18 RFC, 2021], making Context API performance comparable to Redux for moderate state complexity [Benchmark by Josh Comeau, 2022]. However, at >1000 component trees, Redux still outperforms due to selector memoization [Redux Toolkit Docs, 2023]."
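The inline citation form above is mechanical enough to generate. A minimal sketch, with a hypothetical helper name:

```python
def cite(title_or_author: str, year: int) -> str:
    """Render the inline citation form used above: [Source Title, Date]."""
    return f"[{title_or_author}, {year}]"

claim = (
    "Automatic batching reduces re-renders in typical applications "
    + cite("React 18 RFC", 2021)
)
```

Generating citations from the same source records used during gathering keeps the inline markers consistent with the final Sources list.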
Report Format:
# Research Report: [Question]
**Date:** [ISO date]
**Scope:** [Brief scope statement]
## Executive Summary
[3-5 sentence synthesis answering the research question directly. Decision-focused.]
## [Theme 1]
[Synthesis with inline citations]
## [Theme 2]
[Synthesis with inline citations]
## [Theme 3]
[Synthesis with inline citations]
## Key Takeaways
1. [Actionable insight 1]
2. [Actionable insight 2]
3. [Actionable insight 3]
## Limitations and Gaps
[What is uncertain or unknown? Where do sources contradict? What follow-up research is needed?]
## Sources
[Full citation list with URLs, sorted by relevance or alphabetically]
1. [Title] by [Author], [Date]. [URL]
2. [Title] by [Author], [Date]. [URL]
...
## Methodology
[Brief description of search strategy, source selection criteria, and analysis approach]
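The report format above can be scaffolded programmatically. This is a sketch that emits only the section headers; the function name is illustrative.

```python
def report_skeleton(question: str, scope: str, themes: list[str]) -> str:
    """Assemble the section headers of the report format above (headers only)."""
    lines = [
        f"# Research Report: {question}",
        f"**Scope:** {scope}",
        "## Executive Summary",
    ]
    lines += [f"## {theme}" for theme in themes]
    lines += ["## Key Takeaways", "## Limitations and Gaps", "## Sources", "## Methodology"]
    return "\n\n".join(lines)

skeleton = report_skeleton(
    "React Context vs Redux",
    "last 12 months, client-side React",
    ["Performance", "Bundle Size", "Official Guidance"],
)
```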
Report should be: decision-focused, skimmable, and verifiable, with every claim traceable to a listed source.
High-Quality Sources: official documentation, RFCs, peer-reviewed papers, and maintained benchmarks with stated methodology.
Low-Quality Sources: undated blog posts, SEO content farms, and outdated forum threads with no supporting evidence.
When in doubt, apply the hierarchy: official docs > academic papers > established industry blogs > community signals.
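The hierarchy can be encoded as a tie-breaker when two sources conflict. The tier order below is an assumption derived from the source types listed earlier in this document; adjust it per project.

```python
# Assumed tier order, most authoritative first.
HIERARCHY = ["official docs", "academic papers", "industry blogs", "community signals"]

def rank(source_kind: str) -> int:
    """Lower rank = more authoritative. Unknown kinds sort last."""
    return HIERARCHY.index(source_kind) if source_kind in HIERARCHY else len(HIERARCHY)

# Example: prefer the more authoritative of two conflicting sources.
winner = min(["community signals", "official docs"], key=rank)
```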
Explicitly state when: sources contradict each other, evidence is thin or dated, the scope was narrowed mid-research, or a sub-question could not be answered.
Example gap acknowledgment:
"Performance comparisons are limited to client-side React applications. Server-side rendering (SSR) performance was not covered in available sources. Follow-up research needed for Next.js-specific implications."
Gaps are not failures. Transparency about limitations increases report credibility.
Complements investigate skill: investigate examines code and systems from the inside; deep research gathers and synthesizes external evidence.
Feeds into other skills: research reports provide the evidence base for subsequent planning, design, and writing work.
Stopping at 3 sources: Insufficient diversity. You need 15-30 to identify consensus and contradictions.
Summarizing instead of synthesizing: Listing "Source A says X, Source B says Y" is not synthesis. Organize by themes, integrate evidence.
Skipping inline citations: Claims without citations are unverifiable. Every assertion needs a source reference.
Using only recent sources: Sometimes historical context matters. If researching "why does X exist", older sources (original RFC, launch announcement) are valuable.
Ignoring contradictions: When sources disagree, investigate why. Methodology differences? Different contexts? One source outdated?