Expert knowledge for AI deep research — methodology, source evaluation, search optimization, cross-referencing, synthesis, and citation formats
| Question Type | Strategy | Example |
|---|---|---|
| Factual | Find authoritative primary source | "What is the population of Tokyo?" |
| Comparative | Multi-source balanced analysis | "React vs Vue for large apps?" |
| Causal | Evidence chain + counterfactuals | "Why did Theranos fail?" |
| Predictive | Trend analysis + expert consensus | "Will quantum computing replace classical?" |
| How-to | Step-by-step from practitioners | "How to set up a Kubernetes cluster?" |
| Survey | Comprehensive landscape mapping | "What are the options for vector databases?" |
| Controversial | Multiple perspectives + primary sources | "Is remote work more productive?" |
Complex questions should be broken into sub-questions:
Main: "Should our startup use microservices?"
Sub-questions:
1. What are microservices? (definitional)
2. What are the benefits vs monolith? (comparative)
3. What team size/stage is appropriate? (contextual)
4. What are the operational costs? (factual)
5. What do similar startups use? (case studies)
6. What are the migration paths? (how-to)
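A decomposition like this can be tracked as a simple data structure so each sub-question carries its type and can be researched independently. A minimal Python sketch (the class and field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class SubQuestion:
    text: str
    qtype: str  # e.g. "definitional", "comparative", "factual"
    findings: list = field(default_factory=list)

@dataclass
class ResearchPlan:
    main_question: str
    sub_questions: list

plan = ResearchPlan(
    main_question="Should our startup use microservices?",
    sub_questions=[
        SubQuestion("What are microservices?", "definitional"),
        SubQuestion("What are the benefits vs monolith?", "comparative"),
        SubQuestion("What are the operational costs?", "factual"),
    ],
)

# Work through sub-questions in order; each answer feeds the final synthesis.
for sq in plan.sub_questions:
    print(f"[{sq.qtype}] {sq.text}")
```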
A (Authoritative): Passes all 5 CRAAP criteria (Currency, Relevance, Authority, Accuracy, Purpose)
B (Reliable): Passes 4/5, minor concern on one
C (Useful): Passes 3/5, use with caveats
D (Weak): Passes 2/5 or fewer
F (Unreliable): Fails most criteria, do not cite
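The scale maps directly from the number of CRAAP criteria a source passes. A minimal sketch of that mapping (the D/F boundary overlaps in the rubric above, so treating "passes at most one" as F is a judgment call made here):

```python
def craap_grade(passed: int) -> str:
    """Map the number of CRAAP criteria passed (0-5) to a source grade.

    The rubric's D and F bands overlap; this sketch treats passing
    at most one criterion as F and exactly two as D.
    """
    if not 0 <= passed <= 5:
        raise ValueError("passed must be between 0 and 5")
    return {5: "A", 4: "B", 3: "C", 2: "D"}.get(passed, "F")

print(craap_grade(4))  # → B
```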
Exact phrase: "specific phrase" — use for names, quotes, error messages
Site-specific: site:domain.com query — search within a specific site
Exclude: query -unwanted_term — remove irrelevant results
File type: filetype:pdf query — find specific document types
Recency: query after:2024-01-01 — recent results only
OR operator: query (option1 OR option2) — broaden search
Wildcard: "how to * in python" — fill-in-the-blank
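These operators compose into query strings mechanically, which makes them easy to generate when running many searches per sub-question. A small helper sketch (the function and parameter names are illustrative assumptions, not a real library API):

```python
def build_query(terms, site=None, exclude=None, filetype=None, after=None):
    """Compose a search query string from the operators listed above."""
    parts = [" ".join(terms)]
    if site:
        parts.insert(0, f"site:{site}")       # search within one domain
    if exclude:
        parts.append(f"-{exclude}")           # drop irrelevant results
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. "pdf"
    if after:
        parts.append(f"after:{after}")        # recency filter, YYYY-MM-DD
    return " ".join(parts)

print(build_query(["kubernetes", "setup"], site="github.com", after="2024-01-01"))
# → site:github.com kubernetes setup after:2024-01-01
```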
For each research question, use at least 3 search strategies:
site:gov OR site:edu OR site:org [topic]
[topic] research paper [year] or site:arxiv.org [topic]
[topic] guide or [topic] tutorial or [topic] how to
[topic] statistics or [topic] data [year]
[topic] criticism or [topic] problems or [topic] myths

| Domain | Best Sources | Search Pattern |
|---|---|---|
| Technology | Official docs, GitHub, Stack Overflow, engineering blogs | [tech] documentation, site:github.com [tech] |
| Science | PubMed, arXiv, Nature, Science | site:arxiv.org [topic], [topic] systematic review |
| Business | SEC filings, industry reports, HBR | [company] 10-K, [industry] report [year] |
| Medicine | PubMed, WHO, CDC, Cochrane | site:pubmed.ncbi.nlm.nih.gov [topic] |
| Legal | Court records, law reviews, statute databases | [case] ruling, [law] analysis |
| Statistics | Census, BLS, World Bank, OECD | site:data.worldbank.org [metric] |
| Current events | Reuters, AP, BBC, primary sources | [event] statement, [event] official |
Level 1: Single source (unverified)
→ Mark as "reported by [source]"
Level 2: Two independent sources agree (corroborated)
→ Mark as "confirmed by multiple sources"
Level 3: Primary source + secondary confirmation (verified)
→ Mark as "verified — primary source: [X]"
Level 4: Expert consensus (well-established)
→ Mark as "widely accepted" or "scientific consensus"
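The four-level ladder above can be sketched as a function that assigns a level from what is known about the evidence. The boolean flags are assumptions about how source metadata might be tracked, not a fixed interface:

```python
def verification_level(independent_sources: int,
                       has_primary: bool = False,
                       expert_consensus: bool = False):
    """Return (level, label) following the four-level ladder above."""
    if expert_consensus:
        return 4, "widely accepted"
    if has_primary and independent_sources >= 2:
        # primary source plus at least one secondary confirmation
        return 3, "verified"
    if independent_sources >= 2:
        return 2, "confirmed by multiple sources"
    return 1, "reported by single source"

print(verification_level(2, has_primary=True))  # → (3, 'verified')
```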
When sources disagree:
The evidence suggests [main finding].
[Source A] found that [finding 1], which is consistent with
[Source B]'s observation that [finding 2]. However, [Source C]
presents a contrasting view: [finding 3].
The weight of evidence favors [conclusion] because [reasoning].
A key limitation is [gap or uncertainty].
FINDING 1: [Claim]
Evidence for: [Source A], [Source B] — [details]
Evidence against: [Source C] — [details]
Confidence: [high/medium/low]
Reasoning: [why the evidence supports this finding]
FINDING 2: [Claim]
...
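A finding in this format can also be kept as structured data, so evidence and confidence stay attached to each claim through the write-up. A minimal sketch whose field names follow the template above (the example values are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)
    confidence: str = "low"  # "high" / "medium" / "low"
    reasoning: str = ""

f = Finding(
    claim="Remote work improves retention",
    evidence_for=["Source A", "Source B"],
    evidence_against=["Source C"],
    confidence="medium",
    reasoning="Two independent surveys agree; one case study contradicts.",
)
print(f.confidence)  # → medium
```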
After synthesis, explicitly note remaining gaps, contradictions, and overall confidence.
Inline link style:
According to a 2024 study (https://example.com/study), the effect was significant.
Footnote style:
According to a 2024 study[1], the effect was significant.
---
[1] https://example.com/study — "Title of Study" by Author, Published Date
In-text: (Smith, 2024)
Reference: Smith, J. (2024). Title of the article. *Journal Name*, 42(3), 123-145. https://doi.org/10.xxxx
For web sources (APA):
Author, A. A. (Year, Month Day). Title of page. Site Name. https://url
According to recent research [1], the finding was confirmed by independent analysis [2].
## References
1. Author (Year). Title. URL
2. Author (Year). Title. URL
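Rendering a numbered reference list from stored source records is straightforward to automate. A minimal sketch (the record fields are assumptions about how sources might be stored):

```python
def format_references(sources):
    """Render source records as a numbered reference list."""
    lines = ["## References"]
    for i, s in enumerate(sources, start=1):
        lines.append(f'{i}. {s["author"]} ({s["year"]}). {s["title"]}. {s["url"]}')
    return "\n".join(lines)

refs = format_references([
    {"author": "Smith, J.", "year": 2024, "title": "Example Study",
     "url": "https://example.com/study"},
])
print(refs)
```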
# [Question]
**Date**: YYYY-MM-DD | **Sources**: N | **Confidence**: high/medium/low
## Answer
[2-3 paragraph direct answer]
## Key Evidence
- [Finding 1] — [source]
- [Finding 2] — [source]
- [Finding 3] — [source]
## Caveats
- [Limitation or uncertainty]
## Sources
1. [Source](url)
2. [Source](url)
# Research Report: [Question]
**Date**: YYYY-MM-DD | **Depth**: thorough | **Sources Consulted**: N
## Executive Summary
[1 paragraph synthesis]
## Background
[Context needed to understand the findings]
## Methodology
[How the research was conducted, what was searched, how sources were evaluated]
## Findings
### [Sub-question 1]
[Detailed findings with inline citations]
### [Sub-question 2]
[Detailed findings with inline citations]
## Analysis
[Synthesis across findings, patterns identified, implications]
## Contradictions & Open Questions
[Areas of disagreement, gaps in knowledge]
## Confidence Assessment
[Overall confidence level with reasoning]
## Sources
[Full bibliography in chosen citation format]
Be aware of these biases during research:
Confirmation bias: Favoring information that confirms your initial hypothesis
Authority bias: Over-trusting sources from prestigious institutions
Anchoring: Fixating on the first piece of information found
Selection bias: Only finding sources that are easy to access
Recency bias: Over-weighting recent publications
Framing effect: Being influenced by how information is presented