Research skill with verified, tier-rated sources. Investigates best practices, academic papers, and real-world case studies, assigning every source an explicit reliability tier. Use when the user asks to research, investigate, survey, or look up best practices, papers, technical trends, latest examples, industry patterns, or architectural decisions. Triggers include "research", "investigate", "survey", "look up", "best practice", "what's the latest on", "how do companies do X", "find papers on", "state of the art".
Conduct research on a given topic with verifiable, high-quality sources. Every claim must trace back to a source with an explicit reliability tier.
All sources are classified into one of three tiers. Prefer higher tiers; if only lower-tier sources are available, say so explicitly in the report. For best-practice and general web sources:
| Tier | Source Type | Examples |
|---|---|---|
| S | Official docs, standards bodies, RFCs | MDN, IETF RFC, W3C Spec, language official docs (go.dev, docs.python.org, etc.) |
| A | Major tech company engineering blogs, widely-adopted OSS docs | Google AI Blog, Netflix Tech Blog, Stripe Engineering, Meta Engineering, repos with 5k+ GitHub Stars |
| B | Well-known practitioners, high-signal community resources | Martin Fowler, Kent Beck, ThoughtWorks Tech Radar, Stack Overflow answers with 100+ votes |
Reject: personal blogs without a track record, Medium posts without author verification, SEO-optimized content farms, and AI-generated summaries without a primary source. For academic papers:
| Tier | Source Type | Criteria |
|---|---|---|
| S | Top-tier venues | NeurIPS, ICML, ICLR, ACL, CVPR, SIGMOD, VLDB, OSDI, SOSP, Nature, Science — OR citation count 200+ |
| A | Peer-reviewed conferences/journals | Published in recognized venues, citation count 50+ |
| B | Preprints with traction | arXiv/SSRN papers from known research groups, citation count 10+, or widely discussed in the community |
Reject: unpublished manuscripts, predatory-journal papers, and preprints with zero citations and no institutional backing. For real-world case studies:
| Tier | Source Type | Examples |
|---|---|---|
| S | Official company announcements, conference talks | AWS re:Invent talks, Google I/O, KubeCon talks, official company blog posts about their own systems |
| A | Established tech media, industry reports | InfoQ, The New Stack, Gartner reports, ThoughtWorks Technology Radar, QCon presentations |
| B | Verified practitioner accounts | Conference lightning talks, podcast interviews with named engineers, detailed post-mortems on established platforms |
Reject: anonymous anecdotes, unverified social media claims, press releases without technical detail, secondhand accounts without primary source link.
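The tier rules above can be sketched as a small classifier. This is an illustrative assumption, not part of the skill itself: the venue sets, field names, and thresholds below are hypothetical stand-ins for the full tables.

```python
# Hypothetical sketch of the tier tables above. Venue sets are tiny samples;
# a real implementation would encode the full criteria and the reject rules.
from dataclasses import dataclass

@dataclass
class Source:
    kind: str           # "web", "paper", or "case_study"
    venue: str          # e.g. "MDN", "NeurIPS", "InfoQ"
    citations: int = 0  # papers only
    stars: int = 0      # OSS repos only

S_VENUES = {"MDN", "IETF RFC", "W3C", "NeurIPS", "ICML", "AWS re:Invent"}
A_VENUES = {"Netflix Tech Blog", "InfoQ", "The New Stack"}

def classify(src: Source) -> str:
    """Return 'S', 'A', 'B', or 'reject' following the tier tables."""
    if src.kind == "paper":
        if src.venue in S_VENUES or src.citations >= 200:
            return "S"
        if src.citations >= 50:
            return "A"
        if src.citations >= 10:
            return "B"
        return "reject"
    if src.venue in S_VENUES:
        return "S"
    if src.venue in A_VENUES or src.stars >= 5000:
        return "A"
    # Anything not matched above still needs the reject checks applied by hand.
    return "B"
```

Note the paper branch falls through S → A → B by citation count, mirroring the table's "OR citation count" criteria.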
Clarify the research scope before searching.
Output:
## Research Scope
- Topic: [identified topic]
- Type: [best practices / papers / case studies / mixed]
- Constraints: [time range, tech stack, industry, etc.]
- Search queries: [list of planned queries]
Execute searches systematically: breadth first to map the landscape, then depth on the most promising findings.
Compile findings into a structured markdown report.
# Research Report: [Topic]
## Executive Summary
[2-3 sentence overview of key findings]
## Findings
### [Finding 1 Title]
[Description with specific details, data points, or recommendations]
**Sources:**
- ★★★★★ (reason) [S] Author/Org, "Title", URL, YYYY-MM
- ★★★★☆ (reason) [A] Author/Org, "Title", URL, YYYY-MM
### [Finding 2 Title]
...
## Source Reliability Summary
| Tier | Count | Notes |
|------|-------|-------|
| S | [N] | [e.g., "Official docs, top-venue papers"] |
| A | [N] | [e.g., "Major tech blogs, peer-reviewed"] |
| B | [N] | [e.g., "Community resources — used only where S/A unavailable"] |
## Source List
[Full numbered list of all sources used, with relevance rating, reason, tier, author, title, URL, and date]
## Caveats
- [Any gaps in coverage, areas where only lower-tier sources were available, conflicting findings, etc.]
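The Source Reliability Summary can be derived mechanically from the tagged source list. A minimal sketch, assuming sources are collected as dicts with a `tier` key (an illustrative data shape, not mandated by the skill):

```python
# Illustrative helper: render the Source Reliability Summary table
# from sources tagged during research. The dict shape is an assumption.
from collections import Counter

def reliability_summary(sources: list[dict]) -> str:
    """Build the markdown tier-count table; Notes left for the author."""
    counts = Counter(s["tier"] for s in sources)
    rows = ["| Tier | Count | Notes |", "|------|-------|-------|"]
    for tier in ("S", "A", "B"):
        rows.append(f"| {tier} | {counts.get(tier, 0)} | |")
    return "\n".join(rows)
```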
After presenting the report, offer next steps via AskUserQuestion: