Use when investigating latest vendor behavior, comparing tools or platforms, verifying claims beyond the repo, or gathering external evidence before implementation.
Run a disciplined research pass before implementation when the repo alone is not enough. This skill keeps research evidence-driven: inspect the local codebase first, escalate to official docs when freshness or public comparison matters, then use labeled community evidence only when it adds practical context.
Define the research question before collecting sources because vague research sprawls quickly. Restate the target topic, freshness requirement, comparison axis, and what decision the findings need to support.
Inspect the repo first because many questions are already answerable from local code, configs, tests, docs, or generated assets. Do not browse externally until the local evidence is exhausted or clearly insufficient.
Decide whether external research is actually required because not every task needs web evidence. Escalate only when freshness or public comparison matters, or when the user explicitly asks you to research or verify.
Follow the source ladder strictly because evidence quality matters. Use official docs, upstream repositories, standards, and maintainer material as primary sources before looking at blogs, issue threads, or Reddit.
Capture concrete source details because research without provenance is hard to trust. Record exact links, relevant dates, versions, and any repo files that support or contradict the external evidence.
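One way to keep provenance consistent is to capture each source in a fixed record shape. The fields below are an illustrative sketch, not a required schema:

```yaml
- claim: <what the source supports>
  source: <exact link>
  date: <published or last-updated date>
  version: <tool or doc version, if any>
  repo_evidence: <local file that supports or contradicts the claim>
  status: verified | inferred | conflicting
```

A record like this makes it cheap to separate verified facts from inference later, and to flag conflicts explicitly.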
Cross-check important claims across more than one source when possible because public docs, repos, and community advice can drift. If sources disagree, say so explicitly instead of smoothing over the conflict.
Use Reddit and other community sources only as labeled secondary evidence because they can surface practical gotchas but are not authoritative. Treat them as implementation color, not final truth.
Separate verified facts from inference because downstream planning depends on confidence. Mark what is directly supported by repo evidence or official sources versus what you infer from patterns or secondary signals.
Keep the output decision-oriented because the goal is not to dump links. Tie each finding back to the implementation, workflow, agent, or skill decision it affects.
Recommend the next route explicitly because research is usually a handoff, not the end of the task. Name the next workflow, agent, or exact skill that should continue the work.
State the remaining gaps and risks because incomplete research is still useful when the uncertainty is visible. Call out what you could not verify, what may have changed recently, and what assumptions remain.
Avoid over-quoting and over-collecting because research quality comes from synthesis, not volume. Prefer concise summaries with high-signal citations over long pasted excerpts.
When the task turns into implementation, stop researching and hand off because mixing discovery and execution usually creates drift. Deliver the research brief first, then route into the correct workflow or specialist.
Deliver: a decision-oriented research brief with labeled evidence, an explicit next route, and the remaining gaps and risks.
Load only the file needed for the current question.
| File | Load when |
|---|---|
| references/source-ladder.md | Need the repo-first and source-priority policy for official docs versus community evidence. |
| references/research-output.md | Need the structured output format, evidence labeling rules, or handoff pattern. |
| references/comparison-checklist.md | Comparing vendors, frameworks, or tools and need a concrete evaluation frame. |
Use these when the task shape already matches.
| File | Use when |
|---|---|
| examples/01-latest-docs-check.md | Verifying a latest capability or doc claim before implementation. |
| examples/02-ecosystem-comparison.md | Comparing multiple tools or platforms with official-first sourcing. |
| examples/03-research-to-implementation-handoff.md | Turning research findings into a concrete next workflow or specialist handoff. |
examples/03-research-to-implementation-handoff.md | Turning research findings into a concrete next workflow or specialist handoff. |