Use this skill when the user asks for papers, prior work, benchmark conventions, prior-art mapping, or other explicit evidence grounding from published work, including heterogeneous-catalysis method benchmarks.
Use this skill to run the smallest literature-grounding workflow that can answer a paper, benchmark, or prior-art question with reusable evidence. Do not use it when ordinary web context is enough, when the question is mainly about current non-paper facts, or when the task is actually about local project artifacts rather than published work.
- query: keep it focused on the literature question and provide a short domain/topic phrase when possible; scholarly metadata lookups should not receive the full reporting instruction text.
- depth=quick: for representative-paper requests.
- depth=standard: for method conventions, benchmark framing, catalyst-system prior art, or comparative evidence.
- depth=focused: for narrow method disputes, conflicting conventions, or targeted evidence checks.
- depth=deep_report: only for explicit deep-review or survey-style requests.
- execute: only trigger literature work when the user explicitly asks for papers, prior work, supporting evidence, benchmark context, prior-art mapping, or literature-grounded method conventions. Use normal web/online search first when the need is broad background, public-page summaries, or lightweight orientation rather than paper-level grounding. Do not route current product facts, software usage questions, or repository-local evidence into this skill just because they mention “references”.
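To make the trigger rule concrete, here is a minimal routing sketch; the cue phrases, route names, and function name are assumptions made for illustration, not part of this skill's actual interface.

```python
# Minimal routing sketch, assuming a plain-text user request.
# The cue lists and route labels ("literature", "web") are illustrative
# assumptions, not the skill's real interface.

LITERATURE_CUES = (
    "papers", "prior work", "prior art", "benchmark convention",
    "supporting evidence", "literature",
)

def route_request(request: str) -> str:
    """Return 'literature' only for explicit paper-level asks, else 'web'."""
    text = request.lower()
    if any(cue in text for cue in LITERATURE_CUES):
        return "literature"
    # Broad background, public-page summaries, and product/repo questions
    # stay on ordinary web or local search.
    return "web"

if __name__ == "__main__":
    print(route_request("Find prior work on CO2 reduction benchmarks"))  # literature
    print(route_request("Give me a quick background overview of DFT"))   # web
```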
Use quick for representative papers and fast grounding. Use standard for conventions, benchmark framing, catalyst-system prior art, or related-work summaries. Use focused when a narrow scientific question needs targeted evidence or when the literature disagrees on a method choice. Reserve deep_report for explicit deep-review requests.
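A minimal sketch of that depth choice, assuming the request has already been bucketed into a coarse category; the category labels and fallback are invented here for illustration.

```python
# Hypothetical mapping from request category to depth, mirroring the
# guidance above. The category labels are assumed, not prescribed.
DEPTH_BY_REQUEST = {
    "representative_papers": "quick",
    "benchmark_framing": "standard",
    "catalyst_prior_art": "standard",
    "method_dispute": "focused",
    "deep_review": "deep_report",
}

def pick_depth(request_type: str) -> str:
    # Fall back to "standard" when the request type is unrecognized.
    return DEPTH_BY_REQUEST.get(request_type, "standard")

print(pick_depth("method_dispute"))  # focused
```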
Ask for the literature object you actually need: representative papers, benchmark systems, reference-state conventions, dispersion policy, model chemistry, or open methodological disagreement. A good pack should extract conventions and decision-relevant comparisons, not just titles. When staging the request for the runtime literature path, keep the packet compact and decision-shaped. At minimum, surface:
- query: the scientific question in one sentence
- topic: short domain phrase
- depth: quick, standard, focused, or deep_report
- objective: what downstream decision this evidence should inform
- must_answer: a short list of concrete method or prior-art questions
- exclusions: any out-of-scope systems, methods, or date windows

Do not automatically hit both scholarly metadata APIs and public web in the same first pass. Start with public web when the request is broad, contextual, or public-summary-friendly. Use OpenAlex / Semantic Scholar when the user clearly needs paper-level grounding or when the web pass is too weak.
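For concreteness, a hypothetical packet for a catalysis-style question might look like the dict below; every field value is an invented example under the field list above, not required phrasing.

```python
# Hypothetical literature-grounding packet; all values are illustrative.
# Source routing (public web first vs. OpenAlex / Semantic Scholar) is
# decided separately from this packet.
packet = {
    "query": "Which adsorption reference states are standard for CO on Pt(111)?",
    "topic": "heterogeneous catalysis, DFT adsorption energies",
    "depth": "standard",
    "objective": "choose a reference-state convention for the screening workflow",
    "must_answer": [
        "Which reference state do benchmark papers use for CO?",
        "Is a dispersion correction routinely applied?",
    ],
    "exclusions": ["electrochemical conditions", "papers before 2005"],
}
```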
If papers disagree on a benchmark convention, reference state, dispersion treatment, or reported trend, keep the disagreement visible. Do not flatten a split literature into one fake consensus. Report which convention is dominant, which is contested, and what that means for the downstream workflow choice.
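One possible way to keep a split literature visible is to record the dominant and contested conventions side by side rather than collapsing them; the schema below is an assumed sketch, not a required format.

```python
# Sketch of a disagreement-preserving record; the field names are assumptions.
dispersion_finding = {
    "question": "Is a dispersion correction applied to adsorption energies?",
    "dominant": {"convention": "DFT-D3", "support": "most recent benchmarks"},
    "contested": [
        {"convention": "no correction", "support": "older GGA-only studies"},
    ],
    "implication": "report both corrected and uncorrected values downstream",
}
```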
Use the returned literature context pack as the planning/evidence object. Do not surface the raw retrieval process, intermediate search noise, or browser-style exploration steps to the main agent context.
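A minimal sketch of treating the pack as a planning object, assuming it arrives as a dict; the key names ("conventions", "findings", "search_trace") are assumptions for illustration only.

```python
def extract_evidence(pack: dict) -> dict:
    """Keep only decision-relevant fields; never forward raw retrieval steps.

    The 'conventions', 'findings', and 'search_trace' keys are assumed
    names, not a documented schema.
    """
    return {
        "conventions": pack.get("conventions", []),
        "findings": pack.get("findings", []),
        # 'search_trace' (intermediate queries, browsing steps) is dropped.
    }
```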
Return:
- depth