Produce a lightweight chemistry or materials literature summary for research-agent, focused on baselines, key variables, and known constraints.
Use this skill as a focused helper for research-agent, not as a full systematic-review workflow.
Given a framed research problem, collect the literature context needed to operationalize the study:
Inputs:
- local_packet_path for a benchmark-frozen literature packet
- research_runs/<research_id>/research_plan.md (to write the Literature Context section)

Return findings in this structure so they can be written into research_state.json:

literature_findings:
{
  "baselines": [],
  "key_variables": [],
  "known_constraints": [],
  "source_urls": [],
  "summary": "",
  "computable_candidates": [],
  "evaluator_profile": {
    "mode": "lookup | surrogate | live_simulation | unknown",
    "evaluation_cost": "cheap | moderate | expensive",
    "stability_risk": "low | medium | high",
    "requires_run_local_setup": false,
    "why": ""
  }
}
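As a minimal sketch of how this structure could be assembled and merged into research_state.json (the function names and merge behavior here are illustrative assumptions, not a prescribed API):

```python
import json

def new_literature_findings() -> dict:
    """Return an empty record mirroring the literature_findings structure above."""
    return {
        "baselines": [],
        "key_variables": [],
        "known_constraints": [],
        "source_urls": [],
        "summary": "",
        "computable_candidates": [],
        "evaluator_profile": {
            "mode": "unknown",
            "evaluation_cost": "cheap",
            "stability_risk": "low",
            "requires_run_local_setup": False,
            "why": "",
        },
    }

def write_findings(state_path: str, findings: dict) -> None:
    """Merge findings into research_state.json without clobbering other keys."""
    try:
        with open(state_path) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {}  # first write for this run
    state["literature_findings"] = findings
    with open(state_path, "w") as f:
        json.dump(state, f, indent=2)
```

The merge-then-write pattern matters because research_state.json is shared with research-agent: overwriting the whole file would discard state written by other steps.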
Also write a short narrative summary into the Literature Context section of research_runs/<research_id>/research_plan.md.
- baselines: best or representative prior values for the target property, with source attribution
- key_variables: variables that the literature repeatedly treats as important
- known_constraints: physical, chemical, or experimental constraints that should inform BO setup
- source_urls: links or source identifiers for the papers, docs, repositories, or code artifacts actually used
- summary: 1–3 short paragraphs linking the literature to the experiment design
- computable_candidates: operationalizable evaluator/code/equation/tutorial/paper candidates that Claude could turn into a working local setup
- evaluator_profile: the routing summary that tells research-agent whether this looks like a lookup, surrogate, or live-simulation setup and how risky/expensive it appears

If local_packet_path is provided, treat it as the authoritative literature environment for this run.
In that mode, work only from the packet contents. This is the preferred mode for closed-world or control runs.
If no local packet is provided, browse and collect sources normally.
Each computable_candidates item should be an object with:
{
  "label": "",
  "kind": "paper | equation | code | tutorial | repository | simulator",
  "source_url": "",
  "inputs": [],
  "notes": ""
}
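A small constructor can keep every candidate item consistent with this shape; this is a sketch under the assumption that candidates are built as plain dicts (the helper name is hypothetical):

```python
# Allowed values for the "kind" field, taken from the schema above.
ALLOWED_KINDS = {"paper", "equation", "code", "tutorial", "repository", "simulator"}

def make_candidate(label: str, kind: str, source_url: str,
                   inputs: list, notes: str = "") -> dict:
    """Build one computable_candidates item, rejecting unknown kinds."""
    if kind not in ALLOWED_KINDS:
        raise ValueError(f"kind must be one of {sorted(ALLOWED_KINDS)}, got {kind!r}")
    return {
        "label": label,
        "kind": kind,
        "source_url": source_url,
        "inputs": list(inputs),
        "notes": notes,
    }
```

Failing fast on an unknown kind keeps bad entries out of research_state.json, where research-agent would otherwise have to guess how to route them.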
Set evaluator_profile using the best-supported path you actually found:
mode
- lookup: tabulated or database values are the intended evaluator
- surrogate: a fitted or prebuilt predictive model is the intended evaluator
- live_simulation: the evaluator is meant to be computed by running chemistry code or another simulator
- unknown: no clear evaluator mode was established

evaluation_cost
- cheap: seconds, or otherwise easy to probe repeatedly
- moderate: nontrivial but still practical for repeated BO calls
- expensive: each call is materially costly and should shape BO/search-space design

stability_risk
- low: evaluator path looks straightforward and stable
- medium: some setup or convergence risk exists
- high: evaluator path looks fragile enough that setup stabilization is likely needed

requires_run_local_setup
- true when Claude will likely need to write local scripts or otherwise operationalize the evaluator directly

why
- a brief justification for the chosen mode, cost, and risk ratings
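The enumerations above are easy to check mechanically before the profile is written into research_state.json. A minimal validation sketch (the function name and the rule that an unknown mode must carry a non-empty why are assumptions drawn from the guidance in this document):

```python
def validate_evaluator_profile(profile: dict) -> list:
    """Return the names of evaluator_profile fields that violate the schema."""
    problems = []
    if profile.get("mode") not in {"lookup", "surrogate", "live_simulation", "unknown"}:
        problems.append("mode")
    if profile.get("evaluation_cost") not in {"cheap", "moderate", "expensive"}:
        problems.append("evaluation_cost")
    if profile.get("stability_risk") not in {"low", "medium", "high"}:
        problems.append("stability_risk")
    if not isinstance(profile.get("requires_run_local_setup"), bool):
        problems.append("requires_run_local_setup")
    # An unknown mode with no explanation gives research-agent nothing to route on.
    if profile.get("mode") == "unknown" and not profile.get("why"):
        problems.append("why")
    return problems
```

An empty return list means the profile is safe to hand to research-agent for routing.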
If the system is too narrow, novel, or obscure to find direct baselines:
- Leave computable_candidates empty and say so directly. Do not invent one.
- Set evaluator_profile.mode to unknown and explain why.
- If local_packet_path is provided, do not browse beyond that packet.
- Return what you found so research-agent can proceed.

Otherwise, set evaluator_profile from the path you actually found; do not leave it vague when the literature clearly implies a live simulator or a lookup-based path.