Structured literature review for a physics research topic with citation network analysis and open question identification
Codex shell compatibility:
- `gpd` on PATH.
- `GPD_ACTIVE_RUNTIME=codex uv run gpd ...`
</codex_runtime_notes>

<codex_questioning>
Orchestrator role: Scope the review, spawn the gpd-literature-reviewer subagent, handle checkpoints, and present results.
Why a subagent: Literature searches burn context fast (reading abstracts, following citation chains, cross-referencing results, tracking conventions across papers). The subagent gets a fresh 200k context for the full survey, while the main context stays lean for user interaction.
<!-- [included: literature-review.md] -->

<purpose>
Conduct a systematic literature review for a physics research topic. Map the intellectual landscape: foundational works, methodological approaches, key results, controversies, and open questions. Produce LITERATURE-REVIEW.md, which is consumed by the planning and paper-writing workflows.
</purpose>

<execution_context>
Called from the $gpd-literature-review command.
</execution_context>

<objective>
A physics literature review is not a bibliography. It is a map of the intellectual landscape: who computed what, using which methods, with what assumptions, getting what results, and where they agree or disagree. The reviewer must think like a physicist surveying a field, not a librarian cataloging references.
</objective>

<process>

<step name="load_context" priority="first">
**Load project context (if available):**

<step name="scope_review">
Establish scope from command context:

<step name="identify_foundations">
**Phase 1: Foundational Works**

<step name="map_methods">
**Phase 2: Methodological Landscape**

<step name="catalog_results">
**Phase 3: Key Results Catalog**

<step name="trace_citations">
**Phase 4: Citation Network Analysis**

<step name="find_controversies">
**Phase 5: Controversies and Disagreements**

<step name="identify_gaps">
**Phase 6: Open Questions**

<step name="assess_frontier">
**Phase 7: Current Frontier**

<step name="create_review_document">
Ensure the output directory exists:
<core_principle> A physics literature review is not a bibliography. It is a structured map of who computed what, using which methods, with what assumptions, getting what results, and where they agree or disagree. The goal is to understand the state of a field well enough to identify what is known, what is open, and where new work can contribute. </core_principle>
<source_hierarchy> MANDATORY: Authoritative sources BEFORE general search
Textbooks and monographs -- For established results, standard methods, and field context
Review articles -- For field overviews and method surveys
Seminal papers -- Original derivations of key results
Recent arXiv preprints -- For cutting-edge developments
Conference proceedings -- For very recent results and community direction
web_search -- Last resort for community discussions, code repos, numerical benchmarks
</source_hierarchy>
INIT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local init progress --include state,config)
if [ $? -ne 0 ]; then
echo "ERROR: gpd initialization failed: $INIT"
# STOP — display the error to the user and do not proceed.
fi
Parse JSON for: commit_docs, state_exists, project_exists, project_contract, contract_intake, effective_reference_intake, active_reference_context, reference_artifact_files, reference_artifacts_content.
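A minimal sketch of pulling a single field out of the captured payload with stdlib tooling (the literal JSON here is illustrative; in the real flow `INIT` comes from the `runtime_cli` call above):

```shell
# Illustrative init payload; field names mirror the list above.
INIT='{"state_exists": true, "commit_docs": false, "project_exists": true}'

# Extract one field without jq, using python's stdlib json module.
STATE_EXISTS=$(printf '%s' "$INIT" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("state_exists", False))')
echo "$STATE_EXISTS"   # prints: True
```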
Read mode settings:
AUTONOMY=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw config get autonomy 2>/dev/null | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local json get .value --default balanced 2>/dev/null || echo "balanced")
RESEARCH_MODE=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw config get research_mode 2>/dev/null | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local json get .value --default balanced 2>/dev/null || echo "balanced")
Mode-aware behavior:
research_mode=explore: Comprehensive review (30+ papers), include tangential fields, map full citation network, identify open questions.
research_mode=exploit: Focused review (8-12 papers), direct relevance only, extract key results and methods.
research_mode=adaptive: Start with 15 papers, expand if citation network reveals critical gaps.
autonomy=supervised: Pause after each review round for user feedback on scope and direction.
autonomy=balanced (default): Complete the full review pipeline automatically. Pause only if the literature reveals scope ambiguity, contradictory evidence, or a change in recommendation.
autonomy=yolo: Complete the review pipeline without pausing, but do NOT drop contract-critical anchors or user-mandated references.
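As a sketch, the research-mode guidance above can gate an initial paper budget with a simple case dispatch (budget numbers mirror the guidance; `RESEARCH_MODE` is fixed here for illustration rather than read from `config get`):

```shell
# In the real flow RESEARCH_MODE comes from `config get research_mode` above.
RESEARCH_MODE="adaptive"

case "$RESEARCH_MODE" in
  explore)  PAPER_BUDGET=30 ;;  # comprehensive: 30+ papers
  exploit)  PAPER_BUDGET=10 ;;  # focused: 8-12 papers
  adaptive) PAPER_BUDGET=15 ;;  # start at 15, expand if gaps appear
  *)        PAPER_BUDGET=15 ;;  # balanced/unknown: moderate default
esac
echo "Initial paper budget: $PAPER_BUDGET"
```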
If state_exists is true: Extract convention_lock for notation context (helps identify which conventions are used in papers being reviewed). Extract active research topic, phase context, and any contract-critical references from active_reference_context.
If state_exists is false (standalone usage): Proceed — the user will specify the topic directly.
Treat effective_reference_intake as the machine-readable carry-forward ledger for anchors, prior outputs, baselines, user-mandated context, and unresolved gaps. Re-surface those items in the review even if the broader search expands beyond them.
Use reference_artifacts_content as supporting evidence when existing literature/research-map artifacts already pin down benchmark values, prior outputs, or anchor wording that should remain stable.
Project context helps focus the review on conventions and methods relevant to the current research. </step>
Consult project_contract, contract_intake, effective_reference_intake, and active_reference_context before broadening the search. Define explicit include/exclude boundaries:
Every subfield has seminal papers that defined the field. Identify them:
Search for review articles first (they cite the seminal works):
web_search: "[topic] review" site:arxiv.org
web_search: "[topic]" site:journals.aps.org/rmp
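Beyond web_search, the same query can be served programmatically by arXiv's public export API; a sketch that only constructs the request URL (the endpoint and the `search_query`/`max_results` parameters are the documented arXiv API ones; the topic is illustrative):

```shell
# Build an arXiv export-API query for "[topic] review".
TOPIC="quantum spin liquid"
QUERY=$(printf '%s review' "$TOPIC" | sed 's/ /+/g')   # spaces -> '+' for the query string
URL="http://export.arxiv.org/api/query?search_query=all:${QUERY}&max_results=20"
echo "$URL"
# Fetch with e.g.: curl -s "$URL"  (returns an Atom feed of matching papers)
```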
From review articles, extract:
For each foundational work, record:
anchor_id and a concrete locator if the work is contract-critical or likely to be reused downstream.

Build a citation timeline showing how the field developed. </step>
Catalog all methods that have been applied to this problem:
For each method:
| Field | Detail |
|---|---|
| Method name | Formal name and common abbreviations |
| Type | Analytical / Numerical / Mixed |
| Key idea | One-sentence description of the approach |
| Regime of validity | Where it works (weak coupling, high T, large N, etc.) |
| Limitations | Where it fails (strong coupling, low dimension, sign problem, etc.) |
| Accuracy | Typical precision achievable |
| Computational cost | Scaling with system size, time, memory |
| Key references | Original paper + best application to this system |
| Available codes | Open-source implementations, if any |
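For concreteness, a hypothetical filled-in entry (every value below is illustrative, not a surveyed result):

```markdown
| Field | Detail |
|---|---|
| Method name | Density Matrix Renormalization Group (DMRG) |
| Type | Numerical |
| Key idea | Variationally truncate the Hilbert space to the most entangled states |
| Regime of validity | Low-entanglement states; quasi-1D systems |
| Limitations | Entanglement growth in 2D and at criticality |
| Accuracy | Often near machine precision in 1D |
| Computational cost | Polynomial in bond dimension |
| Key references | (original paper + best application to this system) |
| Available codes | (open-source implementations, if any) |
```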
Organize methods by approach type:
Note which methods agree and where they disagree -- this reveals the interesting physics. </step>
For each significant result in the literature:
| Field | Detail |
|---|---|
| Quantity | What was computed (energy, correlation function, phase boundary, etc.) |
| Value | Numerical result or analytical expression |
| Method | How it was obtained |
| Uncertainty | Error bars, systematic uncertainties, convergence status |
| Conventions | Units, normalization, sign conventions used |
| Regime | Parameter values, approximations in effect |
| Reference | Full citation |
| Agreement | How it compares with other determinations |
Tabulate results for the SAME quantity across different papers and methods to expose convention mismatches, outliers, and genuine disagreements.
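A hypothetical cross-tabulation (the references and numbers are invented purely to show the shape):

```markdown
| Quantity | Ref. A (series expansion) | Ref. B (QMC) | Ref. C (DMRG) |
|---|---|---|---|
| Ground-state energy per site E0/J | -0.6693(1) | -0.66944(6) | -0.6694 |
```

Disagreements beyond the quoted error bars flag either a convention mismatch or real physics worth investigating.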
Map intellectual lineages:
This reveals:
Actively search for disagreements in the literature:
Numerical discrepancies: Different groups get different values for the same quantity
Methodological disagreements: Different methods give inconsistent results
Conceptual disagreements: Different physical interpretations of the same result
Convention conflicts: Different papers use different conventions
Systematically identify what has NOT been done:
For each gap:
Map the state-of-the-art:
Most recent results (last 1-2 years)
Active groups
Emerging methods
Community direction
mkdir -p .gpd/literature
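The {slug} in the filename can be derived from the topic; a minimal sketch (the lowercase-and-hyphenate rule is an assumption, not a gpd-mandated format, and the topic is illustrative):

```shell
# Derive a filesystem-safe slug from the review topic.
TOPIC="Quantum Spin Liquids"
SLUG=$(printf '%s' "$TOPIC" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
SLUG="${SLUG#-}"; SLUG="${SLUG%-}"   # trim stray leading/trailing hyphens
echo ".gpd/literature/${SLUG}-REVIEW.md"
```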
Write .gpd/literature/{slug}-REVIEW.md:
---