Structure and write a physics paper from research results
Codex shell compatibility:
gpd on PATH. Invoke as: GPD_ACTIVE_RUNTIME=codex uv run gpd ...
</codex_runtime_notes>

Orchestrator role: Establish paper scope and structure, spawn gpd-paper-writer agents for section drafting (wave-parallelized), gpd-bibliographer for citation verification, run the staged peer-review panel (gpd-review-reader, gpd-review-literature, gpd-review-math, gpd-review-physics, gpd-review-significance, then gpd-referee as final adjudicator), coordinate revisions, and ensure internal consistency.
Why subagent: Paper writing requires holding the full research context while drafting coherent prose. Each section needs access to derivations, numerical results, and literature context. Fresh 200k context per section ensures quality. Main context coordinates the overall structure.
Writing a physics paper is not writing a report. A paper has a narrative arc: it poses a question, develops the tools to answer it, presents the answer, and explains why the answer matters. Every equation must earn its place. Every figure must make a point. Every paragraph must advance the argument.
Routes to the write-paper workflow which handles all logic including:
<execution_context>
Called from $gpd-write-paper command. Sections are drafted by gpd-paper-writer agents. </purpose>
<core_principle> A physics paper has a narrative arc. It is not a report of everything that was done -- it is a carefully constructed argument that poses a question, develops the tools to answer it, presents the answer with evidence, and explains why the answer matters. Every equation, figure, and paragraph must advance this argument. Anything that doesn't is cut or moved to an appendix.
The narrative arc:
<journal_formats>
INIT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local init phase-op)
if [ $? -ne 0 ]; then
echo "ERROR: gpd initialization failed: $INIT"
# STOP — display the error to the user and do not proceed.
fi
Parse JSON for: commit_docs, state_exists, project_exists, project_contract, selected_protocol_bundle_ids, protocol_bundle_context, active_reference_context.
Load mode settings:
AUTONOMY=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw config get autonomy 2>/dev/null | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local json get .value --default balanced 2>/dev/null || echo "balanced")
RESEARCH_MODE=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw config get research_mode 2>/dev/null | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local json get .value --default balanced 2>/dev/null || echo "balanced")
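The two lookups above repeat the same pipeline; a small wrapper keeps them in sync. This is a sketch using the same CLI flags and the same "balanced" fallback as the commands above:

```shell
# Wrapper for the repeated config-get pipeline; falls back to "balanced"
# whenever the CLI is unavailable or the key is unset.
GPD_PY=/home/qol/.gpd/venv/bin/python
gpd_config_get() {
  "$GPD_PY" -m gpd.runtime_cli --runtime codex --config-dir ./.codex \
    --install-scope local --raw config get "$1" 2>/dev/null \
  | "$GPD_PY" -m gpd.runtime_cli --runtime codex --config-dir ./.codex \
    --install-scope local json get .value --default balanced 2>/dev/null \
  || echo "balanced"
}
AUTONOMY=$(gpd_config_get autonomy)
RESEARCH_MODE=$(gpd_config_get research_mode)
```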
Mode effects on the write-paper pipeline:
For detailed mode adaptation specifications (bibliographer search breadth, referee strictness, paper-writer style by mode), see ./.codex/get-physics-done/references/publication/publication-pipeline-modes.md.
Run centralized context preflight before continuing:
CONTEXT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw validate command-context write-paper "$ARGUMENTS")
if [ $? -ne 0 ]; then
echo "$CONTEXT"
exit 1
fi
Run the centralized review preflight before continuing:
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local validate review-preflight write-paper --strict
If review preflight exits nonzero because of missing project state, missing roadmap, missing manuscript, degraded review integrity, missing research artifacts, or non-review-ready reproducibility coverage, STOP and show the blocking issues before drafting.
Locate paper directory (if resuming):
for DIR in paper manuscript draft; do
if [ -f "${DIR}/main.tex" ]; then
PAPER_DIR="$DIR"
break
fi
done
If PAPER_DIR is set, the workflow is resuming or revising an existing paper. Otherwise, a new paper/ directory will be created in generate_files.
Check pdflatex availability:
command -v pdflatex >/dev/null 2>&1 && PDFLATEX_AVAILABLE=true || PDFLATEX_AVAILABLE=false
If PDFLATEX_AVAILABLE is false, display a warning:
⚠ pdflatex not found. LaTeX compilation checks will be skipped.
Install TeX Live or MacTeX for compilation verification during drafting.
The paper .tex files will still be generated correctly.
The workflow continues without compilation checks — .tex file generation does not require pdflatex.
Convention verification — papers must use consistent conventions throughout:
CONV_CHECK=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw convention check 2>/dev/null)
if [ $? -ne 0 ]; then
echo "WARNING: Convention verification failed — review before writing paper"
echo "$CONV_CHECK"
fi
If conventions are locked, all equations in the paper must follow them. Convention mismatches between research phases and the paper are a common source of sign errors and missing factors.
If selected_protocol_bundle_ids is non-empty, keep the bundle's decisive artifact guidance, estimator caveats, and reference prompts visible while choosing main-text figures, appendices, and related-work framing. Bundle guidance does not replace project_contract or override contract_results, comparison_verdicts, .gpd/comparisons/*-COMPARISON.md, .gpd/paper/FIGURE_TRACKER.md, or active_reference_context. Those remain authoritative.

Check for research digests generated during milestone completion. These digests are the primary structured handoff from the research phase and should drive paper organization.
Step 1 -- Locate digest files:
ls .gpd/milestones/*/RESEARCH-DIGEST.md 2>/dev/null
If digest(s) found:
Read all available digests:
cat .gpd/milestones/*/RESEARCH-DIGEST.md
Step 2 -- Map digest sections to paper structure:
The research digest provides a direct scaffolding for paper organization:
Narrative Arc -> Paper's logical flow. The narrative arc paragraph describes the research story from first question to final result. Use this as the backbone for the Introduction's argument and the overall section ordering. If the narrative naturally follows "we asked X, developed method Y, found Z," the paper sections should mirror that progression.
Key Results table -> Results section content. Each row in the key results table is a candidate for inclusion in the Results section. The equations/values, validity ranges, and confidence levels determine what gets presented as primary results vs. supporting evidence vs. appendix material.
Methods Employed -> Methods section. The phase-ordered methods list defines the tools developed or applied. Methods introduced early that underpin later results are the core of the Methods section. Methods used only in one phase may be relegated to a subsection or appendix.
Convention Evolution -> Notation consistency. The final active conventions from the convention timeline define the notation for the entire paper. Any superseded conventions must NOT appear in the manuscript. Build the paper's symbol table from the "Active" entries only.
Figures and Data Registry -> Figure planning. Figures marked "Paper-ready? Yes" are immediate candidates for the paper. Others may need regeneration. Use the registry to plan the figure sequence and identify gaps that need new figures.
Open Questions -> Discussion / Future Work. These feed directly into the Discussion section's "outlook" paragraphs or a dedicated Future Work subsection.
Dependency Graph -> Derivation ordering. The provides/requires graph shows which results depend on which. This determines the logical ordering of the Methods and Results sections -- a result that requires another result must come after it.
Mapping to Original Objectives -> Introduction framing. The requirements mapping shows which research goals were achieved. This helps frame the Introduction's promise ("In this paper, we...") and the Conclusions' delivery ("We have shown...").
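The dependency-graph ordering in the mapping above can be sketched with coreutils tsort, assuming the provides/requires edges can be written as "prerequisite dependent" token pairs. That serialization is an assumption, and the node names here are hypothetical:

```shell
# Each input pair "A B" means A must be established before B.
# Edges here are illustrative; in practice they come from the digest's
# provides/requires graph.
ORDER=$(printf 'question_X method_Y\nmethod_Y result_Z\n' | tsort)
echo "$ORDER"   # prints question_X, method_Y, result_Z on separate lines
```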
Step 3 -- Identify digest gaps:
If the digest is incomplete or missing sections, note which paper sections will need to be built from raw phase data instead:
# Fall back to raw sources if digest is insufficient
cat .gpd/phases/*-*/*-SUMMARY.md
cat .gpd/state.json
If NO digest found:
Display a clear warning explaining why and offering alternatives:
⚠ No RESEARCH-DIGEST.md found in .gpd/milestones/.
Research digests are generated during $gpd-complete-milestone. Without a digest,
the paper will be built from raw phase data (SUMMARY.md files, STATE.md, state.json).
This works but produces a less structured starting point — the digest provides
a curated narrative arc, convention timeline, and figure registry.
Options:
1. Continue anyway — build paper from raw phase data (proceed below)
2. Run $gpd-complete-milestone first — generates the digest, then return here
3. Use --from-phases to explicitly select which phases to include:
$gpd-write-paper --from-phases 1,2,3,5
If --from-phases flag is present: Read SUMMARY.md and research artifacts only from the specified phase directories. Skip milestone digest lookup entirely. This is useful for writing papers that cover a subset of phases or when milestones haven't been completed yet.
# Example: --from-phases 1,3,5
for PHASE_NUM in $(echo "$FROM_PHASES" | tr ',' ' '); do
PHASE_DIR=$(ls -d .gpd/phases/*/ | grep "^.gpd/phases/0*${PHASE_NUM}-")
cat "$PHASE_DIR"/*-SUMMARY.md 2>/dev/null
done
Proceed to establish_scope and catalog_artifacts, which will gather research context from raw phase data, SUMMARY.md files, and state.json directly.
If a research digest was loaded, the key result is typically the highest-confidence entry in the Key Results table. The narrative arc paragraph often contains the one-sentence key result in condensed form.
The key result drives everything. Every section exists to support, contextualize, or explain this result. </step>
Derivations -- LaTeX, Python scripts, Mathematica notebooks
Numerical results -- Data files, convergence tests, benchmarks
Figures -- Existing plots, phase diagrams, schematics
Literature context -- From .gpd/literature/*-REVIEW.md or phase RESEARCH.md
Verification results -- From VERIFICATION.md
Internal comparisons and decisive evidence -- From .gpd/comparisons/*-COMPARISON.md, FIGURE_TRACKER.md, and bundle context
Are there decisive comparison_verdicts for the paper's core claims?

Map each artifact to the section where it will appear. </step>
Before committing to an outline, verify the research is publication-ready. This pre-flight gate catches gaps that would block or undermine the paper.
Run checks across all contributing phases (from digest, --from-phases, or all completed phases):
# Identify contributing phases
if [ -n "$FROM_PHASES" ]; then
PHASE_DIRS=$(for n in $(echo "$FROM_PHASES" | tr ',' ' '); do ls -d .gpd/phases/0*${n}-* 2>/dev/null; done)
else
PHASE_DIRS=$(ls -d .gpd/phases/*/ 2>/dev/null)
fi
Every contributing phase must have a SUMMARY.md that tells the paper what user-visible result it contributes. For contract-backed phases, contract_results and any decisive comparison_verdicts are the readiness anchors; generic verification_status / confidence tags are optional hints, not gates.
For each phase directory:
Check for plan_contract_ref and contract_results.
Check for a decisive comparison_verdicts entry and an evidence path the manuscript can surface.

Missing SUMMARY.md → CRITICAL gap (phase results not summarized).
Contract-backed phase missing contract_results for a paper-relevant target → CRITICAL gap.
Decisive comparison required by the contract but no verdict/evidence path is surfaced → CRITICAL gap.
Missing generic verification_status / confidence tags alone are not blockers.
Read the convention declarations from each phase's SUMMARY.md or derivation files and compare:
Also check convention_lock in STATE.md or state.json. If a convention lock exists, verify all phases comply.
Convention mismatch between phases → CRITICAL gap (combining results with different conventions produces wrong answers).
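A quick mechanical pass for the comparison above, assuming each phase SUMMARY declares its conventions on lines beginning "Convention:" (that prefix is an assumption about the SUMMARY format):

```shell
# Count distinct convention declarations across all phase SUMMARYs;
# more than one unique line suggests a cross-phase mismatch to inspect.
CONV_VARIANTS=$(grep -h "^Convention:" .gpd/phases/*/*-SUMMARY.md 2>/dev/null | sort -u | wc -l)
if [ "$CONV_VARIANTS" -gt 1 ]; then
  echo "WARNING: ${CONV_VARIANTS} distinct convention declarations found across phases"
fi
```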
For each key result listed in the research digest (or intermediate_results in state.json):
Values differ by more than stated uncertainty → CRITICAL gap. Values lack uncertainty estimates → WARNING.
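The stability comparison can be sketched as a small numeric check. The helper name and the uncertainty-combination rule (linear addition of the two stated uncertainties) are illustrative choices, not the project's prescribed statistic:

```shell
# Exit 0 when |v1 - v2| <= (err1 + err2), i.e. the two determinations
# agree within their combined stated uncertainty.
check_agreement() {
  awk -v a="$1" -v ea="$2" -v b="$3" -v eb="$4" 'BEGIN {
    d = a - b; if (d < 0) d = -d
    exit !(d <= ea + eb)
  }'
}
check_agreement 1.234 0.005 1.236 0.004 && echo "values consistent"
```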
Check whether planned figures have source data and generation scripts:
# Check durable figure roots, not internal phase scratch paths
find artifacts/phases figures paper/figures -maxdepth 3 \( -type f -o -type d \) 2>/dev/null
ls .gpd/paper/FIGURE_TRACKER.md 2>/dev/null
For each figure referenced in the research digest or artifact catalog:
Is there a generation script (.py, .m, .nb)?
Source data missing → CRITICAL gap.
Script missing but data exists → WARNING (script can be written during generate_figures).
Script exists but not run → INFO (will be run during generate_figures).
Check for bibliography infrastructure:
ls references/references.bib 2>/dev/null
ls paper/references.bib 2>/dev/null
ls .gpd/literature/*-REVIEW.md 2>/dev/null
Does a bibliography file exist (references/references.bib or paper/references.bib)?
Do .gpd/literature/*-REVIEW.md or phase RESEARCH.md exist?
No bibliography file and no literature review → WARNING (citations will need to be built from scratch).
Check that the manuscript can surface the decisive evidence, not just supporting narrative:
Read .gpd/comparisons/*-COMPARISON.md and note every decisive comparison_verdicts entry.
Read .gpd/paper/FIGURE_TRACKER.md and confirm those decisive claims have a planned figure, table, or explicit textual comparison path.
If selected_protocol_bundle_ids is non-empty, use protocol_bundle_context only as an additive expectation map for which anchors, estimator caveats, or benchmark comparisons should stay visible in the paper.
Decisive comparison missing for a central claim → CRITICAL gap.
Bundle guidance suggests a decisive comparison that is absent, but the manuscript narrows the claim honestly → WARNING, not blocker.
Present results as a readiness report:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GPD > PAPER-READINESS AUDIT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Phases audited: {N}
Check Status Issues
─────────────────────────────────────────────
SUMMARY completeness {P/F} {details}
Convention consistency {P/F} {details}
Numerical stability {P/F} {details}
Figure readiness {P/F} {details}
Citation readiness {P/F} {details}
Decisive comparisons {P/F} {details}
CRITICAL gaps: {count}
Warnings: {count}
If there are no CRITICAL gaps, proceed to create_outline. If CRITICAL gaps were found, present:

Paper-readiness audit found {N} critical gap(s):
{numbered list of critical gaps with phase and description}
Options:
1. Fix gaps first — return to research phases to address critical issues
2. Proceed anyway — acknowledge gaps as known limitations in the paper
3. Exclude problematic phases — re-scope paper with --from-phases to skip incomplete phases
Wait for user decision before proceeding. Do NOT silently continue past critical gaps.
For each section:
The outline must satisfy:
Present outline for approval before proceeding. </step>
paper/
+-- main.tex # Master document with \input commands
+-- abstract.tex
+-- introduction.tex
+-- model.tex # or setup.tex
+-- methods.tex # or derivation.tex
+-- results.tex
+-- discussion.tex
+-- conclusions.tex
+-- appendix_A.tex # if needed
+-- appendix_B.tex # if needed
+-- references.bib # BibTeX entries
+-- figures/ # All figure files
+-- Makefile # Build: pdflatex + bibtex
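The Makefile's build target typically encodes the standard pdflatex/bibtex cycle. A sketch of that sequence as a shell function (assumes a TeX distribution on PATH and is run from the repository root):

```shell
# Standard LaTeX build cycle: compile, resolve citations with bibtex,
# then compile twice more so cross-references and the bibliography settle.
build_paper() {
  (
    cd paper || exit 1
    pdflatex -interaction=nonstopmode main.tex &&
    bibtex main &&
    pdflatex -interaction=nonstopmode main.tex &&
    pdflatex -interaction=nonstopmode main.tex
  )
}
```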
The main.tex should:
Use the standard preamble (see ./.codex/get-physics-done/templates/latex-preamble.md for standard packages, project-specific macros, equation labeling conventions, and SymPy-to-LaTeX integration).
If the project has a .gpd/analysis/LATEX_PREAMBLE.md, use its macros to ensure notation consistency with the research phases.
If a machine-readable paper spec is available, prefer the canonical builder:
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local paper-build paper/PAPER-CONFIG.json
This emits paper/main.tex, writes the artifact manifest, and keeps the manuscript scaffold aligned with the tested gpd.mcp.paper package. If no JSON spec exists yet, create paper/PAPER-CONFIG.json first using ./.codex/get-physics-done/templates/paper/paper-config-schema.md as the schema source of truth, and then run gpd paper-build before proceeding. The compilation checks in draft_sections require main.tex to exist.
When authoring paper/PAPER-CONFIG.json:
Follow ./.codex/get-physics-done/templates/paper/paper-config-schema.md as the schema source of truth.
Provide authors, sections, figures, and appendix_sections as JSON arrays.
Set journal to a supported builder key like prl, apj, mnras, nature, jhep, or jfm.

Supplemental material: If the paper requires supplemental material (common for PRL and other letter-format journals), use ./.codex/get-physics-done/templates/paper/supplemental-material.md for the standard structure (extended derivations, computational details, additional figures, data tables, code availability).
Experimental comparison: If the paper compares theoretical predictions with experimental or observational data, use ./.codex/get-physics-done/templates/paper/experimental-comparison.md for the systematic comparison structure (data source metadata, unit conversion checklist, pull analysis, chi-squared statistics, discrepancy classification with root cause hierarchy).
</step>
Ensure the paper directory structure exists before writing any files:
mkdir -p paper/figures
Before drafting sections, generate all planned figures:
a. Read .gpd/paper/FIGURE_TRACKER.md for figure specifications
b. Apply plt.style.use('paper/paper.mplstyle') if it exists, otherwise use sensible defaults
c. Save figures to paper/figures/
d. Update FIGURE_TRACKER.md status

If figure data is missing: Flag as blocker, suggest which phase needs re-execution. </step>
WRITER_MODEL=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local resolve-model gpd-paper-writer)
Spawn gpd-paper-writer agents for section drafting.
Section drafting order (with parallelization):
LaTeX compilation check after each wave (if pdflatex available):
Skip this check if PDFLATEX_AVAILABLE is false (set in init step).
After each drafting wave completes, verify the document compiles:
cd paper/
pdflatex -interaction=nonstopmode main.tex 2>&1 | tail -20
If compilation errors:
grep -A 3 "^!" main.log | head -10
If compilation succeeds: Proceed to next wave. Run bibtex after the bibliography wave.
This prevents error accumulation across waves.
Per-wave checkpointing and failure recovery:
Before spawning each wave, check if the target .tex files already exist on disk. If they do, skip that wave and move to the next. On re-invocation, the workflow detects already-written sections and resumes from the first incomplete wave.
# Example: check Wave 1 outputs before spawning
if [ -f "paper/results.tex" ] && [ -f "paper/methods.tex" ]; then
echo "Wave 1 outputs exist -- skipping to Wave 2"
else
# Spawn Wave 1 agents
fi
Apply this pattern to each wave: check for the expected .tex output files before spawning writer agents.
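The per-wave check generalizes to a small helper (the function name is illustrative):

```shell
# Succeeds only when every expected output file for a wave already
# exists under paper/.
wave_done() {
  for f in "$@"; do
    [ -f "paper/$f" ] || return 1
  done
}
if wave_done results.tex methods.tex; then
  echo "Wave 1 outputs exist -- skipping to Wave 2"
fi
```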
For each section, spawn a writer agent:
Runtime delegation: Spawn a subagent for the task below. Adapt the task() call to your runtime's agent spawning mechanism. If model resolves to null or an empty string, omit it so the runtime uses its default model. Always pass readonly=false for file-producing agents. If subagent spawning is unavailable, execute these steps sequentially in the main context.
task(
prompt="First, read ./.codex/agents/gpd-paper-writer.md for your role and instructions.\n\n" + section_prompt,
subagent_type="gpd-paper-writer",
model="{writer_model}",
readonly=false,
description="Draft: {section_name}"
)
If a writer agent fails to spawn or returns an error: Check if the expected .tex file was written to paper/ (agents write files first). If the file exists, proceed to the next section. If not, offer: 1) Retry the failed section, 2) Draft the section in the main context using the section brief, 3) Skip the section and continue with remaining waves. Do not block the entire paper on a single section failure — other sections can still be drafted in parallel.
Each writer agent receives:
Decisive comparison evidence (.gpd/comparisons/*-COMPARISON.md) and relevant FIGURE_TRACKER.md entries for any contract-critical figure or table.
protocol_bundle_context and selected_protocol_bundle_ids as additive specialized guidance only; they help decide which decisive anchors, estimator caveats, and benchmark comparisons must stay visible, but they do not replace the contract-backed evidence ledger.

What makes good physics writing:
Numbered if referenced (Eq. (1), (2), ...). Unnumbered if not referenced.
Defined completely -- Every symbol defined at first appearance
Dimensionally consistent -- Author has verified dimensions
Typeset correctly -- LaTeX best practices:
\left( \right) for auto-sizing delimiters
\mathrm{d} for the differential d (upright, not italic)
\text{...} for words within equations
\boldsymbol{} for vector/tensor quantities (or \vec{} if the journal prefers)
\cdot vs. \times chosen deliberately for products

Contextualized -- Each equation has text before (setup) and after (interpretation)
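A minimal snippet applying these typesetting practices (the equation content is illustrative only, not a project result):

```latex
% \mathrm{d} for the differential, \left(...\right) sizing, \boldsymbol
% for vectors, \cdot for the dot product, \text for words inside math.
\begin{equation}
  \frac{\mathrm{d}E}{\mathrm{d}t}
  = -\gamma \left( \boldsymbol{p} \cdot \boldsymbol{v} \right),
  \qquad \text{for } t > 0,
  \label{eq:energy-loss}
\end{equation}
```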
Bad: "From the Lagrangian we get" [equation] "which we use below." Good: "Varying the action with respect to phi yields the equation of motion" [equation] "This is a nonlinear Klein-Gordon equation, with the potential V'(phi) acting as an effective mass that depends on the field value." </step>
Caption format:
\begin{figure}
\includegraphics[width=\columnwidth]{figures/fig_energy.pdf}
\caption{Ground-state energy $E_0$ as a function of coupling $g$ for $N = 100$ sites.
Solid line: exact diagonalization. Dashed line: mean-field theory.
Error bars are smaller than symbol size for all data points.
Inset: relative difference between ED and MFT, showing $O(1/N)$ corrections.}
\label{fig:energy}
\end{figure}
Notation audit:
Cross-reference audit:
Placeholder resolution:
Scan all .tex files for RESULT PENDING markers left by the paper-writer:
grep -rn "RESULT PENDING" paper/*.tex
For each % [RESULT PENDING: phase N, task M -- description]:
Replace \text{[PENDING]} with the actual value and remove the % [RESULT PENDING: ...] comment.

GATE: All RESULT PENDING markers must be resolved before proceeding to verify_references.
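A mechanical sketch of one marker resolution on a scratch file. The value 1.234 and the file name are hypothetical, and the sed -i form assumes GNU sed (BSD sed needs `sed -i ''`):

```shell
mkdir -p paper
# Scratch line in the shape the paper-writer leaves behind:
printf 'E_0 = \\text{[PENDING]} %% [RESULT PENDING: phase 2, task 3 -- ground-state energy]\n' \
  > paper/_pending_demo.tex
# Substitute the verified value, then strip the marker comment:
sed -i 's/\\text{\[PENDING\]}/1.234/; s/ *% \[RESULT PENDING:[^]]*\]//' paper/_pending_demo.tex
cat paper/_pending_demo.tex   # → E_0 = 1.234
```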
PENDING_COUNT=$(grep -hoE "RESULT PENDING|\\\\text\{\\[PENDING\\]\}" paper/*.tex 2>/dev/null | wc -l)
If PENDING_COUNT > 0:
ERROR: ${PENDING_COUNT} unresolved RESULT PENDING marker(s) found.
A paper with placeholder values is not submission-ready.
Unresolved markers:
$(grep -rn "RESULT PENDING" paper/*.tex 2>/dev/null)
Options:
1. Resolve markers from phase SUMMARYs (attempt auto-fill)
2. Return to research phases to complete missing results
3. List all pending markers for manual resolution
HALTING — do NOT proceed to verify_references until all markers are resolved.
Do NOT proceed to the verify_references step. This is a hard gate.
Physics consistency:
Narrative flow:
After all sections are drafted, run a systematic notation check:
Check for notation glossary:
ls .gpd/NOTATION_GLOSSARY.md 2>/dev/null
If NOTATION_GLOSSARY.md does not exist, skip step 2 below and note in the report that no glossary was available for cross-referencing. The consistency checks (steps 1, 3, 4) still run — they compare the paper against itself.
Resolve bibliographer model:
BIBLIO_MODEL=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local resolve-model gpd-bibliographer)
Runtime delegation: Spawn a subagent for the task below. Adapt the task() call to your runtime's agent spawning mechanism. If model resolves to null or an empty string, omit it so the runtime uses its default model. Always pass readonly=false for file-producing agents. If subagent spawning is unavailable, execute these steps sequentially in the main context.
task(
subagent_type="gpd-bibliographer",
model="{biblio_model}",
readonly=false,
prompt="First, read ./.codex/agents/gpd-bibliographer.md for your role and instructions.
Verify all references in the paper and audit citation completeness.
Mode: Audit bibliography + Audit manuscript
Paper directory: paper/
Bibliography: `references/references.bib` (preferred) or `paper/references.bib` if the manuscript keeps a local copy
Manuscript files: paper/*.tex
Target journal: {target_journal}
Tasks:
1. Verify every entry in the active bibliography file against authoritative databases (INSPIRE, ADS, arXiv)
2. Check all \cite{} keys in .tex files resolve to bibliography entries
3. Detect orphaned bibliography entries (not cited in any .tex file)
4. Scan for uncited named results, theorems, or methods that should have citations
5. Verify BibTeX formatting matches {target_journal} requirements
6. Check arXiv preprints for published versions (update stale preprint-only entries)
Write audit report to paper/CITATION-AUDIT.md
Return BIBLIOGRAPHY UPDATED or CITATION ISSUES FOUND."
)
If the bibliographer agent fails to spawn or returns an error: Proceed without bibliography verification — note in the paper status that citations are unverified. The user should run $gpd-literature-review to verify citations after the paper is written.
If CITATION ISSUES FOUND:
Update .gpd/references-status.json.
Resolve MISSING: markers: for each entry in resolved_markers, find-and-replace \cite{MISSING:X} → \cite{resolved_key} in all .tex files and remove the associated % MISSING CITATION: comment.

If BIBLIOGRAPHY UPDATED:
Use the canonical schema:
./.codex/get-physics-done/templates/paper/reproducibility-manifest.md

Create or update:
paper/reproducibility-manifest.json

Minimum required inputs:
paper/ARTIFACT-MANIFEST.json
paper/BIBLIOGRAPHY-AUDIT.json
.gpd/paper/FIGURE_TRACKER.md
SUMMARY.md / VERIFICATION.md evidence for decisive claims, figures, and comparisons

Validate it before entering strict review:
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw validate reproducibility-manifest paper/reproducibility-manifest.json --strict
If validation fails, stop and fix the manifest now. Do not enter pre_submission_review with a missing or non-review-ready reproducibility manifest, because strict review preflight will block on it.
</step>
Standalone entrypoint: $gpd-peer-review is the first-class command for re-running this stage outside the write-paper pipeline. This embedded step must stay behaviorally aligned with that command and use the same six-agent panel:
gpd-review-reader
gpd-review-literature
gpd-review-math
gpd-review-physics
gpd-review-significance
gpd-referee as final adjudicator

For the detailed staging, artifact naming, round handling, CLAIMS.json / STAGE-*.json outputs, REVIEW-LEDGER.json, REFEREE-DECISION.json, and recommendation guardrails, follow @./.codex/get-physics-done/workflows/peer-review.md exactly, using paper/main.tex as the resolved target and the current draft's bibliography and audit artifacts. Keep the current project_contract and active_reference_context visible throughout that staged review; they remain authoritative when judging whether the manuscript has surfaced decisive evidence honestly.
If the staged panel fails: Do not silently waive the review. Note the failure and recommend running $gpd-peer-review directly after resolving the blocking issue.
After final adjudication:
Read .gpd/review/REFEREE-DECISION.json and .gpd/review/REVIEW-LEDGER.json first when they exist, then read .gpd/REFEREE-REPORT.md and assess the findings:
accept or minor_revision with 0 major issues: Proceed to final_review. Note minor issues for the user.
major_revision or reject: Present the major issues to the user before proceeding. For each major issue, show the location, description, and suggested fix. Ask the user whether to fix them now or proceed to final_review anyway (accept the issues as known limitations).

7. Run paper quality scoring (see ./.codex/get-physics-done/references/publication/paper-quality-scoring.md):
Score the paper across 7 dimensions (equations, figures, citations, conventions, verification, completeness, results presentation) for a total out of 100. Apply journal-specific multipliers for the target journal.
QUALITY=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw validate paper-quality --from-project . 2>/dev/null)
The score should be artifact-driven, not manually estimated. Use:
paper/ARTIFACT-MANIFEST.json
paper/BIBLIOGRAPHY-AUDIT.json
.gpd/paper/FIGURE_TRACKER.md frontmatter figure_registry
.gpd/comparisons/*-COMPARISON.md
SUMMARY.md / VERIFICATION.md contract_results and comparison_verdicts

Treat paper-support artifacts as scaffolding, not as proof that a claim is established. Missing decisive comparison evidence still blocks a strong submission recommendation even if manifests and audits are complete.
Present the quality score report. If score < journal minimum, list specific items to fix before submission. If score >= minimum, recommend proceeding to $gpd-arxiv-submission.
Present summary to user with build instructions, quality score, and next steps. </step>
Note: For a dedicated referee response workflow, use $gpd-respond-to-referees. This step handles revision when invoked from within the write-paper pipeline.
When revising a paper in response to referee reports:
Parse the referee report: Extract each numbered point as a structured item with:
Produce AUTHOR-RESPONSE.md: Spawn a paper-writer agent to produce the structured author response that the gpd-referee expects for multi-round review:
task(
subagent_type="gpd-paper-writer",
model="{writer_model}",
readonly=false,
prompt="First, read ./.codex/agents/gpd-paper-writer.md for your role and instructions.\n\nRead your <author_response> protocol. Produce an AUTHOR-RESPONSE file.\n\n" +
"Referee report: .gpd/REFEREE-REPORT{-RN}.md\n" +
"Review ledger (if present): .gpd/review/REVIEW-LEDGER{-RN}.json\n" +
"Decision artifact (if present): .gpd/review/REFEREE-DECISION{-RN}.json\n" +
"Manuscript: paper/*.tex\n" +
"Round: {N}\n\n" +
"For each REF-xxx issue, classify as fixed/rebutted/acknowledged. Use the JSON artifacts to identify blocking issues and decision-floor reasons, but keep REF-xxx IDs from the report.\n" +
"Write to .gpd/AUTHOR-RESPONSE{-RN}.md",
description="Author response: round {N}"
)
If the author-response agent fails to spawn or returns an error: Check if .gpd/AUTHOR-RESPONSE{-RN}.md was written (agents write files first). If it exists, proceed to section revision. If not, offer: 1) Retry the agent, 2) Draft the author response in the main context using the referee report and manuscript, 3) Skip structured response and proceed directly to section revisions.
The AUTHOR-RESPONSE.md uses REF-xxx issue IDs matching the referee report, with classifications (fixed/rebutted/acknowledged) and specific change locations. When present, REVIEW-LEDGER{-RN}.json and REFEREE-DECISION{-RN}.json provide the blocking-issue and recommendation-floor context that the response must resolve. See the gpd-paper-writer's <author_response> section for the full format.
Also create paper/REFEREE_RESPONSE.md (the human-readable response letter) using the templates/paper/referee-response.md template for the actual journal submission cover letter.
Spawn section revision agents: For each major concern requiring manuscript changes, spawn a paper-writer agent with:
Track new calculations: If referee requests require new derivations or simulations, create tasks in .gpd/paper/REVISION_TASKS.md and route to appropriate phases.
Verify consistency: After all revisions, re-run the consistency_check and notation_audit steps to ensure revisions don't introduce new inconsistencies.
After section revision agents complete, run the pre_submission_review step again to check if the revisions resolved the issues. Track iteration count.
Iteration flow:
Paper revision loop reached maximum iterations (3).
**Remaining issues ({N}):**
{list of unresolved issues from latest pre_submission_review}
Options:
1. Proceed to final_review anyway (accept known issues)
2. Manually edit the affected sections
3. Return to research phases to address underlying problems
Each iteration should be targeted -- only revise sections flagged by the reviewer, not the entire paper. This prevents introducing new issues while fixing old ones. </step>
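The bounded revision loop above can be sketched as follows; `run_review` and `revise_flagged_sections` stand in for the pre_submission_review step and the targeted section-revision agents (the function names are illustrative, not part of the gpd CLI):

```python
MAX_ITERATIONS = 3

def revision_loop(run_review, revise_flagged_sections):
    """Revise only the flagged sections, re-review, and stop at the cap.

    `run_review` returns a list of unresolved issues (empty when clean);
    `revise_flagged_sections` applies targeted fixes to the flagged sections.
    Returns (resolved, remaining_issues).
    """
    issues = []
    for iteration in range(1, MAX_ITERATIONS + 1):
        issues = run_review()
        if not issues:
            return True, []           # all issues resolved; proceed to final_review
        if iteration == MAX_ITERATIONS:
            return False, issues      # escalate to the user with options 1-3 above
        revise_flagged_sections(issues)
    return False, issues
```

The cap guarantees the workflow surfaces unresolved issues to the user rather than revising indefinitely.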
<success_criteria>
Canonical source of truth for paper/PAPER-CONFIG.json, the machine-readable paper build spec consumed by gpd paper-build.
Create this JSON before asking the builder to emit paper/main.tex when no tested paper config already exists. Do not invent extra top-level keys or replace arrays with prose.
{
"title": "Benchmark Recovery in a Controlled Regime",
"authors": [
{
"name": "A. Researcher",
"email": "[email protected]",
"affiliation": "Department of Physics, Example University"
}
],
"abstract": "One paragraph stating the question, method, decisive result, and why it matters.",
"sections": [
{
"heading": "Introduction",
"content": "\\\\section{Introduction}\\nState the problem, stakes, and contract-backed claim.",
"label": "sec:intro"
},
{
"heading": "Results",
"content": "\\\\section{Results}\\nPresent the decisive benchmark comparison and uncertainty bounds.",
"label": "sec:results"
}
],
"figures": [
{
"path": "figures/benchmark.pdf",
"caption": "Benchmark comparison with uncertainty bands.",
"label": "fig:benchmark",
"width": "\\\\columnwidth",
"double_column": false
}
],
"acknowledgments": "Funding, collaborators, and compute support.",
"bib_file": "references",
"journal": "prl",
"appendix_sections": [
{
"heading": "Supplementary Derivation",
"content": "\\\\section{Supplementary Derivation}\\nDetailed algebra moved out of the main text.",
"label": "app:derivation"
}
],
"attribution_footer": "Generated with Get Physics Done"
}
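Before invoking `gpd paper-build`, a lightweight pre-flight check can catch the most common spec mistakes. A sketch (the key lists mirror the field rules that follow; the builder's typed PaperConfig validation remains the authority):

```python
import json

REQUIRED_KEYS = {"title", "authors", "abstract", "sections"}
ARRAY_KEYS = {"authors", "sections", "figures", "appendix_sections"}

def preflight(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec looks sane."""
    problems = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - config.keys())]
    for key in ARRAY_KEYS & config.keys():
        if not isinstance(config[key], list):
            problems.append(f"{key} must be a JSON array")
    if config.get("bib_file", "references").endswith(".bib"):
        problems.append("bib_file must be a stem, not end in .bib")
    return problems

# Usage: problems = preflight(json.load(open("paper/PAPER-CONFIG.json")))
```

This does not replace the contract validation inside `gpd paper-build`; it only saves a round-trip when a required key is missing or an array was written as prose.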
Field rules:

- `title`: non-empty string
- `authors`: array of objects with `name`; `email` and `affiliation` are optional strings
- `abstract`: non-empty string
- `sections`: array of section objects

Each section object must include:

- `heading`: non-empty section title
- `content`: LaTeX-ready section body string

Optional:

- `label`: string such as `sec:intro`

Notes:

- The builder accepts `title` in place of `heading`, but prefer `heading` in JSON examples and generated specs so the intent is obvious.
- `content` should already be valid LaTeX prose/equations, not a placeholder like "TODO".

Each figure object must include:

- `path`: path to the figure file
- `caption`: non-empty caption
- `label`: LaTeX label such as `fig:benchmark`

Optional:

- `width`: LaTeX width string, default `\columnwidth`
- `double_column`: boolean, default false

Rules:

- Figure `path` values must resolve to real files when `gpd paper-build` runs.

Optional top-level fields:

- `acknowledgments`: string
- `bib_file`: bibliography stem without `.bib`, default `references`
- `journal`: journal key, default `prl`
- `appendix_sections`: array of section objects
- `attribution_footer`: string footer appended by the builder

`journal` values. The paper builder currently supports:

- `prl`
- `apj`
- `mnras`
- `nature`
- `jhep`
- `jfm`

Choose one supported key exactly. Do not use freeform journal names here.

Additional rules:

- Keep list-valued fields (`authors`, `sections`, `figures`, `appendix_sections`) as JSON arrays.
- Never omit `authors` or `sections`, even for minimal drafts.
- Write `bib_file` as a stem like `references`, not `references.bib`.
- When there are no figures, use `"figures": []` rather than prose.

/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local paper-build paper/PAPER-CONFIG.json
This validates the JSON against the typed PaperConfig contract, resolves figure paths, and emits the canonical manuscript scaffold plus paper artifacts.
</execution_context>
Check for existing drafts:
ls paper/ manuscript/ draft/ 2>/dev/null
ls .gpd/paper/*.md 2>/dev/null
find . -name "*.tex" -maxdepth 2 2>/dev/null | head -10
Load research context:
cat .gpd/ROADMAP.md 2>/dev/null
ls .gpd/phases/*/SUMMARY.md .gpd/phases/*/*-SUMMARY.md 2>/dev/null
cat .gpd/research-map/FORMALISM.md 2>/dev/null
The workflow handles all logic including:
- Run `gpd init phase-op`, check `pdflatex` availability, and verify conventions
- Accept a `--from-phases` flag to select specific phases
- Generate `paper/PAPER-CONFIG.json` using @./.codex/get-physics-done/templates/paper/paper-config-schema.md, then materialize the canonical manuscript scaffold with `gpd paper-build` (emits `paper/main.tex`, bibliography artifacts, and `paper/ARTIFACT-MANIFEST.json`)
- Run the `$gpd-peer-review` stage

For a standalone rerun of the referee stage after the manuscript already exists, use `$gpd-peer-review`.
</process>
<success_criteria>