Initialize a new physics research project with deep context gathering and PROJECT.md
Codex shell compatibility:
- gpd on PATH
- GPD_ACTIVE_RUNTIME=codex uv run gpd ...
</codex_runtime_notes><codex_questioning>
If no project config exists yet, the workflow opens with the physics-questioning pass, then asks for workflow preferences only after scope approval and before the first project-artifact commit.
Creates:
- .gpd/PROJECT.md — research project context
- .gpd/config.json — workflow preferences
- .gpd/research/ — domain and literature research (optional)
- .gpd/REQUIREMENTS.md — scoped research requirements
- .gpd/ROADMAP.md — phase structure
- .gpd/STATE.md — project memory
- .gpd/state.json project_contract — authoritative machine-readable scoping contract

After this command: Run $gpd-plan-phase 1 to start execution.
</objective>
<execution_context>
<required_reading> Read all files referenced by the invoking prompt's execution_context before starting. </required_reading>
<auto_mode>
Check if --auto flag is present in $ARGUMENTS.
If auto mode:
- Respect autonomy: balanced / yolo auto-approve the roadmap; if autonomy=supervised, present the draft roadmap before commit.

Document requirement:
Auto mode requires a research document via @ reference (e.g., $gpd-new-project --auto @proposal.md). If no document provided, error:
Error: --auto requires a research document via @ reference.
Usage: $gpd-new-project --auto @your-proposal.md
The document should describe the physics problem you want to investigate.
</auto_mode>
<minimal_mode>
Check if --minimal flag is present in $ARGUMENTS.
If minimal mode: After Step 1 (Setup), skip the entire standard flow (Steps 2-9) and execute the Minimal Initialization Path below instead.
Minimal mode creates the SAME directory structure and file set as the full path -- just with less conversational overhead. It still must produce a scoping contract with decisive outputs, anchors, and explicit approval so downstream workflows ($gpd-plan-phase, $gpd-execute-phase, etc.) work identically.
Two variants:
- --minimal @file.md — Input file provided. Parse it for research context.
- --minimal (no file) — Ask ONE question, then generate everything from the response.

After Step 1 completes (init checks, git, project_exists guard):
If --minimal with file ($gpd-new-project --minimal @plan.md):
Parse the input markdown for:
If the file cannot be parsed (no discernible research question or objective), error:
Error: Could not extract research context from the provided file.
The file should contain at minimum:
- A research question or objective
It should ideally also name at least one decisive output, anchor, prior output, or explicit "anchor unknown / need grounding" note so any repair prompt can stay narrow.
Example structure:
# Research Question
What is the critical exponent of the 3D Ising model?
# Success Signal
Extract the critical exponent and compare it against a trusted benchmark.
# Anchors
Compare against the known 3D Ising result from the literature.
# Optional First Investigation Chunk
Set up the Monte Carlo simulation and finite-size scaling workflow.
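The section extraction implied by the example above can be sketched as follows. The heading names come from the example structure; a real parsing pass would need to tolerate wording variations, and the `awk` one-liner is an illustrative assumption, not a mandated implementation.

```shell
# Sketch: pull the text under one H1 heading out of a minimal input file.
INPUT='# Research Question
What is the critical exponent of the 3D Ising model?
# Success Signal
Extract the critical exponent and compare it against a trusted benchmark.'

# Print lines after "# Research Question" until the next H1 heading.
QUESTION=$(printf '%s\n' "$INPUT" | awk '/^# Research Question/{f=1;next} /^# /{f=0} f')
echo "$QUESTION"
```

If no section can be recovered this way, that corresponds to the "cannot be parsed" error path above.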
If --minimal without file ($gpd-new-project --minimal):
Ask exactly one inline freeform question with no preamble or restatement:
"Describe your research project in one pass: what's the core question, what output, claim, or deliverable would count as success, what references, prior outputs, or known results must stay visible, whether the anchor is still unknown, any first investigation chunk you already know, and what would make you rethink the approach?"
Wait for response. From the single response, extract:
Build a canonical scoping contract from the extracted input.
Blocking fields that must be present before approval:
Fields to capture even if still uncertain:
Preservation rule: If the user names a specific observable, figure, dataset, derivation, paper, benchmark, notebook, prior run, or stop condition, keep that wording recognizable in the contract. Do not generalize it away into a vague proxy.
If the user does not know the anchor yet, preserve that explicitly in scope.unresolved_questions or context_intake.context_gaps rather than inventing a paper, benchmark, or baseline.
Prefer explicit missing-anchor wording such as "Which reference should serve as the decisive benchmark anchor?", "Benchmark reference not yet selected", "still to identify the decisive anchor", or "baseline comparison is TBD".
Do not force a phase list just to make the scoping contract look complete. If decomposition is still unclear, record that uncertainty and let ROADMAP.md start with a single coarse phase or first grounded investigation chunk.
If a blocking field is missing, ask exactly one repair prompt that targets only the missing field. Do not silently continue with placeholders.
Before you show the approval gate, build the raw contract as a literal JSON object that follows templates/state-json-schema.md exactly:
- project_contract is a JSON object, not prose
- observables, claims, deliverables, acceptance_tests, references, forbidden_proxies, and links are arrays of objects, not strings
- every entry declares a stable id
- context_intake.must_read_refs must contain only references[].id values
- claims[].observables, claims[].deliverables, claims[].acceptance_tests, and claims[].references must point only to declared IDs
- acceptance_tests[].subject, references[].applies_to, and forbidden_proxies[].subject must point to a claim ID or deliverable ID, never an observable label or free text
- acceptance_tests[].evidence_required, links[].source, and links[].target may only point to declared claim, deliverable, acceptance-test, or reference IDs
- observables[].kind: scalar | curve | map | classification | proof_obligation | other
- deliverables[].kind: figure | table | dataset | data | derivation | code | note | report | other
- acceptance_tests[].kind: existence | schema | benchmark | consistency | cross_method | limiting_case | symmetry | dimensional_analysis | convergence | oracle | proxy | reproducibility | human_review | other
- acceptance_tests[].automation: automated | hybrid | human
- references[].kind: paper | dataset | prior_artifact | spec | user_anchor | other
- references[].role: definition | benchmark | method | must_consider | background | other
- links[].relation: supports | computes | visualizes | benchmarks | depends_on | evaluated_by | other
- references[].carry_forward_to[] is free-text workflow scope such as planning, execution, verification, or writing; it is not an enum and must not be reused for IDs or relation names
- Near-miss terms such as anchor, manual, content-check, benchmark-record, or anchors are not schema terms; rewrite them to the exact schema term before approval

Then present a concise scoping summary and require explicit approval:
CRITICAL: Minimal mode is still allowed to be lean, but it is not allowed to be contract-free.
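For orientation, a contract shaped by those ID-reference rules could look like the following minimal sketch. All IDs, labels, and the field subset shown here are hypothetical; templates/state-json-schema.md remains authoritative.

```json
{
  "observables": [{ "id": "obs-1", "kind": "scalar", "label": "critical exponent" }],
  "claims": [{
    "id": "clm-1",
    "observables": ["obs-1"],
    "deliverables": ["del-1"],
    "acceptance_tests": ["at-1"],
    "references": ["ref-1"]
  }],
  "deliverables": [{ "id": "del-1", "kind": "figure" }],
  "acceptance_tests": [{
    "id": "at-1", "subject": "clm-1", "kind": "benchmark",
    "automation": "automated", "evidence_required": ["del-1"]
  }],
  "references": [{
    "id": "ref-1", "kind": "paper", "role": "benchmark",
    "applies_to": ["clm-1"], "carry_forward_to": ["verification"]
  }],
  "forbidden_proxies": [{ "id": "fp-1", "subject": "clm-1" }],
  "links": [{ "relation": "benchmarks", "source": "ref-1", "target": "clm-1" }],
  "context_intake": { "must_read_refs": ["ref-1"] }
}
```

Note that every cross-reference (acceptance_tests[].subject, links[].source, context_intake.must_read_refs, etc.) points at a declared id, never at free text.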
After approval, validate the contract before persisting it:
printf '%s\n' "$PROJECT_CONTRACT_JSON" | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw validate project-contract -
If validation fails, show the errors, revise the scoping contract, and do NOT continue to downstream artifact generation.
After validation passes, persist the approved contract into .gpd/state.json from the same stdin payload:
printf '%s\n' "$PROJECT_CONTRACT_JSON" | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local state set-project-contract -
Do not write /tmp intermediates for the approved contract. Prefer piping the exact approved JSON directly to gpd ... -. Only write a file if the user explicitly wants a durable saved copy, and if so place it under the project, not an OS temp directory.
Populate .gpd/PROJECT.md using the template from templates/project.md.
Fill in what was extracted. For sections without enough information, use sensible placeholder text that signals incompleteness:
# [Extracted Research Title]
## What This Is
[Extracted research description — keep it concise, 2-3 sentences from the input]
## Core Research Question
[Extracted research question]
## Scoping Contract Summary
### Contract Coverage
- [Claim / deliverable]: [What counts as success]
- [Acceptance signal]: [Benchmark match, proof obligation, figure, dataset, or note]
- [False progress to reject]: [Proxy that must not count]
### User Guidance To Preserve
- **User-stated observables:** [Specific quantity, curve, figure, or smoking-gun signal]
- **User-stated deliverables:** [Specific table, plot, derivation, dataset, note, or code output]
- **Must-have references / prior outputs:** [Paper, notebook, run, figure, or benchmark that must remain visible]
- **Stop / rethink conditions:** [When to pause, ask again, or re-scope before continuing]
### Scope Boundaries
**In scope**
- [Approved in-scope item]
**Out of scope**
- [Approved out-of-scope item]
### Active Anchor Registry
- [Anchor ID or short label]: [Paper, dataset, spec, benchmark, or prior artifact]
- Why it matters: [What it constrains]
- Carry forward: [planning | execution | verification | writing]
- Required action: [read | use | compare | cite | avoid]
### Carry-Forward Inputs
- [Prior output, notebook, figure, baseline, or "None confirmed yet"]
### Skeptical Review
- **Weakest anchor:** [Least-certain assumption, reference, or prior result]
- **Disconfirming observation:** [What would make the framing look wrong]
- **False progress to reject:** [What might look promising but should not count as success]
### Open Contract Questions
- [Unresolved question or context gap]
## Physics Subfield
[Inferred from input, e.g., "Condensed matter — phase transitions"]
## Mathematical Framework
[Inferred from input, or "To be determined during Phase 1"]
## Notation Conventions
To be established during initial phases.
## Unit System
[Inferred from input, or "Natural units (hbar = c = 1)"]
## Computational Tools
[Extracted from input, or "To be determined"]
## Requirements
### Validated
(None yet — derive and validate to confirm)
### Active
- [ ] [One requirement per extracted phase goal]
### Out of Scope
(To be refined as project progresses)
## Key References
[Approved must-read references, benchmarks, or "None confirmed yet"]
## Target Publication
(To be determined)
## Constraints
(None specified)
## Key Decisions
| Decision | Rationale | Outcome |
| ------------------------------------------- | ---------------------- | --------- |
| Minimal initialization — defer deep scoping | Fast project bootstrap | — Pending |
---
_Last updated: [today's date] after initialization (minimal)_
Auto-generate REQ-IDs from the phase goals or major work chunks extracted in M1.
For each phase, create one or more requirements using the standard format:
# Research Requirements
## Current Requirements
### Phase-Derived Requirements
[For each confirmed phase or work chunk, generate requirements with REQ-IDs:]
- [ ] **REQ-01**: [Goal of phase 1, made specific and testable]
- [ ] **REQ-02**: [Goal of phase 2, made specific and testable]
- [ ] **REQ-03**: [Goal of phase 3, made specific and testable]
[... one per confirmed phase or work chunk minimum ...]
## Future Work
(To be identified as project progresses)
## Out of Scope
(To be refined — use $gpd-settings or edit REQUIREMENTS.md directly)
## Traceability
| REQ-ID | Phase | Status |
| ------ | ----- | ------- |
| REQ-01 | 1 | Planned |
| REQ-02 | 2 | Planned |
| REQ-03 | 3 | Planned |
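The REQ-ID generation described above can be sketched as a simple numbering pass. The two phase-goal strings are hypothetical placeholders standing in for whatever was extracted in M1.

```shell
# Sketch: derive zero-padded REQ-IDs, one per extracted phase goal.
REQS=$(
  i=0
  for goal in "Set up the Monte Carlo simulation" "Run the finite-size scaling analysis"; do
    i=$((i + 1))
    # %02d keeps IDs sortable: REQ-01, REQ-02, ...
    printf -- "- [ ] **REQ-%02d**: %s\n" "$i" "$goal"
  done
)
printf '%s\n' "$REQS"
```

The same counter can drive the Traceability table rows, keeping REQ-ID and phase number in lockstep.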
Create .gpd/ROADMAP.md directly from the phase descriptions or inferred work chunks (no roadmapper agent).
Use the coarsest decomposition the approved contract actually supports. If the input only supports one grounded stage so far, create a one-phase roadmap and carry later decomposition as an open question instead of inventing filler phases.
Use the standard roadmap template structure:
# Roadmap: [Research Project Title]
## Overview
[One paragraph synthesized from the research description]
## Phases
- [ ] **Phase 1: [Phase name]** - [One-line description]
- [ ] **Phase 2: [Phase name]** - [One-line description]
- [ ] **Phase 3: [Phase name]** - [One-line description]
[... from extracted or inferred stages/work chunks ...]
## Phase Details
### Phase 1: [Phase name]
**Goal:** [Extracted from input]
**Depends on:** Nothing (first phase)
**Requirements:** [REQ-01]
**Success Criteria** (what must be TRUE):
1. [Derived from phase goal — one concrete observable outcome]
2. [Second criterion if obvious from context]
Plans:
- [ ] 01-01: [TBD — created during $gpd-plan-phase]
[... repeat for each phase ...]
## Progress
| Phase | Plans Complete | Status | Completed |
| --------- | -------------- | ----------- | --------- |
| 1. [Name] | 0/TBD | Not started | - |
| 2. [Name] | 0/TBD | Not started | - |
| 3. [Name] | 0/TBD | Not started | - |
STATE.md — Initialize using the standard template:
# Research State
## Project Reference
See: .gpd/PROJECT.md (updated [today's date])
**Core research question:** [From PROJECT.md]
**Current focus:** Phase 1 — [Phase 1 name]
## Current Position
**Current Phase:** 1
**Current Phase Name:** [Phase 1 name]
**Total Phases:** [N]
**Current Plan:** 0
**Total Plans in Phase:** 0
**Status:** Ready to plan
**Last Activity:** [today's date]
**Last Activity Description:** Project initialized (minimal)
**Progress:** [░░░░░░░░░░] 0%
## Active Calculations
None yet.
## Intermediate Results
None yet.
## Open Questions
[Populate from approved scoping-contract unresolved questions. If none, say "None yet."]
## Performance Metrics
| Label | Duration | Tasks | Files |
| ----- | -------- | ----- | ----- |
| - | - | - | - |
## Accumulated Context
### Decisions
- [Phase 1]: Minimal mode — scoping contract approved before phase planning
### Active Approximations
None yet.
### Pending Todos
None yet.
### Blockers/Concerns
None yet.
## Session Continuity
**Last session:** [today's date]
**Stopped at:** Project initialized (minimal)
**Resume file:** —
config.json — Create with sensible defaults (no config questions asked):
{
"autonomy": "balanced",
"research_mode": "balanced",
"execution": {
"review_cadence": "adaptive"
},
"parallelization": true,
"commit_docs": true,
"model_profile": "review",
"workflow": {
"research": true,
"plan_checker": true,
"verifier": true
}
}
Create the directory structure and commit everything in a single commit:
mkdir -p .gpd
PRE_CHECK=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local pre-commit-check --files .gpd/PROJECT.md .gpd/REQUIREMENTS.md .gpd/ROADMAP.md .gpd/STATE.md .gpd/state.json .gpd/config.json 2>&1) || true
echo "$PRE_CHECK"
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local commit "docs: initialize research project (minimal)" --files .gpd/PROJECT.md .gpd/REQUIREMENTS.md .gpd/ROADMAP.md .gpd/STATE.md .gpd/state.json .gpd/config.json
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GPD >>> RESEARCH PROJECT INITIALIZED (MINIMAL)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**[Research Project Name]**
| Artifact | Location |
|--------------|-----------------------------|
| Project | `.gpd/PROJECT.md` |
| Config | `.gpd/config.json` |
| Requirements | `.gpd/REQUIREMENTS.md` |
| Roadmap | `.gpd/ROADMAP.md` |
| State | `.gpd/STATE.md` |
**[N] phases** | **[N] requirements** | Ready to investigate
Note: Initialized with --minimal. Literature survey and deep scoping
were skipped. Use $gpd-settings to adjust workflow preferences.
---------------------------------------------------------------
## >> Next Up
**Phase 1: [Phase Name]** — [Goal from ROADMAP.md]
Ask the user once using a single compact prompt block:
If "Plan phase 1": Tell the user to run $gpd-plan-phase 1 (and suggest /clear first for a fresh context window).
If "Review artifacts first": List the files and let the user inspect them. Suggest edits if needed, then re-offer planning.
If "Done for now": Exit. Remind them to use $gpd-resume-work or $gpd-plan-phase 1 when ready.
End of Minimal Initialization Path. The standard flow (Steps 2-9) is not executed when --minimal is active.
</minimal_mode>
MANDATORY FIRST STEP — Execute these checks before ANY user interaction:
INIT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local init new-project)
if [ $? -ne 0 ]; then
echo "ERROR: gpd initialization failed: $INIT"
# STOP — display the error to the user and do not proceed with the workflow.
fi
Parse JSON for: researcher_model, synthesizer_model, roadmapper_model, commit_docs, autonomy, research_mode, project_exists, has_research_map, planning_exists, has_research_files, has_project_manifest, has_existing_project, needs_research_map, has_git.
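A minimal sketch of that parsing step, using the same `python3` one-liner style the rest of this workflow uses. The `$INIT` payload shown is a hypothetical illustration of the shape; the real value comes from the init command above and carries the full field list.

```shell
# Hypothetical init payload (the real one comes from gpd init new-project).
INIT='{"project_exists": false, "has_git": true, "autonomy": "balanced"}'

# Pull individual fields out of the JSON; missing keys print "None".
PROJECT_EXISTS=$(printf '%s' "$INIT" | python3 -c "import sys, json; print(json.load(sys.stdin).get('project_exists'))")
HAS_GIT=$(printf '%s' "$INIT" | python3 -c "import sys, json; print(json.load(sys.stdin).get('has_git'))")
echo "project_exists=$PROJECT_EXISTS has_git=$HAS_GIT"
```

The remaining fields (researcher_model, needs_research_map, and so on) can be extracted the same way before branching on them below.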
Mode-aware behavior:
- autonomy=supervised: Pause for user confirmation after each major step (questioning, scoping contract, research, roadmap). Show summaries and wait for approval before proceeding.
- autonomy=balanced (default): Execute the full pipeline automatically. Pause only if research results are ambiguous, the roadmap has gaps, or scope-setting decisions need user judgment. The initial scoping contract is always a user-judgment checkpoint.
- autonomy=yolo: Execute the full pipeline, skip the optional literature survey, auto-approve the roadmap. Do NOT skip the initial scoping-contract approval gate. Do NOT skip the requirement to show contract coverage in the roadmap.
- --auto changes how intake happens, not who owns later review gates. If autonomy=supervised, keep the roadmap approval checkpoint even in auto mode.
- research_mode=explore: Expand the literature survey (spawn 5+ researchers), broaden questioning, include speculative research directions in the roadmap.
- research_mode=exploit: Focused literature survey (2-3 researchers), targeted questioning, lean roadmap with minimal exploratory phases.
- research_mode=adaptive: Start broad enough to compare viable approaches while scoping the project. Narrow the roadmap only after anchors or decisive evidence make one method family clearly preferable.
- If no .gpd/config.json exists yet, the autonomy and research_mode values from gpd init new-project are temporary defaults, not a durable user choice. Let those defaults govern the initial questioning and scoping pass, then run Step 5 immediately after scope approval and before the first project-artifact commit so the durable config takes over before research and roadmap execution.

If project_exists is true: Error — project already initialized. Use $gpd-progress.
If has_git is false: Initialize git:
git init
Check for previous initialization attempt:
if [ -f .gpd/init-progress.json ]; then
# Guard against corrupted JSON (e.g., from interrupted write)
PREV_STEP=""
PREV_DESC=""
INIT_PROGRESS_RAW=$(cat .gpd/init-progress.json 2>/dev/null || echo "")
if [ -n "$INIT_PROGRESS_RAW" ]; then
PREV_STEP=$(echo "$INIT_PROGRESS_RAW" | python3 -c "import sys,json; d=json.loads(sys.stdin.read()); print(d.get('step',''))" 2>/dev/null)
PREV_DESC=$(echo "$INIT_PROGRESS_RAW" | python3 -c "import sys,json; d=json.loads(sys.stdin.read()); print(d.get('description',''))" 2>/dev/null)
fi
# If JSON was corrupted (empty step), treat as fresh start
if [ -z "$PREV_STEP" ]; then
echo "WARNING: init-progress.json exists but is corrupted or empty. Starting fresh."
rm -f .gpd/init-progress.json
fi
fi
If init-progress.json exists and is valid, offer to resume:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GPD > PREVIOUS INITIALIZATION DETECTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Completed through step {PREV_STEP}: {PREV_DESC}
──────────────────────────────────────────────────────
Options:
1. "Resume from step {PREV_STEP + 1}" -- continue where you left off
2. "Start fresh" -- re-run from the beginning
──────────────────────────────────────────────────────
If resume: skip to the step after PREV_STEP (check which artifacts already exist on disk to confirm).
If start fresh: delete init-progress.json and proceed normally.
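The "check which artifacts already exist on disk" confirmation can be sketched as a simple scan. The artifact paths are the ones this workflow creates; which of them should exist depends on the step being resumed.

```shell
# Sketch: list which init artifacts are already on disk before resuming.
present=""; missing=""
for f in .gpd/PROJECT.md .gpd/REQUIREMENTS.md .gpd/ROADMAP.md .gpd/STATE.md .gpd/config.json; do
  if [ -f "$f" ]; then present="$present $f"; else missing="$missing $f"; fi
done
echo "present:$present"
echo "missing:$missing"
```

If an artifact that PREV_STEP should have produced is missing, prefer re-running that step over trusting the checkpoint.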
If auto mode: Do not offer the full mapping flow by default, but do NOT assume a fresh project if prior artifacts exist. If needs_research_map is true or existing artifacts are detected, ask one lightweight routing question:
If no prior artifacts are detected, continue directly to Step 3 / Step 4 as appropriate.
If needs_research_map is true (from init — existing research artifacts detected but no research map):
Ask the user once using a single compact prompt block:
If "Map existing work first":
Run `$gpd-map-research` first, then return to `$gpd-new-project`
Exit command.
If "Skip mapping" OR needs_research_map is false: Continue to Step 3.
If auto mode: Skip. Extract research context from provided document instead and proceed to Step 4.
Display stage banner:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GPD >>> RESEARCH QUESTION FORMULATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Open the conversation:
Ask one inline freeform question with no preamble or restatement:
"What physics problem do you want to investigate?"
Wait for their response. This gives you the context needed to ask intelligent follow-up questions.
Follow the thread:
Based on what they said, ask follow-up questions that dig into their response. Ask the user once using a single compact prompt block with options that probe what they mentioned — interpretations, clarifications, concrete examples.
Keep following threads. Each answer opens new threads to explore. Ask about:
If the user names a specific observable, deliverable, anchor paper, benchmark, figure, notebook, or prior result, reflect it back using recognizable wording and treat it as binding context unless the user later revises it.
Consult ./.codex/get-physics-done/references/research/questioning.md for techniques:
Check context (background, not out loud):
As you go, mentally check the context checklist from ./.codex/get-physics-done/references/research/questioning.md. If gaps remain, weave questions naturally. Don't suddenly switch to checklist mode.
Context to gather:
Decision gate:
When you could write a clear scoping contract, use ask_user:
If "Keep exploring" — ask what they want to add, or identify gaps and probe naturally.
Avoid rigid turn-counting. After several substantive exchanges, if you can state the core question, one decisive output or deliverable, and at least one anchor (or an explicit "anchor unknown" note), offer to proceed. If those blocking fields are still missing after roughly 6 follow-ups, summarize what is missing and ask whether to keep exploring or proceed with explicit open questions. A full phase breakdown is not required at this stage; if only the first grounded investigation chunk is clear, say so and carry later decomposition as an open question. Do not force closure just because a counter was hit, and do not imply certainty where there is still ambiguity. If you only have limiting cases, sanity checks, or generic benchmark language with no decisive smoking-gun observable, curve, or benchmark reproduction, keep exploring unless the user explicitly says that is the decisive standard.
If auto mode: Synthesize the scoping contract from the provided document, ask at most one repair prompt for blocking gaps, and require one explicit scope approval before continuing.
Before writing PROJECT.md, synthesize a canonical project contract with at least these elements:
- scope.question
- scope.in_scope
- scope.out_of_scope
- scope.unresolved_questions
- context_intake.must_read_refs
- context_intake.must_include_prior_outputs
- context_intake.user_asserted_anchors
- context_intake.known_good_baselines
- context_intake.context_gaps
- context_intake.crucial_inputs for user-stated observables, deliverables, stop conditions, or anything the user said must stay visible
- observables for any user-named decisive quantity, signal, or behavior, especially the first smoking-gun check they would trust over softer proxies or limiting cases
- uncertainty_markers.weakest_anchors
- uncertainty_markers.unvalidated_assumptions
- uncertainty_markers.competing_explanations
- uncertainty_markers.disconfirming_observations

If no must-read references are confirmed yet, record that explicitly in the contract rather than inventing one.
If the user does not know the anchor yet, record that explicitly as an unresolved question or context gap rather than fabricating a paper, dataset, benchmark, or baseline.
If the user supplied explicit observables, deliverables, prior outputs, or stop conditions, preserve them in the contract using wording the user would still recognize. Do not paraphrase them into generic "benchmark" or "artifact" language unless the user asked you to broaden them.
If the user named a prior output, review checkpoint, or "come back to me before continuing" condition, carry it into context_intake.must_include_prior_outputs or context_intake.crucial_inputs rather than leaving it only in prose.
Do not approve a scoping contract that strips decisive outputs, anchors, prior outputs, or review/stop triggers down to generic placeholders. The approved contract must preserve the user guidance that downstream planning needs.
If the only checks captured so far are limiting cases, sanity checks, or qualitative expectations, treat the contract as still underspecified unless the user explicitly states that these are the decisive standard.
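When the anchor is still unknown, a fragment like the following records the gap without fabricating a reference. The wording is drawn from the missing-anchor guidance above; the field values are placeholders.

```json
{
  "scope": {
    "unresolved_questions": ["Which reference should serve as the decisive benchmark anchor?"]
  },
  "context_intake": {
    "context_gaps": ["Benchmark reference not yet selected"]
  }
}
```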
Before you ask for approval, build the raw contract as a literal JSON object that follows templates/state-json-schema.md exactly:
- project_contract is a JSON object, not prose
- observables, claims, deliverables, acceptance_tests, references, forbidden_proxies, and links are arrays of objects, not strings
- every entry declares a stable id
- context_intake.must_read_refs must contain only references[].id values
- claims[].observables, claims[].deliverables, claims[].acceptance_tests, and claims[].references must point only to declared IDs
- acceptance_tests[].subject, references[].applies_to, and forbidden_proxies[].subject must point to a claim ID or deliverable ID, never an observable label or free text
- acceptance_tests[].evidence_required, links[].source, and links[].target may only point to declared claim, deliverable, acceptance-test, or reference IDs

Present a concise scoping summary and require explicit approval before downstream artifact generation:
Validate the approved contract before persisting it:
printf '%s\n' "$PROJECT_CONTRACT_JSON" | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local --raw validate project-contract -
If validation fails, show the errors, revise the scoping contract, and do NOT continue.
Persist the approved contract into .gpd/state.json from the same stdin payload:
printf '%s\n' "$PROJECT_CONTRACT_JSON" | /home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local state set-project-contract -
Do not write /tmp intermediates for the approved contract. Prefer piping the exact approved JSON directly to gpd ... -. Only write a file if the user explicitly wants a durable saved copy, and if so place it under the project, not an OS temp directory.
If .gpd/config.json does not exist yet, run Step 5 now before generating or committing PROJECT.md. This keeps the opening focused on the physics question while still letting commit_docs and other durable workflow settings apply before the first project-artifact commit. After Step 5 completes, return here and continue.
Then synthesize all context into .gpd/PROJECT.md using the template from templates/project.md.
For fresh research projects:
Initialize research questions as hypotheses:
## Research Questions
### Answered
(None yet — investigate to answer)
### Active
- [ ] [Research question 1]
- [ ] [Research question 2]
- [ ] [Research question 3]
### Out of Scope
- [Question 1] — [why: e.g., requires experiment, different subfield]
- [Question 2] — [why]
For continuation projects (existing work map exists):
Infer answered questions from existing work:
- Read .gpd/research-map/ARCHITECTURE.md and FORMALISM.md

## Research Questions
### Answered
- [x] [Existing result 1] — established
- [x] [Existing result 2] — established
- [x] [Existing result 3] — established
### Active
- [ ] [New question 1]
- [ ] [New question 2]
### Out of Scope
- [Question 1] — [why]
Scoping Contract Summary:
Ensure PROJECT.md visibly summarizes the approved contract, including:
## Scoping Contract Summary
### Contract Coverage
- [Claim / deliverable]: [What counts as success]
- [Acceptance signal]: [Benchmark match, proof obligation, figure, dataset, or note]
- [False progress to reject]: [Proxy that must not count]
### Scope Boundaries
**In scope**
- [Approved in-scope item]
**Out of scope**
- [Approved out-of-scope item]
### Active Anchor Registry
- [Anchor ID or short label]: [Paper, dataset, spec, benchmark, or prior artifact]
- Why it matters: [What it constrains]
- Carry forward: [planning | execution | verification | writing]
- Required action: [read | use | compare | cite | avoid]
### Carry-Forward Inputs
- [Prior output, notebook, figure, baseline, or "None confirmed yet"]
### Skeptical Review
- **Weakest anchor:** [Least-certain assumption, reference, or prior result]
- **Unvalidated assumptions:** [What is currently assumed rather than checked]
- **Competing explanation:** [Alternative story that could also fit]
- **Disconfirming observation:** [What would make the framing look wrong]
- **False progress to reject:** [What might look promising but should not count as success]
### Open Contract Questions
- [Unresolved question or context gap]
Key Decisions:
Initialize with any decisions made during questioning:
## Key Decisions
| Decision | Rationale | Outcome |
| ------------------------- | --------- | --------- |
| [Choice from questioning] | [Why] | — Pending |
Research Context:
## Research Context
### Physical System
[Description of the system under study]
### Theoretical Framework
[QFT / condensed matter / GR / statistical mechanics / etc.]
### Key Parameters and Scales
| Parameter | Symbol | Regime | Notes |
| --------- | ------ | ------- | ------- |
| [param 1] | [sym] | [range] | [notes] |
### Known Results
- [Prior work 1] — [reference]
- [Prior work 2] — [reference]
### What Is New
[What this project contributes beyond existing work]
### Target Venue
[Journal or conference, with rationale]
### Computational Environment
[Available resources: local workstation, cluster, cloud, specific codes]
Last updated footer:
---
_Last updated: [date] after initialization_
Do not compress. Capture everything gathered.
Commit PROJECT.md:
mkdir -p .gpd
PRE_CHECK=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local pre-commit-check --files .gpd/PROJECT.md .gpd/state.json 2>&1) || true
echo "$PRE_CHECK"
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local commit "docs: initialize research project" --files .gpd/PROJECT.md .gpd/state.json
Checkpoint step 4:
cat > .gpd/init-progress.json << CHECKPOINT
{"step": 4, "completed_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)", "description": "Approved project contract and PROJECT.md created and committed"}
CHECKPOINT
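The corruption guard in the resume check exists because an interrupted write can leave init-progress.json as truncated JSON. An atomic variant of the checkpoint write (same payload, tmp-file-then-rename pattern) avoids that failure mode; this is a sketch, not a mandated replacement for the heredoc above.

```shell
# Sketch: write the checkpoint to a tmp file, then rename. rename is atomic on
# POSIX filesystems, so readers never see a half-written init-progress.json.
mkdir -p .gpd
printf '{"step": 4, "completed_at": "%s", "description": "Approved project contract and PROJECT.md created and committed"}\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > .gpd/init-progress.json.tmp
mv .gpd/init-progress.json.tmp .gpd/init-progress.json
```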
Quick setup gate — offer recommended defaults before individual questions:
Run this step after scope approval and before the first project-artifact commit whenever .gpd/config.json does not exist yet.
Ask the user once using a single compact prompt block:
- Create .gpd/config.json? Recommended defaults set autonomy=balanced, research_mode=balanced, parallelization=true, commit_docs=true, model_profile=review, and enable workflow.research, workflow.plan_checker, and workflow.verifier.
- "Use recommended defaults" -- accept all of the above.
- "Customize settings" -- set autonomy, research_mode, parallelization, commit_docs, workflow agents, and model_profile individually.

If "Use recommended defaults": Skip all 8 config questions below. Create config.json directly with:
{
  "autonomy": "balanced",
  "research_mode": "balanced",
  "execution": {
    "review_cadence": "adaptive"
  },
  "parallelization": true,
  "commit_docs": true,
  "model_profile": "review",
  "workflow": {
    "research": true,
    "plan_checker": true,
    "verifier": true
  }
}
Display confirmation:
Config: Balanced autonomy | Adaptive review cadence | Balanced research mode | Parallel | All agents | Review profile
(Change anytime with $gpd-settings)
Skip to "Commit config.json" below.
If "Customize settings": Proceed through Round 1 and Round 2 below.
Round 1 — Core workflow settings (4 questions):