Surface the AI's assumptions about a phase approach before planning
Codex shell compatibility:
Assumes `gpd` is on PATH; otherwise prefix with the runtime override: `GPD_ACTIVE_RUNTIME=codex uv run gpd ...`
</codex_runtime_notes>
Purpose: Help users see what the AI thinks BEFORE planning begins -- enabling early course correction when assumptions are wrong.
Output: Conversational only (no file creation) -- ends with a "What do you think?" prompt.
Assumption categories for physics research:
Physical assumptions -- approximations taken for granted
Mathematical assumptions -- properties assumed about the formalism
Computational assumptions -- believed-sufficient numerics
Methodological assumptions -- approach and ordering
Scope assumptions -- what is in and out
Anchor assumptions -- what trusted references, baselines, prior outputs, or benchmarks the AI assumes constrain the phase
Skeptical assumptions -- what looks weakest, what could falsify the current framing early, and what might be false progress
</objective>
<execution_context>
Key difference from discuss-phase: This is ANALYSIS of what the AI thinks, not INTAKE of what the user knows. No file output -- purely conversational, to prompt discussion. </purpose>
If argument missing:
Error: Phase number required.
Usage: $gpd-list-phase-assumptions [phase-number]
Example: $gpd-list-phase-assumptions 3
Exit workflow.
If argument provided: Validate phase exists in roadmap:
grep -i "Phase ${PHASE}" .gpd/ROADMAP.md
If phase not found:
Error: Phase ${PHASE} not found in roadmap.
Available phases:
[list phases from roadmap]
Exit workflow.
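The two guard branches above (missing argument, phase not found) could be sketched as a small shell helper. The `validate_phase` name and the `ROADMAP` override are hypothetical, and the grep patterns assume roadmap lines contain the literal text `Phase N`:

```shell
# Hypothetical helper combining both guard branches described above.
# Assumes roadmap lines contain the literal text "Phase N".
validate_phase() {
  PHASE="$1"
  ROADMAP="${ROADMAP:-.gpd/ROADMAP.md}"
  if [ -z "$PHASE" ]; then
    echo "Error: Phase number required."
    echo 'Usage: $gpd-list-phase-assumptions [phase-number]'
    echo 'Example: $gpd-list-phase-assumptions 3'
    return 1
  fi
  if ! grep -qi "Phase ${PHASE}" "$ROADMAP"; then
    echo "Error: Phase ${PHASE} not found in roadmap."
    echo "Available phases:"
    grep -i "Phase" "$ROADMAP"
    return 1
  fi
}
```

On success the helper returns 0 and the workflow continues; either error branch prints the message and returns nonzero, mirroring "Exit workflow."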
If phase found: Parse phase details from roadmap:
Continue to analyze_phase. </step>
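The "parse phase details" step could look like the awk sketch below; the `## Phase N:` heading format and the `phase_section` helper name are assumptions, not confirmed by this document:

```shell
# Print the section of ROADMAP.md belonging to one phase:
# from its "## Phase N:" heading up to (not including) the next phase heading.
phase_section() {
  awk -v p="## Phase $1:" '
    index($0, p) == 1                 { found = 1 }  # section starts here
    /^## Phase / && index($0, p) != 1 { found = 0 }  # next phase ends it
    found                             { print }
  ' "${ROADMAP:-.gpd/ROADMAP.md}"
}
```

Adjust the patterns to the real ROADMAP.md layout if its phase headings differ.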
1. Physical Assumptions: What physical regime, symmetries, and conservation laws does the AI assume apply?
2. Mathematical Framework: What mathematical structures, equations, and solution methods does the AI assume?
3. Approximation Scheme: What approximations does the AI plan to make, and what are their regimes of validity?
4. Computational Approach: What numerical methods, algorithms, and tools does the AI assume?
5. Scope Boundaries: What's included vs excluded in the AI's interpretation?
6. Anchor Inputs: What trusted references, prior outputs, baselines, or benchmarks does the AI assume constrain the phase?
7. Expected Results: What does the AI expect the answer to look like?
8. Dependencies and Prerequisites: What does the AI assume exists or needs to be in place?
9. User Guidance I Am Treating As Binding: What explicit user requests does the AI think must survive into planning and execution?
Be honest about uncertainty. Mark each assumption with a confidence level:
## My Assumptions for Phase ${PHASE}: ${PHASE_NAME}
### Physical Assumptions
[List assumptions about the physics: regime, symmetries, conservation laws, negligible interactions]
For each: state the assumption, why it seems reasonable, and what would change if it's wrong
### Mathematical Framework
[List assumptions about formalism, equations, representations, and mathematical properties]
For each: state why this framework is appropriate and what alternatives exist
### Approximation Scheme
[List planned approximations with stated regimes of validity]
For each: state the controlling parameter, when it breaks down, and the expected error
### Computational Approach
[List assumed algorithms, packages, and numerical parameters]
For each: state why this method suits the problem and what its limitations are
### Scope Boundaries
**In scope:** [what's included]
**Out of scope:** [what's excluded]
**Ambiguous:** [what could go either way]
### Anchor Inputs
[List assumed references, baselines, prior outputs, and benchmark anchors]
For each: state why it matters and how confident you are that it should constrain the phase
### Expected Results
**Scaling:** [expected functional forms]
**Limiting cases:** [what known results must be recovered]
**Order of magnitude:** [rough estimates with reasoning]
**Consistency checks:** [sum rules, Ward identities, conservation laws to verify]
### Skeptical Review
**Weakest anchor / assumption:** [what feels least grounded]
**Fast falsifier:** [what would most quickly prove the framing wrong]
**False progress:** [what might look promising but would not count as success]
### Dependencies
**From prior phases:** [what's needed]
**External:** [packages, data, known results]
**Feeds into:** [what future phases need from this]
### User Guidance I Am Treating As Binding
[List the observables, deliverables, prior outputs, required references, and stop conditions I believe the user explicitly cares about]
For each: state why I think it is binding and where I might be paraphrasing too loosely
---
**What do you think?**
Probe these assumptions critically:
- Which physical assumptions are you least confident about?
- Are the approximation regimes appropriate for your parameter values?
- Do the expected results match your intuition?
- Am I missing any important limiting cases?
- Which anchor or prior output feels weakest?
- What should we check early so a wrong framing does not survive too long?
- What result would look like progress here but should not count as success?
- Which of your explicit requests am I at risk of generalizing away or weakening?
- Did I miss any required reference, prior output, decisive observable, or stop condition?
Wait for user response. </step>
Acknowledge the corrections and probe deeper:
Key corrections:
- [correction 1] -- this changes [what it impacts]
- [correction 2] -- this means [revised understanding]
This significantly affects the approach. In particular:
- [Assumption X] is now [revised]
- [Method Y] may need to be replaced by [alternative]
- [Expected result Z] should instead look like [revised expectation]
Does this revised understanding match your picture?
If user identifies risky assumptions:
You've flagged [assumption] as risky. Let me think about what happens if it fails:
- If [assumption] doesn't hold, then [consequence]
- We could hedge by [alternative approach or verification strategy]
- The plan should include a checkpoint to verify this early
Should I note this as a critical validation point?
If user confirms assumptions:
Assumptions validated. Key physics confirmed:
- [most important confirmed assumption]
- [second most important]
Continue to offer_next. </step>
What's next?
1. Discuss methodology ($gpd-discuss-phase ${PHASE}) - Socratic dialogue to make methodological decisions
2. Plan this phase ($gpd-plan-phase ${PHASE}) - Create detailed research execution plans
3. Re-examine assumptions - I'll analyze again with your corrections
4. Done for now
Wait for user selection.
If "Discuss methodology": Note that CONTEXT.md will incorporate any corrections discussed here.
If "Plan this phase": Proceed knowing assumptions are understood.
If "Re-examine": Return to analyze_phase with updated understanding. </step>
</execution_context>
Load project state first: @.gpd/STATE.md
Load roadmap: @.gpd/ROADMAP.md </context>
<success_criteria>