Write agency-specific proposal sections with atomic per-section checkpointing, mm-ask multi-model fact verification, and human review gates.
You are writing the core sections of a grant proposal according to agency-specific templates, word limits, and formatting requirements. Each section is written with full context from prior pipeline phases, checkpointed atomically, fact-checked with mm-ask (when provider CLIs are available), and reviewed by the PI before finalization.
--proposal-dir <path>: Proposal directory (required)
--section <name>: Write a specific section only (optional — writes all sections if omitted)
--no-multi-model: Skip external multi-model verification even if provider CLIs are installed
--multi-draft: Generate N variants per section in parallel and pick the best (requires multi_draft.enabled: true in config or explicit flag)

Parse these from the user's message.
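A minimal parsing sketch. The flags actually arrive inside the user's natural-language message, and the variable names here ($proposal_dir, $section, $no_multi_model, $multi_draft) are assumptions, not part of the gw toolchain:

```shell
# Illustrative flag parser; the defaults mirror the flag descriptions above.
parse_flags() {
  proposal_dir=""; section=""; no_multi_model=0; multi_draft=0
  while [ $# -gt 0 ]; do
    case "$1" in
      --proposal-dir)   proposal_dir="$2"; shift 2 ;;
      --section)        section="$2"; shift 2 ;;
      --no-multi-model) no_multi_model=1; shift ;;
      --multi-draft)    multi_draft=1; shift ;;
      *) echo "unknown flag: $1" >&2; shift ;;
    esac
  done
  [ -n "$proposal_dir" ] || { echo "--proposal-dir is required" >&2; return 1; }
  echo "dir=$proposal_dir section=${section:-ALL} no_mm=$no_multi_model multi=$multi_draft"
}

parse_flags --proposal-dir "/grants/My Proposal" --no-multi-model
```

Quoting "$2" keeps paths with spaces intact, matching the warning below about quoting proposal paths.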
Read from the proposal directory (quote paths — they may contain spaces):
PROP="$proposal_dir"
Load:
"$PROP/config.yaml" — agency, mechanism, language, proposal metadata"$PROP/foa_requirements.json" — section requirements and word limits"$PROP/sections/objectives.md" — research objectives from /gw:aims"$PROP/landscape/competitive_brief.md" — competitive positioning"$PROP/landscape/literature.md" — literature context and gap analysis"$PROP/sections/preliminary_data.md" — PI's evidence"$PROP/sections/bibliography.md" — citation listLoad agency-specific section order:
uv run gw-agency sections "$agency" "$mechanism"
Read citation style from the agency manifest:
"numbered": Use [1], [2] inline citations"author_year": Use (Author et al., Year) inline citationsMM_AVAILABLE=0
if uv run mm-detect --json 2>/dev/null | python3 -c "
import json, sys
d = json.load(sys.stdin)
sys.exit(0 if any(v.get('installed') for v in d.get('providers', {}).values()) else 1)
"; then
MM_AVAILABLE=1
fi
When --no-multi-model is passed, force MM_AVAILABLE=0.
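A sketch of that override, assuming the flag was captured in a $no_multi_model variable during argument parsing (both variable names and demo values here are illustrative):

```shell
no_multi_model=1   # demo value; in the pipeline this comes from the parsed flags
MM_AVAILABLE=1     # demo value; in the pipeline this comes from mm-detect

# --no-multi-model wins over detection: verification is skipped entirely.
if [ "$no_multi_model" = "1" ]; then
  MM_AVAILABLE=0
fi
echo "MM_AVAILABLE=$MM_AVAILABLE"
```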
The project summary (abstract) is written first as a standalone document with its own word limits. It provides the framing for all subsequent sections.
Before drafting, check whether this section is already complete:
uv run gw-state checkpoint-status "$PROP" proposal_writing project_summary
If next_step is "complete", skip the draft. Otherwise proceed.
Draft sections/project_summary.md:
Save the draft checkpoint:
cat "$PROP/sections/project_summary.md" | uv run gw-state save-checkpoint "$PROP" proposal_writing project_summary draft
Human checkpoint: Present the project summary for PI review before continuing.
Mark the phase in_progress exactly once before entering the loop:
uv run gw-state update "$PROP" --phase proposal_writing --status in_progress
For each section in the agency's section order:
STATUS=$(uv run gw-state checkpoint-status "$PROP" proposal_writing "$section")
NEXT=$(echo "$STATUS" | python3 -c "import json,sys; print(json.load(sys.stdin)['next_step'])")
if [ "$NEXT" = "complete" ]; then
echo "Section $section already done — skipping"
continue
fi
uv run gw-agency info "$agency"
Read the corresponding section template from the printed template directory.
Write the section respecting:
language: ro → Romanian headers with English scientific content

Grant writing is coherence-sensitive — Claude does the writing directly. Do NOT delegate prose generation to a multi-model ensemble (that produces inconsistent voice). Use mm-ask ONLY for verification, not generation.
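The word limits from foa_requirements.json can be enforced with a quick count before checkpointing. This is a self-contained sketch: the limit value and the demo file are placeholders, and in the pipeline the real limit would be read from "$PROP/foa_requirements.json" and the real section from "$PROP/sections/$section.md":

```shell
WORD_LIMIT=1500   # placeholder; read the real limit from foa_requirements.json
printf 'This is a short demo section body.\n' > /tmp/gw_demo_section.md

# $(( ... )) normalizes the whitespace some wc implementations prepend.
WORDS=$(( $(wc -w < /tmp/gw_demo_section.md) ))
if [ "$WORDS" -gt "$WORD_LIMIT" ]; then
  echo "OVER limit ($WORDS/$WORD_LIMIT words): trim before checkpointing"
else
  echo "within limit ($WORDS/$WORD_LIMIT words)"
fi
```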
Reference figures using Markdown (the image path below is illustrative — use the actual filename from the figures directory):

![Figure N](figures/figure-N.png)

*Figure N: Caption text explaining the figure content.*
Ensure every referenced figure exists in "$PROP/sections/figures/".
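One way to mechanize that existence check. The regex for figure paths is an assumption about how references are written; the demo uses a temp directory, and in the pipeline "$tmp" would be "$PROP/sections":

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/figures"
printf '![Figure 1](figures/fig1.png)\n![Figure 2](figures/fig2.png)\n' > "$tmp/demo.md"
touch "$tmp/figures/fig1.png"   # fig2.png deliberately missing

# Extract every figures/... reference and flag the ones with no file on disk.
grep -o 'figures/[A-Za-z0-9._-]*' "$tmp/demo.md" | sort -u | while read -r fig; do
  [ -f "$tmp/$fig" ] || echo "MISSING: $fig"
done
```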
Save the draft checkpoint atomically:
cat "$PROP/sections/$section.md" | uv run gw-state save-checkpoint "$PROP" proposal_writing "$section" draft
If --multi-draft is passed OR multi_draft.enabled: true in config:
uv run gw-multi-draft \
--command "claude --print 'Write section $section variant {variant_id} at {output}'" \
--variants v1 v2 v3 \
--output-dir "$PROP/state/checkpoints/proposal_writing/$section/drafts" \
--log-dir "$PROP/logs" \
--target-words "$WORD_LIMIT" \
--select-best \
--output-file "$PROP/logs/${section}_multi_draft.json"
Then copy the best variant into sections/$section.md AND re-save the draft
checkpoint so a subsequent resume picks up the winning variant, not the
pre-multi-draft placeholder:
BEST=$(python3 -c "import json; print(json.load(open('$PROP/logs/${section}_multi_draft.json'))['best']['output_path'])")
cp "$BEST" "$PROP/sections/$section.md"
cat "$PROP/sections/$section.md" | uv run gw-state save-checkpoint "$PROP" proposal_writing "$section" draft
If MM_AVAILABLE=1 and multi_model.claim_verification: true:
Launch mm-ask to verify claims in parallel across 3 external models:
cat > /tmp/gw_section_verify.txt <<EOF
Verify the factual claims and citations in this grant section. For each
problem you find, output ONE JSON object per line (JSONL):
{"claim": "...", "issue": "CITATION_HALLUCINATED|CLAIM_UNSUPPORTED|FACT_OUTDATED|...", "severity": "critical|warning|info", "fix": "..."}
Check: (1) every citation is real, (2) citations actually support the claim
next to them, (3) quantitative claims are accurate, (4) no internal
contradictions with other sections.
Return nothing if the section is clean. Do NOT wrap in markdown — just JSONL.
--- SECTION: $section ---
$(cat "$PROP/sections/$section.md")
EOF
uv run mm-ask \
--models gpt-5.4-high,gemini-3.1-pro,grok-4-20-thinking \
--prompt-file /tmp/gw_section_verify.txt \
--output "/tmp/gw_section_${section}_verify.json" \
--timeout 300
Merge the 3 reviewers' JSONL outputs: an issue flagged by ≥2 reviewers is high-confidence; single-reviewer issues are still worth checking but flagged as lower-confidence. Save the merged result as the section's verification record.
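A merge sketch in the document's own python3-via-shell idiom. The file paths, the demo issue data, and the output schema (a "confidence" field added per issue) are assumptions; the dedup key is the (claim, issue) pair:

```shell
# Demo reviewer outputs; in the pipeline these are the three mm-ask result files.
printf '{"claim": "X", "issue": "CLAIM_UNSUPPORTED", "severity": "warning", "fix": "cite"}\n' > /tmp/r1.jsonl
cp /tmp/r1.jsonl /tmp/r2.jsonl
printf '{"claim": "Y", "issue": "FACT_OUTDATED", "severity": "info", "fix": "update"}\n' >> /tmp/r2.jsonl
: > /tmp/r3.jsonl   # third reviewer found nothing

python3 - /tmp/r1.jsonl /tmp/r2.jsonl /tmp/r3.jsonl <<'EOF'
import collections, json, sys

counts = collections.Counter()   # how many reviewers flagged each (claim, issue)
first = {}                       # first-seen copy of each issue, in input order
for path in sys.argv[1:]:
    for line in open(path):
        line = line.strip()
        if not line:
            continue
        issue = json.loads(line)
        key = (issue.get("claim"), issue.get("issue"))
        counts[key] += 1
        first.setdefault(key, issue)

for key, issue in first.items():
    issue["confidence"] = "high" if counts[key] >= 2 else "low"
    print(json.dumps(issue))
EOF
```

Here X was flagged by two reviewers and comes out "high"; Y was flagged once and comes out "low".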