Archive historical entries from STATE.md to keep it under the 150-line target
Codex shell compatibility: use `gpd` if it is on PATH; otherwise run `GPD_ACTIVE_RUNTIME=codex uv run gpd ...`.
</codex_runtime_notes>

STATE.md is the project's living memory — it accumulates decisions, insights, and context over the course of a research project. When it exceeds ~150 lines, older entries should be archived to keep the active context window efficient while preserving the full historical record.
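The archival mechanism described above (append under a dated header) can be sketched roughly as follows; the header format and the sample entries are assumptions for illustration, not the tool's exact output:

```shell
WORK=$(mktemp -d)
ARCHIVE="$WORK/STATE-ARCHIVE.md"

# A block of historical entries selected for archival (illustrative content).
ARCHIVED_BLOCK='- Decision: switched solver to L-BFGS (phase 2)
- Blocker [resolved]: license server outage'

# Append under a dated header so each compaction run stays traceable.
printf '\n## Archived %s\n\n%s\n' "$(date +%F)" "$ARCHIVED_BLOCK" >> "$ARCHIVE"

cat "$ARCHIVE"
```

Because the archive is append-only, earlier compaction runs are never overwritten; each run adds a new dated section.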
Routes to the compact-state workflow, which handles the archival logic described below.
<execution_context>
Triggered automatically when progress.md detects STATE.md exceeds the 150-line target, or manually via $gpd-compact-state.
</purpose>
<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>
Read these files using the read_file tool:
```bash
INIT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local init progress --include state)
if [ $? -ne 0 ]; then
  echo "ERROR: gpd initialization failed: $INIT"
  # STOP — display the error to the user and do not proceed.
fi
```
Extract: state_exists, state_content.
If state_exists is false:
No STATE.md found. Nothing to compact.
Exit.
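If the init payload is JSON, the two fields can be pulled out directly; the payload shape below is an assumption based on the field names above, not the tool's documented output:

```shell
# Illustrative init payload; the real shape comes from gpd.runtime_cli.
INIT='{"state_exists": true, "state_content": "## Current Position\n..."}'

# Extract the two fields with python3 (jq would work equally well).
STATE_EXISTS=$(printf '%s' "$INIT" | python3 -c 'import json,sys; print(str(json.load(sys.stdin)["state_exists"]).lower())')
STATE_CONTENT=$(printf '%s' "$INIT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["state_content"])')

if [ "$STATE_EXISTS" != "true" ]; then
  echo "No STATE.md found. Nothing to compact."
fi
```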
Count current lines:
```bash
STATE_LINES=$(wc -l < .gpd/STATE.md)
```
Report: "STATE.md is {STATE_LINES} lines."
Check thresholds:
If under 150 and not forced (`--force` flag absent): offer to compact anyway or exit.
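The gating logic can be sketched as a small shell fragment; the 150-line threshold comes from the target above, and `FORCE` is a stand-in for however the `--force` flag is conveyed:

```shell
WORK=$(mktemp -d)
# Fixture: a STATE.md well under the threshold.
printf '%s\n' "line 1" "line 2" "line 3" > "$WORK/STATE.md"

THRESHOLD=150
FORCE=0   # assumption: 1 when --force was passed

STATE_LINES=$(wc -l < "$WORK/STATE.md")
if [ "$STATE_LINES" -lt "$THRESHOLD" ] && [ "$FORCE" -eq 0 ]; then
  DECISION="offer"    # ask the user: compact anyway, or exit
else
  DECISION="compact"
fi
echo "$DECISION"
```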
</step>
The gpd CLI handles the detailed archival logic:
```bash
RESULT=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local state compact)
if [ $? -ne 0 ]; then
  echo "ERROR: state compact failed: $RESULT"
  # STOP — STATE.md may be in an inconsistent state.
fi
```
Parse result JSON for: compacted (bool), reason, original_lines, new_lines, archived_lines, warn.
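A minimal sketch of reading those fields; the JSON payload below is inferred from the field list and is illustrative, not the tool's exact output:

```shell
# Illustrative compact result; the real payload comes from `state compact`.
RESULT='{"compacted": true, "reason": null, "original_lines": 412, "new_lines": 131, "archived_lines": 281, "warn": null}'

# Helper: read one field, mapping JSON null to an empty string.
read_field() {
  printf '%s' "$RESULT" | python3 -c "import json,sys; v=json.load(sys.stdin)['$1']; print('' if v is None else v)"
}

COMPACTED=$(printf '%s' "$RESULT" | python3 -c 'import json,sys; print(str(json.load(sys.stdin)["compacted"]).lower())')
ORIGINAL_LINES=$(read_field original_lines)
NEW_LINES=$(read_field new_lines)

if [ "$COMPACTED" = "true" ]; then
  echo "Compacted: ${ORIGINAL_LINES} -> ${NEW_LINES} lines"
fi
```

The `ORIGINAL_LINES` and `NEW_LINES` variables are the ones interpolated into the commit message in the commit step below.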
The tool performs these archival operations:
- Archives decisions recorded in phases older than the retention window.
- Archives blockers marked [resolved] or struck through (~~...~~). Keeps active blockers.
- Archives performance metrics and session records from old phases.
- Appends all archived content to .gpd/STATE-ARCHIVE.md with a dated header.
If compacted is false, check reason:
- "within_budget": STATE.md is already small enough.
- "nothing_to_archive": STATE.md is large but nothing qualified for archival (all entries are current).

Report and exit.
</step>
```bash
# Check new line count
NEW_LINES=$(wc -l < .gpd/STATE.md)

# Verify STATE.md still has required sections
for SECTION in "Current Position" "Project Reference" "Accumulated Context" "Session"; do
  grep -q "## ${SECTION}" .gpd/STATE.md || echo "MISSING: ${SECTION}"
done

# Verify state.json was synced
ls -la .gpd/state.json
```
If required sections are missing: The compaction was too aggressive. Attempt recovery:
```bash
# First try: regenerate STATE.md directly from authoritative state.json
if [ -f .gpd/state.json ]; then
  echo "Attempting STATE.md recovery from state.json..."
  uv run python - <<'PY'
import json
from pathlib import Path
from gpd.core.state import save_state_json
cwd = Path(".")
state = json.loads((cwd / ".gpd" / "state.json").read_text(encoding="utf-8"))
save_state_json(cwd, state)
PY
  RECOVERY_METHOD="regenerated from authoritative state.json"
else
  # Fallback: restore from git (state.json also missing or corrupt)
  echo "state.json unavailable. Falling back to git restore..."
  git checkout -- .gpd/STATE.md
  RECOVERY_METHOD="restored from git"
fi
echo "Recovery method: ${RECOVERY_METHOD}"
```
echo "Recovery method: ${RECOVERY_METHOD}"
Report error and recovery method used, then exit.
If the state.json sync failed: do not delete state.json blindly. Keep .gpd/state.json (and .gpd/state.json.bak if present), inspect with gpd state validate, and use $gpd-sync-state or the recovery step above so JSON-only fields are preserved.
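Before trusting state.json for recovery, it is worth confirming it is at least well-formed JSON; a minimal sketch (the fixture's field names are assumptions):

```shell
WORK=$(mktemp -d)
# Fixture: a plausible state.json (field names illustrative).
printf '{"phase": 3, "position": "analysis"}\n' > "$WORK/state.json"

# json.tool exits nonzero on malformed JSON, so it doubles as a validator.
if python3 -m json.tool "$WORK/state.json" > /dev/null 2>&1; then
  STATE_JSON_OK=1
else
  STATE_JSON_OK=0   # corrupt: fall back to the git restore path instead
fi
echo "$STATE_JSON_OK"
```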
</step>
```bash
ls -la .gpd/STATE-ARCHIVE.md 2>/dev/null
ARCHIVE_LINES=$(wc -l < .gpd/STATE-ARCHIVE.md 2>/dev/null || echo 0)
```
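Recoverability can be spot-checked by confirming that a known archived entry actually appears in the archive file; a sketch with an illustrative fixture (the header format and entry are assumptions):

```shell
WORK=$(mktemp -d)
# Fixture: archive as it might look after one compaction run.
cat > "$WORK/STATE-ARCHIVE.md" <<'EOF'
## Archived 2025-01-15

- Decision: adopted fixed random seed for reproducibility (phase 1)
EOF

ARCHIVE_LINES=$(wc -l < "$WORK/STATE-ARCHIVE.md" 2>/dev/null || echo 0)
# A compaction that archived anything should leave a non-empty, dated archive.
if [ "$ARCHIVE_LINES" -gt 0 ] && grep -q '^## Archived ' "$WORK/STATE-ARCHIVE.md"; then
  echo "archive OK (${ARCHIVE_LINES} lines)"
fi
```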
Confirm archived content is recoverable. </step>
```bash
PRE_CHECK=$(/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local pre-commit-check --files .gpd/STATE.md .gpd/STATE-ARCHIVE.md .gpd/state.json 2>&1) || true
echo "$PRE_CHECK"
/home/qol/.gpd/venv/bin/python -m gpd.runtime_cli --runtime codex --config-dir ./.codex --install-scope local commit \
  "chore: compact STATE.md (${ORIGINAL_LINES} -> ${NEW_LINES} lines)" \
  --files .gpd/STATE.md .gpd/STATE-ARCHIVE.md .gpd/state.json
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GPD > STATE COMPACTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Before:** {original_lines} lines
**After:** {new_lines} lines
**Archived:** {archived_lines} lines to STATE-ARCHIVE.md
### What was archived:
- {N} decisions from phases < {keep_phase_min}
- {N} resolved blockers
- {N} performance metrics from old phases
- {N} historical session records
### Archive location:
.gpd/STATE-ARCHIVE.md ({archive_lines} total lines)
All archived content is recoverable from STATE-ARCHIVE.md or git history.
If STATE.md is still above 150 lines after compaction:
STATE.md is now {new_lines} lines (target: 150).
Remaining entries are all current-phase content. To further reduce:
- Summarize verbose intermediate results
- Move detailed derivation logs to phase SUMMARY.md files
- Keep only the latest key results, not historical progression
<failure_handling>
Run `head -5 .gpd/STATE.md` to verify the file is readable.
</failure_handling>
<success_criteria>
- STATE.md is reduced toward the 150-line target with all required sections intact.
- Archived content is present in .gpd/STATE-ARCHIVE.md and recoverable.
- The compaction commit includes STATE.md, STATE-ARCHIVE.md, and state.json.
</success_criteria>
</execution_context>

If the --force flag is present, skip the line-count check and compact regardless of current size.