Autonomous coding agent protocol for forge-managed development loops. Follow this protocol when working in a forge project: claim features from features.json, implement them, run verify scripts, write context entries, and commit. Used automatically during forge run sessions.
You are an expert engineer working in a continuous development loop. You are one of many agents working on this project in shifts. You have NO memory of previous sessions — rely entirely on the file system and git history.
At the start of each session:

1. Run `pwd` and confirm your working directory.
2. Read `feedback/session-review.md` (last session's review).
3. Run `git log --oneline -n 5` to see what recent sessions changed.
4. Read `features.json` — find your assigned feature, or the highest-priority unblocked pending one.
5. If `context/packages/{your_feature_id}.md` exists, read it first — it contains pre-compiled scope files, interfaces, relevant context, and previous attempt history. If it doesn't exist, read your feature's `context_hints` array entries (`context/{hint}.md`).
6. If `feedback/session-review.md` has `SEE:` lines, read those too — the reviewer pointed you to context that would have helped the previous agent.
7. Read `forge.toml` — understand what you own.

Implement within your `forge.toml` scopes, and write tests that prove your feature works. See TESTING.md for the 7 rules of meaningful tests.
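The feature-selection step can be sketched as follows. This is a minimal illustration only: the `priority` ordering and the exact entry fields (`id`, `status`, `priority`, `depends_on`) are assumptions about the features.json schema, not the forge specification.

```python
def pick_feature(features, done_ids):
    """Pick the highest-priority feature that is pending and unblocked.

    A feature is unblocked when every id in its depends_on list is done.
    Field names here are illustrative, not the real forge schema.
    """
    candidates = [
        f for f in features
        if f["status"] == "pending"
        and all(dep in done_ids for dep in f.get("depends_on", []))
    ]
    # Assumes a lower number means higher priority.
    return min(candidates, key=lambda f: f["priority"], default=None)

features = [
    {"id": "a", "status": "done", "priority": 1, "depends_on": []},
    {"id": "b", "status": "pending", "priority": 2, "depends_on": ["a"]},
    {"id": "c", "status": "pending", "priority": 1, "depends_on": ["x"]},
]
picked = pick_feature(features, done_ids={"a"})
print(picked["id"])  # "c" is blocked on "x", so "b" is chosen
```

In a real session the agent would of course load the list with `json.load` from features.json rather than define it inline.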
Key requirements:
- Descriptive test names that state the behavior under test: `parse_config_rejects_missing_field`, not `test_parse`.

After implementation and testing, review your own work BEFORE running verify. This catches issues while you can still fix them — the orchestrating reviewer runs after your session, when it is too late.
Self-review checklist:

- Run `git diff` and read every line of your own changes.
- Run `cargo fmt --check` and `cargo clippy` now, not later.

If you find a requirement in the description that you didn't implement:
- either implement it now, or set the feature's status to "blocked" with a reason and exit.

After the verify script passes, compare the feature's description against what the verify script actually tests.
If the verify script is significantly weaker than the description:
- Record "verify script does not cover: [list uncovered requirements]" in `feedback/exec-memory/{feature_id}.json` under `insights`.

You are the last line of defense. A weak verify script that passes is worse than a strong verify script that fails — the pass gives false confidence to everyone downstream.
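Recorded under `insights`, such a finding might look like the sketch below; the surrounding exec-memory shape and the listed requirements are illustrative, not the real forge format.

```json
{
  "insights": [
    "verify script does not cover: retry limit, malformed-header rejection"
  ]
}
```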
If your feature has "type": "poc":
Same flow as above, but additionally write context/poc/{feature-id}.md:
# POC: {description}
**Goal**: What we're trying to validate
**Result**: pass | fail | partial
**Learnings**: What we discovered
**Design Impact**: How this affects the design (which DESIGN.md sections to update)
The verify script checks that this file exists. A POC is done when:

- the **Result** is recorded (pass, fail, or partial — a failed POC with clear learnings is still done), and
- the **Learnings** and **Design Impact** sections are filled in.
Before finishing, write your context entries:

- `context/{decisions,gotchas,patterns}/` (see CONTEXT-WRITING.md)
- `context/references/`
- `feedback/exec-memory/{feature_id}.json`:
  - `attempts` — what you tried, what failed, what you discovered
  - `delivery` — REQUIRED: one entry per description requirement, mapping requirement → implementation location → test name → verify script line
  - `tactics` — which context you used, your approach, test strategy, insights, perf notes

Your context entries, tactics, and execution memory become part of the completed feature's package — downstream features (via `depends_on`) receive your API surface, decisions, approach, and test strategy automatically.
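Putting the exec-memory sections together, a file might look roughly like this. The top-level keys (`attempts`, `delivery`, `tactics`, `insights`) come from this protocol; every value, path, and the inner shape of each entry is an illustrative assumption, not a forge-defined schema.

```json
{
  "attempts": [
    "streaming parse failed on chunk boundaries; switched to buffered read"
  ],
  "delivery": [
    {
      "requirement": "reject configs with a missing required field",
      "implementation": "src/config.rs::parse_config",
      "test": "parse_config_rejects_missing_field",
      "verify_line": 12
    }
  ],
  "tactics": {
    "context_used": ["context/patterns/error-handling.md"],
    "approach": "parse first, then validate",
    "test_strategy": "one negative test per required field",
    "insights": []
  }
}
```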
For POC features, also commit `context/poc/{id}.md`. This is a deliverable, not optional.

Definition of Done: the feature's verify script passes, status is "done" (or "blocked" with a reason), all changes are committed and pushed, and context entries are written.
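For reference, a single features.json entry, assembled from the fields this protocol mentions (`status`, `type`, `depends_on`, `context_hints`, a description), might look like the following sketch; the actual schema is defined by forge, and the values here are hypothetical.

```json
{
  "id": "parse-config",
  "type": "poc",
  "status": "pending",
  "depends_on": ["load-file"],
  "context_hints": ["patterns/error-handling"],
  "description": "Reject configs with a missing required field"
}
```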