Per-agent implementation loop for the Pulse ecosystem. Use in two modes: worker mode under pulse:swarming, or degraded standalone single-worker mode when preflight does not allow a swarm. Implements the bead loop, verification discipline, coordination reporting, and safe pause/resume behavior.
If .pulse/onboarding.json is missing or stale for the current repo, stop and invoke pulse:using-pulse before continuing.
pulse:executing supports two modes:

- Worker mode, when spawned under pulse:swarming
- Standalone single-worker mode, when preflight recommends it

In both modes, the live bead graph is the source of truth for what to do next.
```
Initialize → Get Bead → Reserve Files → Implement → Verify → Close & Report
     ↑                                                             |
     └───────────────────── Context OK? Loop ──────────────────────┘

Context >65%? → Handoff → Stop
```
Determine mode from invocation plus .pulse/tooling-status.json:

- Invoked by pulse:swarming → run in worker mode
- recommended_mode=single-worker → run in standalone mode

Swarming gives you a runtime nickname first. Use that nickname as the attempted Agent Mail name, then keep the returned Agent Mail name for all later coordination calls.
Register your session with the coordination runtime:
Record both identities in your startup acknowledgment:

- Runtime nickname: <runtime-nickname>
- Agent Mail name: <resolved-agent-mail-name>

From this point on, use resolved_agent_mail_name for every coordination call.
- node .codex/pulse_status.mjs --json — quick onboarding/state/handoff scout

If any of these files does not exist, note the absence and proceed — do not fabricate content.
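A minimal shell sketch of that "note the absence" rule; `scout_files` is an illustrative helper, and the file list is the one this skill names, not a fixed set:

```shell
# Report which state files exist; never invent contents for the missing ones.
scout_files() {
  for f in "$@"; do
    if [ -e "$f" ]; then
      echo "present: $f"
    else
      echo "absent: $f"
    fi
  done
}

scout_files .pulse/onboarding.json .pulse/STATE.md .pulse/handoffs/manifest.json
```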
If the bead references learning_refs, read those specific learning files. Do not load all learnings by default.
Before you select a bead, you must report in on the epic thread. Startup is not complete until you have:

- Read AGENTS.md
- Posted a startup acknowledgment with both identities, stating that AGENTS.md was read and pulse:executing is loading
- Run fetch_inbox(...) on the epic topic
Do not call bv --robot-priority before this sequence is complete.
Use owner-scoped handoffs:

- Worker mode: .pulse/handoffs/worker-<agent>.json
- Standalone mode: .pulse/handoffs/single-worker.json

If a handoff exists and was written by a prior instance of you (same agent identity), resume from it instead of starting fresh.
In worker mode, every loop starts with coordination, not bead selection.
Start with fetch_inbox(project_key="<project-root-path>", agent_name="<resolved-agent-mail-name>", topic="<EPIC_TOPIC>").
If the thread looks stale, also run fetch_topic(project_key="<project-root-path>", topic_name="<EPIC_TOPIC>").
bv --robot-priority
Select the top-ranked bead that is actually executable: all upstream dependencies closed and its files not reserved by another agent.
If swarming suggests a bead via coordination, treat it as a startup hint or rescue instruction, not as a permanent assignment. Re-check the live graph before claiming the work.
br show <bead-id>
Minimum fields to confirm:
| Field | Purpose |
|---|---|
| dependencies | Upstream bead IDs that must close first |
| files | Files/modules in scope for this bead |
| verify | Concrete verification commands to run |
| verification_evidence | Path to the canonical evidence artifact (typically history/<feature>/verification/<bead-id>.md) |
| testing_mode | standard / tdd-required |
| decision_refs | Locked decisions from CONTEXT.md relevant to this bead |
| learning_refs | Learning file paths to read before implementing |
If any required field is missing, stop and bounce the bead back to validating or planning. Do not guess from free-form prose.
If testing_mode is tdd-required, confirm tdd_steps is present before implementation starts.
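A hedged sketch of the "stop and bounce" gate; `check_bead` and `bead.json` are hypothetical names (a real check would parse the bead record properly rather than grep for key strings):

```shell
# Crude key-presence check on a bead exported as JSON. If any required
# field is absent, bounce the bead instead of guessing from prose.
check_bead() {
  missing=""
  for field in dependencies files verify verification_evidence testing_mode; do
    grep -q "\"$field\"" "$1" 2>/dev/null || missing="$missing $field"
  done
  if [ -z "$missing" ]; then
    echo "fields ok"
  else
    echo "bounce bead: missing$missing"
  fi
}

check_bead bead.json
```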
In worker mode, reserve all listed files before editing.
file_reservation_paths(
project_key: "<project-root-path>",
agent_name: "<resolved-agent-mail-name>",
paths: ["src/foo.ts", "src/bar.ts"],
reason: "Working bead <bead-id>"
)
In standalone mode, there is no cross-worker race, but still treat the bead's files list as a hard scope boundary. Do not blend multiple beads into one ad hoc change.
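One way to keep that scope boundary honest, sketched in shell; `check_scope`, `changed.txt`, and `scope.txt` are illustrative stand-ins for `git diff --name-only` output and the bead's files list:

```shell
# Flag any changed path that is not in the bead's declared files list.
check_scope() {
  changed="$1"; allowed="$2"; out_of_scope=""
  while IFS= read -r path; do
    grep -qxF "$path" "$allowed" || out_of_scope="$out_of_scope $path"
  done < "$changed"
  if [ -z "$out_of_scope" ]; then
    echo "in scope"
  else
    echo "out of scope:$out_of_scope"
  fi
}

# Typical use: git diff --name-only > changed.txt, bead files list in scope.txt
```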
Report the conflict to the coordinator:
send_message(
project_key: "<project-root-path>",
sender_name: "<resolved-agent-mail-name>",
to: ["<COORDINATOR_AGENT_NAME>"],
thread_id: "<EPIC_ID>",
topic: "<EPIC_TOPIC>",
subject: "File conflict on <bead-id>",
body_md: "Need files: [list]. Currently held by: [holder]. Requesting resolution."
)
Wait for resolution. Do not proceed without your reservations.
While waiting, keep polling fetch_inbox(...) on the epic topic.
Once all reservations are granted, proceed to implementation immediately.
Read every source file you will modify. Do not write from memory or assumptions about file contents.
Before writing any code, scan your bead's description for decision IDs (D1, D2, ...). For each referenced ID, look up the locked decision in history/<feature>/CONTEXT.md and confirm your implementation honors it.

Match naming conventions, error handling patterns, import styles, and test structures found in the codebase.
Every artifact you create must be complete and consistent with those conventions, with no placeholder or stubbed-out sections.
Respect the bead's testing_mode:

- standard → implement normally, then verify with fresh evidence
- tdd-required → run a real red-green loop before production code closes

For tdd-required beads:

1. Run tdd_steps.red and confirm it fails for the expected reason
2. Implement, then run tdd_steps.green and confirm it passes

If production code was written before the red check, discard or rewrite that portion within the bead scope and restart the loop. Do not claim TDD from memory or intent alone.
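The red-green check can be mechanized so a pass is never claimed from memory. A sketch under stated assumptions: `run_tdd` is an illustrative helper, and its two arguments stand in for the bead's tdd_steps.red and tdd_steps.green commands:

```shell
# Fail fast if the red step does not actually fail, or the green step does.
run_tdd() {
  red_cmd="$1"; green_cmd="$2"
  if sh -c "$red_cmd" >/dev/null 2>&1; then
    echo "invalid red: test passed before implementation"
    return 1
  fi
  echo "red confirmed"
  # ...implement the production code here, then:
  if sh -c "$green_cmd" >/dev/null 2>&1; then
    echo "green confirmed"
  else
    echo "green failed"
    return 1
  fi
}
```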
Run the bead's verify steps exactly as written. Do not substitute easier checks.
Verification is not complete until you have fresh evidence from this execution pass.
Read the bead's verification_evidence field and update every declared artifact or explicit record there.
The standard artifact path is:
history/<feature>/verification/<bead-id>.md
The evidence record must include:

- The bead's testing_mode
- The verify command actually run, with its result

If testing_mode=tdd-required, also record:

- The tdd_steps.red command and its observed failure
- The tdd_steps.green command and its observed pass

If verification fails:

- Keep polling fetch_inbox(...) while blocked
- Escalate to pulse:debugging or surface the blocker to the user

Do not close the bead without a passing verification result and a fresh evidence record.
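A minimal sketch of such an evidence record; the field names follow this skill, but the exact layout is a suggestion, not a fixed schema:

```markdown
# Verification evidence: <bead-id>

- testing_mode: standard
- verify command run: <exact command from the bead's verify field>
- result: pass (fresh run, <ISO timestamp>)

## Output excerpt
<relevant trailing lines of the verify run>
```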
All actions must complete. Do not skip any, and do not start another bead until the completion report is sent (worker mode) or recorded (standalone mode).
Before br close, confirm all are true:

- Edits stayed within the files scope, or any expansion was surfaced and approved
- decision_refs were re-checked against the final implementation
- verify steps passed in a fresh run
- The verification_evidence entry is present and substantive
- For tdd-required beads, red-green evidence is recorded

br close <bead-id> --reason "Completed: <one-line summary of what was implemented>"
One commit per bead. Exactly this format:
git add <files-you-modified>
git commit -m "feat(<bead-id>): <summary matching br close reason>"
Do not batch multiple beads into one commit. Do not commit unrelated changes.
release_file_reservations(
agent_name: "<resolved-agent-mail-name>",
paths: ["src/foo.ts", "src/bar.ts"]
)
Release before sending the completion report so other agents can acquire these files immediately.
Worker mode:
send_message(
project_key: "<project-root-path>",
sender_name: "<resolved-agent-mail-name>",
to: ["<COORDINATOR_AGENT_NAME>"],
thread_id: "<EPIC_ID>",
topic: "<EPIC_TOPIC>",
subject: "Completed <bead-id>",
body_md: "Runtime nickname: <runtime-nickname>. Agent Mail name: <resolved-agent-mail-name>. Implemented: [summary]. Files: [list]. Verification: [tests passed / build clean]. Commit: [hash]."
)
Standalone mode: record completion in .pulse/STATE.md.
Completion reports should include the verification evidence path or paths, the final verification status, and any scoped follow-up that still needs a new bead.
Before you claim the next bead, run fetch_inbox(project_key="<project-root-path>", agent_name="<resolved-agent-mail-name>", topic="<EPIC_TOPIC>").
After each bead:
Use the standard handoff summary/resume briefing/transfer block contract from pulse:using-pulse. Treat this pause boundary as a checkpoint trigger: capture or refresh the feature checkpoint before stopping when the current phase meaningfully changed.
Worker mode handoff payload (write to .pulse/handoffs/worker-<agent>.json):
{
"schema_version": "2.0",
"handoff_id": "worker-<resolved-agent-mail-name>-<ISO-8601>",
"owner_type": "worker",
"owner_id": "worker-<resolved-agent-mail-name>",
"skill": "pulse:executing",
"feature": "<feature>",
"phase": "execution/<EPIC_ID>",
"status": "ready_to_resume",
"paused_at": "<ISO timestamp>",
"reason": "context_critical",
"next_action": "Check the epic thread, then run bv --robot-priority before claiming more work.",
"read_first": [
"AGENTS.md",
".pulse/STATE.md",
"history/<feature>/CONTEXT.md",
".pulse/handoffs/worker-<agent>.json"
],
"summary": "Worker paused cleanly because context is near the limit. The next turn should rejoin coordination, confirm the current graph state, and continue from the highest-priority executable bead.",
"payload": {
"runtime": {
"runtime_nickname": "<runtime-nickname>",
"agent_mail_name": "<resolved-agent-mail-name>",
"epic_id": "<EPIC_ID>",
"epic_topic": "<EPIC_TOPIC>",
"coordinator_agent_name": "<COORDINATOR_AGENT_NAME>"
},
"context_snapshot": {
"tokens_used_pct": 0.67,
"last_bead_closed": "<bead-id or null>"
},
"transfer": {
"status": "Worker is paused safely and no longer editing files.",
"completed": [
"Closed bead <bead-id> and sent the completion report"
],
"in_flight": [
"No bead currently claimed; resume from the live graph after checking mail"
],
"blockers": [],
"resume_notes": [
"Run fetch_inbox(...) on <EPIC_TOPIC> before selecting work",
"Re-check file reservations before editing any file",
"Use bv --robot-priority as the source of truth for the next bead"
]
},
"verification_evidence_paths": [
"history/<feature>/verification/<bead-id>.md"
]
}
}
Standalone mode handoff payload (write to .pulse/handoffs/single-worker.json):
{
"schema_version": "2.0",
"handoff_id": "single-worker-<ISO-8601>",
"owner_type": "worker",
"owner_id": "single-worker",
"skill": "pulse:executing",
"feature": "<feature>",
"phase": "execution/standalone",
"status": "ready_to_resume",
"paused_at": "<ISO timestamp>",
"reason": "context_critical",
"next_action": "Re-read state, inspect the next executable bead, and continue the standalone loop.",
"read_first": [
"AGENTS.md",
".pulse/STATE.md",
"history/<feature>/CONTEXT.md",
".pulse/handoffs/single-worker.json"
],
"summary": "Single-worker execution paused cleanly because context is near the limit. Resume by restoring state, checking the next bead, and continuing verification discipline.",
"payload": {
"context_snapshot": {
"tokens_used_pct": 0.67,
"last_bead_closed": "<bead-id or null>"
},
"transfer": {
"status": "Standalone execution is paused safely.",
"completed": [
"Closed bead <bead-id> and recorded completion in .pulse/STATE.md"
],
"in_flight": [
"Next priority hint: <bead-id or short description>"
],
"blockers": [],
"resume_notes": [
"Read the current bead fully with br show before editing",
"Keep file edits within the next bead's declared scope",
"Update verification evidence before closing another bead"
]
},
"verification_evidence_paths": [
"history/<feature>/verification/<bead-id>.md"
]
}
}
Register the handoff in .pulse/handoffs/manifest.json using the same summary, next_action, and owner file path.
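A sketch of what that manifest entry might look like; the field names mirror the handoff payload above, but the manifest's exact schema is not specified in this skill:

```json
{
  "handoffs": [
    {
      "owner_id": "worker-<resolved-agent-mail-name>",
      "path": ".pulse/handoffs/worker-<agent>.json",
      "summary": "Worker paused cleanly because context is near the limit.",
      "next_action": "Check the epic thread, then run bv --robot-priority before claiming more work.",
      "paused_at": "<ISO timestamp>"
    }
  ]
}
```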
Worker mode: notify the coordinator after writing the handoff.
send_message(
project_key: "<project-root-path>",
sender_name: "<resolved-agent-mail-name>",
to: ["<COORDINATOR_AGENT_NAME>"],
thread_id: "<EPIC_ID>",
topic: "<EPIC_TOPIC>",
subject: "[HANDOFF] <runtime-nickname> / <resolved-agent-mail-name>",
body_md: "Handoff summary: Worker paused cleanly because context is near the limit.\n\nResume briefing:\n- Next action: Check the epic thread, then run bv --robot-priority before claiming more work.\n- Read first: AGENTS.md, .pulse/STATE.md, history/<feature>/CONTEXT.md, .pulse/handoffs/worker-<agent>.json\n\nTransfer block:\n- Status: Worker is paused safely and no longer editing files.\n- Completed: [closed bead(s), sent completion report, updated evidence]\n- In flight: [next bead or \"none currently claimed\"]\n- Blockers: [none or concrete blocker]\n- Resume notes: [mail check, reservation check, graph check]"
)
If you detect context compaction (your conversation was summarized, or you notice gaps in your context):
STOP immediately. Do not continue implementing.
Re-read in this exact order before any further action:

1. AGENTS.md
2. history/<feature>/CONTEXT.md
3. br show <bead-id>

Only after re-reading all applicable items may you continue.
Why this is non-negotiable: Compaction erases knowledge of AGENTS.md, active reservations, and locked decisions. Agents that skip this step produce implementations that conflict with other workers and violate CONTEXT.md decisions.
Stop and reassess if you notice any of these:

- A bead was closed without a fresh verification_evidence record
- tdd-required was satisfied without a real red failure and green pass

| Action | Call |
|---|---|
| Register | Session registration via coordination runtime |
| Get priority bead | bv --robot-priority |
| Read bead | br show <id> |
| Reserve files | file_reservation_paths(...) |
| Release files | release_file_reservations(...) |
| Close bead | br close <id> --reason "..." |
| Send mail | send_message(project_key=..., sender_name=..., to=[...], thread_id=..., topic=..., subject=..., body_md=...) |
| Reply in thread | reply_message(project_key=..., message_id=..., sender_name=..., body_md=...) |
| Check inbox | fetch_inbox(project_key=..., agent_name=..., topic=...) |
| Check epic timeline | fetch_topic(project_key=..., topic_name=...) |
When spawned, swarming provides (via coordination message or task prompt):
- runtime_nickname — your runtime nickname from the parent spawn result
- coordinator_agent_name — swarm coordinator identity
- epic_thread_id — the coordination thread for this feature (normally the epic bead ID)
- epic_topic — shared swarm topic tag (recommended: epic-<EPIC_ID>)
- startup_hint — optional: a bead or area the orchestrator wants checked first
- feature_name — used to locate history/<feature>/CONTEXT.md

You resolve resolved_agent_mail_name yourself during session registration with the coordination runtime.
If any of the startup inputs are missing, query the coordination runtime for the swarm coordination message before proceeding.