Build a project using Claude Code Agent Teams with tmux split panes. Takes a plan document path and optional team size. Use when you want multiple agents collaborating on a build.
You are coordinating a build using Claude Code Agent Teams. Read the plan document, determine the right team structure, spawn teammates, and orchestrate the build.
$ARGUMENTS[0] - Path to a markdown file describing what to build
$ARGUMENTS[1] - Number of agents (optional)

Read the plan document at $ARGUMENTS[0]. Understand:
If team size is specified ($ARGUMENTS[1]), use that number of agents.
If NOT specified, analyze the plan and determine the optimal team size based on:
Guidelines:
For each agent, define:
Enable tmux split panes so each agent is visible:
teammateMode: "tmux"
Before spawning agents, the lead reads the plan and defines the integration contracts between layers. This focused upfront work is what enables all agents to spawn in parallel without diverging on interfaces. Agents that build in parallel will diverge on endpoint URLs, response shapes, trailing slashes, and data storage semantics unless they start with agreed-upon contracts.
Identify which layers need to agree on interfaces:
Database → function signatures, data shapes → Backend
Backend → API contract (URLs, response shapes, SSE format) → Frontend
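One way to pin these interfaces down is for the lead to write each contract as a small typed shape with a concrete example response. The sketch below is illustrative only; every name, field, and endpoint is hypothetical, not from any specific plan:

```typescript
// Hypothetical lead-authored Backend → Frontend contract, written before any
// agent is spawned. Agents build to these shapes exactly; deviations go
// through the lead.

interface Session {
  id: string;
  createdAt: string; // ISO 8601 timestamp
}

interface Message {
  id: string;
  sessionId: string;
  role: "user" | "assistant";
  content: string;
}

// GET /api/sessions/:id  (no trailing slash, agreed upfront)
interface SessionResponse {
  session: Session;
  messages: Message[];
}

// A concrete example response both the backend and frontend agents can test against:
const example: SessionResponse = {
  session: { id: "s1", createdAt: "2025-01-01T00:00:00Z" },
  messages: [{ id: "m1", sessionId: "s1", role: "user", content: "hello" }],
};
```

Including a literal example object alongside the types removes the most common source of drift: two agents agreeing on field names but not on nesting or casing.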
From the plan, define each integration contract with enough specificity that agents can build to it independently:
Database → Backend contract:
Backend → Frontend contract (exact response shapes, e.g. {"session": {...}, "messages": [...]}):

Some behaviors span multiple agents and will fall through the cracks unless explicitly assigned. Identify these from the plan and assign ownership to one agent:
Common cross-cutting concerns:
Assign each concern to one agent with instructions to coordinate with others.
Before including a contract in agent prompts, verify:
- Exact URLs, including trailing slashes (POST /api/sessions/ vs POST /api/sessions)
- Concrete response shapes ({"session": {...}, "messages": [...]}, not "returns session with messages")

With contracts defined, spawn all agents simultaneously. Each agent receives the full context they need to build independently from the start. This is the whole point of agent teams — parallel work across boundaries.
Enter Delegate Mode (Shift+Tab) before spawning. You should not implement code yourself — your role is coordination.
You are the [ROLE] agent for this build.
## Your Ownership
- You own: [directories/files]
- Do NOT touch: [other agents' files]
## What You're Building
[Relevant section from plan]
## Contracts
### Contract You Produce
[Include the lead-authored contract this agent is responsible for]
- Build to match this exactly
- If you need to deviate, message the lead and wait for approval before changing
### Contract You Consume
[Include the lead-authored contract this agent depends on]
- Build against this interface exactly — do not guess or deviate
### Cross-Cutting Concerns You Own
[Explicitly list integration behaviors this agent is responsible for]
## Coordination
- Message the lead if you discover something that affects a contract
- Ask before deviating from any agreed contract
- Flag cross-cutting concerns that weren't anticipated
- Share with [other agent] when: [trigger]
- Challenge [other agent]'s work on: [integration point]
## Before Reporting Done
Run these validations and fix any failures:
1. [specific validation command]
2. [specific validation command]
Do NOT report done until all validations pass.
All agents are working in parallel. Your job as lead is to keep them aligned and unblock them.
Before any agent reports "done", run a contract diff:
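A contract diff can be as simple as comparing an actual response object's keys against the agreed contract keys. A minimal sketch, assuming the hypothetical {"session", "messages"} contract from earlier:

```typescript
// Minimal contract diff: report keys missing from the actual response and
// keys present that the contract never agreed to.

function contractDiff(
  actual: Record<string, unknown>,
  expectedKeys: string[],
): { missing: string[]; extra: string[] } {
  const actualKeys = Object.keys(actual);
  return {
    missing: expectedKeys.filter((k) => !actualKeys.includes(k)),
    extra: actualKeys.filter((k) => !expectedKeys.includes(k)),
  };
}

// Example: the backend added "metadata" without telling the lead.
const diff = contractDiff(
  { session: {}, messages: [], metadata: {} },
  ["session", "messages"],
);
// diff.extra → ["metadata"], diff.missing → []
```

A non-empty `extra` or `missing` list means someone deviated from the agreed contract, and the lead should resolve it before accepting "done".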
Each agent reviews another's work:
Anti-pattern: Parallel spawn without contracts (agents diverge)
Lead spawns all 3 agents simultaneously without defining interfaces
Each agent builds to their own assumptions
Integration fails on URL mismatches, response shape mismatches ❌
Anti-pattern: Fully sequential spawning (defeats purpose of agent teams)
Lead spawns database agent → waits for contract → spawns backend → waits → spawns frontend
Only one agent works at a time, no parallelism ❌
Anti-pattern: "Tell them to talk" (they won't reliably)
Lead tells backend "share your contract with frontend"
Backend sends contract but frontend already built half the app ❌
Good pattern: Lead-authored contracts, parallel spawn
Lead reads plan → defines all contracts upfront → spawns all agents in parallel with contracts included
All agents build simultaneously to agreed interfaces → minimal integration mismatches ✅
Good pattern: Active collaboration during parallel work
Agent A: "I need to add a field to the response — messaging the lead"
Lead: "Approved. Agent B, the response now includes 'metadata'. Update your fetch."
Agent B: "Got it, updating now"
Create a shared task list. Since contracts are defined upfront, agents can start building immediately — no inter-agent blocking for initial implementation work. Only block tasks that genuinely require another agent's output (like integration testing).
[ ] Agent A: Build UI components
[ ] Agent B: Implement API endpoints
[ ] Agent C: Build schema and data layer
[ ] Agent A + B + C: Integration testing (blocked by all implementation tasks)
Track progress and facilitate communication when agents need to coordinate.
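The blocking rules above can be sketched as a tiny dependency graph: implementation tasks have no prerequisites, so all of them are ready at spawn time, while integration testing waits for everything. Task names here are illustrative:

```typescript
// Sketch of the shared task list as a dependency graph. Only integration
// testing is blocked; the three implementation tasks can start immediately.

interface Task {
  name: string;
  blockedBy: string[]; // tasks that must finish first
}

const tasks: Task[] = [
  { name: "ui", blockedBy: [] },     // Agent A: build UI components
  { name: "api", blockedBy: [] },    // Agent B: implement API endpoints
  { name: "schema", blockedBy: [] }, // Agent C: schema and data layer
  { name: "integration", blockedBy: ["ui", "api", "schema"] },
];

// A task is ready when it is not done and all of its prerequisites are done.
function unblocked(tasks: Task[], done: Set<string>): string[] {
  return tasks
    .filter((t) => !done.has(t.name))
    .filter((t) => t.blockedBy.every((dep) => done.has(dep)))
    .map((t) => t.name);
}

// At spawn time nothing is done, yet three tasks are already ready.
```

This is why upfront contracts matter: with them, the "ready" set at spawn time is every implementation task, not just one.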
opacity-0 on interactive elements → invisible to automation. Add aria-labels and ensure keyboard/focus visibility.

The build is complete when:
Validation happens at two levels: agent-level (each agent validates their domain) and lead-level (you validate the integrated system).
Before any agent reports "done", they must validate their work. When analyzing the plan, identify what validation each agent should run:
Database agent validates:
Backend agent validates:
Frontend agent validates:
- Type check passes (tsc --noEmit)
- Production build succeeds (npm run build)

When spawning agents, include their validation checklist:
## Before Reporting Done
Run these validations and fix any failures:
1. [specific validation command]
2. [specific validation command]
3. [manual check if needed]
Do NOT report done until all validations pass.
After ALL agents return control to you, run end-to-end validation yourself. This catches integration issues that individual agents can't see.
Your validation checklist:
Can the system start?
Does the happy path work?
Do integrations connect?
Are edge cases handled?
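One way to keep this checklist honest is a small harness that runs each check and reports the names of the failures. The checks below are stubs; in a real build each would start the system, exercise the happy path, or probe an integration point:

```typescript
// Minimal harness for the lead's end-to-end checklist: run every check,
// collect the names of the ones that fail (a thrown error counts as failure).

interface Check {
  name: string;
  run: () => boolean;
}

function validate(checks: Check[]): string[] {
  const failures: string[] = [];
  for (const c of checks) {
    let ok = false;
    try {
      ok = c.run();
    } catch {
      ok = false;
    }
    if (!ok) failures.push(c.name);
  }
  return failures;
}

// Hypothetical checks mirroring the checklist above:
const results = validate([
  { name: "system starts", run: () => true },
  { name: "happy path works", run: () => true },
  { name: "integrations connect", run: () => false }, // e.g. a shape mismatch
]);
// results → ["integrations connect"]
```

Running every check and reporting all failures at once (instead of stopping at the first) gives the lead a complete picture before routing fixes back to agents.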
If validation fails:
Good plans include a Validation section with specific commands for each layer. When reading the plan:
Example plan validation section:
## Validation
### Database Validation
[specific commands to test schema and queries]
### Backend Validation
[specific commands to test API endpoints]
### Frontend Validation
[specific commands to test build and UI]
### End-to-End Validation
[full flow to run after integration]
Now read the plan at $ARGUMENTS[0] and begin:
Determine the team size ($ARGUMENTS[1] if provided, otherwise decide)