Coordinate multiple specialized agents through a structured 7-step operational workflow. Use when: (1) task is too large for a single pass, (2) user asks to "break down", "orchestrate", "parallelize", "delegate", or "run a mission", (3) multiple independent investigations or actions needed concurrently, (4) project-wide refactoring/migration/analysis, (5) user asks for multi-agent coordination, (6) task decomposes into research, planning, implementation, validation phases, (7) work requires risk controls, quality gates, or structured monitoring.
You are coordinating a multi-agent mission. Follow this 7-step workflow precisely, taking action at each step. Do not describe what you will do. Execute.
Before scoping, load configuration from available sources:
.mission-control/settings.md — project-level settings (defaultModel, maxConcurrentAgents, requireApproval, retryOnFailure, maxRetries, escalateModelOnRetry, autoReview, autoTest, autoLearn, memoryEnabled, testCommand, devCommand, useWorktrees, custom agent types). Mission-level overrides (from the user request or a playbook) win over project settings.
If memoryEnabled is true and .mission-control/memory/ exists, read all .md files in that directory. Always load files tagged regardless of relevance. For other files, match tags against the mission goal terms and load the top 5 most relevant. Incorporate loaded learnings into your planning — they contain patterns, anti-patterns, and gotchas from past missions.
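The tag-matching step can be sketched as follows (a minimal illustration; the `file_tags` shape and the overlap scoring are assumptions, not plugin behavior):

```python
import re

def goal_terms(goal: str) -> set:
    """Lowercase word terms extracted from the mission goal."""
    return set(re.findall(r"[a-z0-9-]+", goal.lower()))

def rank_memory_files(file_tags: dict, goal: str, top_n: int = 5) -> list:
    """Rank memory files by tag overlap with the mission goal terms.

    `file_tags` maps file name -> set of tags parsed from that file's
    markdown; how tags are stored in the files is an assumption here.
    """
    terms = goal_terms(goal)
    scored = sorted(
        ((len(tags & terms), name) for name, tags in file_tags.items()
         if tags & terms),
        reverse=True,  # highest overlap first
    )
    return [name for _, name in scored[:top_n]]
```

Files with zero overlap are excluded entirely rather than padded in, so a sparse match list stays short.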
Check .mission-control/playbooks/ for project-specific playbooks. Also check built-in playbooks provided by this plugin (see references/orchestration-patterns.md for pattern-based templates). If a playbook matches the mission type (full-stack-feature, bug-investigation, refactoring, security-audit, migration), suggest it to the user:
PLAYBOOK MATCH: "full-stack-feature"
This playbook provides a pre-built task graph for feature implementation.
Use it? [Y/n]
If the user confirms (or requireApproval is never), load the playbook and skip to Step 3 with the playbook's predefined task structure, adjusting parameters to match the specific goal.
Produce the mission scope now using this format:
MISSION SCOPE
---------------------------------------------------------------------
Outcome: [One sentence: what success looks like]
Success Metric: [Measurable criteria to verify completion]
Budget: [Token/time constraints, or "standard session"]
Constraints: [Forbidden actions, compliance rules, reliability reqs]
In Scope: [What this mission covers]
Out of Scope: [What this mission explicitly excludes]
Stop Criteria: [When to halt if conditions change]
Deliverables: [Artifacts that must be produced]
---------------------------------------------------------------------
If the user's request is ambiguous, ask one clarifying question. Do not ask more than one.
Assign a risk tier (0-3) to the overall mission and to each subtask you will create in Step 3. Reference references/risk-tiers.md for full tier definitions and the failure-mode checklist.
| Tier | Name | Criteria | Required Controls |
|---|---|---|---|
| 0 | Low | Read-only, low blast radius, easy rollback | Basic validation evidence, rollback step recorded |
| 1 | Medium | User-visible changes, moderate impact, partial coupling | Independent reviewer agent, negative test, rollback note |
| 2 | High | Security/compliance/data integrity, high blast radius | Reviewer + adversarial failure-mode checklist, go/no-go gate |
| 3 | Critical | Irreversible actions, regulated data, severe incident on failure | Human confirmation before execution, two-step verification, contingency plan |
Output the tier assignment inline with the mission scope.
Apply these rules throughout the mission:
Retry failed tasks up to maxRetries from settings. If escalateModelOnRetry is false, retry with the same model.
Load the agent registry before decomposing. Read .mission-control/settings.md if it was not already loaded in Step 1. The full registry consists of built-in agents from this plugin plus any custom agents from the project settings file.
Built-in agents:
| Agent | subagent_type | Role | Default Model | Isolation |
|---|---|---|---|---|
| mission-planner | mission-control:mission-planner | Goal decomposition into task dependency graphs | sonnet | none |
| researcher | mission-control:researcher | Read-only codebase exploration and analysis | haiku | none |
| implementer | mission-control:implementer | Code implementation from specifications | sonnet | worktree |
| reviewer | mission-control:reviewer | Independent quality assurance and validation | sonnet | none |
| retrospective | mission-control:retrospective | Post-mission learning extraction | sonnet | none |
Always use the exact subagent_type value shown above when spawning built-in agents. This ensures each agent runs with the correct tool permissions and isolation settings defined in its agent file. In particular, mission-control:implementer must be used for all implementation tasks — it is the only agent type that runs in an isolated git worktree.
Custom agents from .mission-control/settings.md extend (never replace) the built-in agents. Each custom agent maps to one of the four core subagent_type values: Explore, Plan, Bash, or general-purpose. Never apply the mission-control:* prefix to custom agents — only built-in agents have registered agent files under that namespace.
Output the merged registry, including the exact subagent_type to use for each agent:
AGENT REGISTRY
---------------------------------------------------------------------
Built-in:
mission-planner → mission-control:mission-planner
researcher → mission-control:researcher
implementer → mission-control:implementer (isolated worktree)
reviewer → mission-control:reviewer
retrospective → mission-control:retrospective
Custom:
[agent-name] → [subagent_type from settings] (or "none")
---------------------------------------------------------------------
Choose the planning depth based on task size:
| Depth | When to Use | What It Produces |
|---|---|---|
| skip | 1 file, trivial change | Directly execute, no task graph needed |
| lite | 2-3 files, straightforward | Flat task list, minimal dependencies |
| spec | 4+ files, moderate complexity | Full task graph with dependencies and waves |
| full | Architectural changes, cross-cutting work | Detailed spec document + task graph + risk analysis |
If skip depth is selected, bypass the rest of the orchestration workflow. Execute the change directly with tools. For all other depths, continue.
Reference references/task-decomposition.md for dependency graph construction, parallel grouping, critical path identification, and Kahn's algorithm for topological ordering.
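Wave grouping via Kahn's algorithm can be sketched as follows (assuming tasks arrive as an id -> dependency-set mapping; the dict shape is illustrative):

```python
def build_waves(tasks: dict) -> list:
    """Group tasks into parallel execution waves via Kahn's algorithm.

    `tasks` maps task id -> set of dependency ids. Each wave holds tasks
    whose dependencies are satisfied by earlier waves. Raises on cycles.
    """
    remaining = {t: set(deps) for t, deps in tasks.items()}
    waves = []
    while remaining:
        # Tasks with no unsatisfied dependencies are ready now.
        ready = sorted(t for t, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        # Mark this wave's tasks as satisfied for everyone downstream.
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves
```

The cycle check doubles as graph validation: a malformed task graph fails loudly before any agent is spawned.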
List all subtasks now in dependency order. For each subtask, produce a task card:
TASK [ID]: [Name]
Agent Type: [mission-planner / researcher / implementer / reviewer / retrospective / <custom-type>]
Deliverable: [What this task produces]
Dependencies: [Task IDs that must complete first, or "none"]
Risk Tier: [0-3]
File Ownership: [Specific files this agent owns, or "read-only"]
Model: [haiku / sonnet / opus]
Rules for task cards:
Group tasks into execution waves based on dependencies:
EXECUTION PLAN
---------------------------------------------------------------------
Wave 1 (parallel): Task 1, Task 2, Task 3, Task 4 [no dependencies]
Wave 2 (sequential): Task 5 [depends on Wave 1]
Wave 3 (parallel): Task 6, Task 7 [depends on Task 5]
Wave 4 (sequential): Task 8 [depends on Wave 3]
Wave 5 (sequential): Task 9 [depends on Wave 4]
---------------------------------------------------------------------
Critical path: Task 1 -> Task 5 -> Task 7 -> Task 8 -> Task 9
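Given the same id -> dependency-set mapping, the critical path can be estimated as the longest dependency chain (a sketch that assumes unit cost per task; real duration estimates would replace that assumption):

```python
def critical_path(tasks: dict) -> list:
    """Longest dependency chain through the task graph (unit task cost).

    `tasks` maps task id -> set of dependency ids. Memoized recursion is
    fine here because the graph is a DAG.
    """
    memo = {}

    def longest(t):
        if t not in memo:
            deps = tasks[t]
            # Pick the predecessor with the longest chain behind it.
            best = max(deps, key=lambda d: len(longest(d)), default=None)
            memo[t] = (longest(best) if best is not None else []) + [t]
        return memo[t]

    return max((longest(t) for t in tasks), key=len)
```

Tasks off this path have slack; delays there do not move the mission's finish line.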
Select the pattern that best fits the mission's structure:
1. Fan-Out / Fan-In — Multiple independent queries, then synthesize results. Use when: several independent research queries, searching across different areas, running independent analyses.
Orchestrator
|-- Agent A (area 1) --\
|-- Agent B (area 2) ---+-> Synthesize
\-- Agent C (area 3) --/
2. Pipeline — Sequential stages where each output feeds the next. Use when: natural ordering exists (research -> plan -> implement -> validate).
Agent A -> Agent B -> Agent C -> Final Output
(research) (plan) (implement)
3. Explore-Then-Act — Deep exploration phase, then focused action. Use when: unfamiliar codebases, complex bugs, refactoring where understanding must precede changes.
Explore agents (parallel) -> Orchestrator decides -> Action agents
4. Competitive — Multiple agents attempt the same task with different approaches; pick the best. Use when: algorithm design, performance optimization, uncertain best approach.
Orchestrator
|-- Agent A (approach 1) --\
|-- Agent B (approach 2) ---+-> Evaluate & Select
\-- Agent C (approach 3) --/
5. Iterative Refinement — Implement, review, fix loop until quality threshold is met. Use when: code review cycles, documentation quality, complex implementations needing validation.
Agent A (implement) -> Agent B (review) -> [issues?] -> Agent A (fix) -> Agent B (re-review)
6. Supervisor with Workers — Persistent supervisor delegates to short-lived workers. Use when: many small subtasks across files, project-wide refactoring, batch operations.
Supervisor (persistent)
|-- Worker 1 (file A) -> done
|-- Worker 2 (file B) -> done
|-- Worker 3 (file C) -> done
\-- ... continues until all complete
7. Adaptive Retry — Automatic retry with escalated model on failure. Wraps any of the above patterns. Use when: retryOnFailure is true in settings. Applied automatically when an agent fails.
Agent A (haiku) -> [fail] -> Agent A' (sonnet) -> [fail] -> Agent A'' (opus) -> [fail] -> Escalate to user
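The escalation ladder above can be sketched as a retry wrapper (the `attempt` callable and the ladder list are illustrative, not a plugin API):

```python
MODEL_LADDER = ["haiku", "sonnet", "opus"]

def run_with_retry(attempt, start_model: str, max_retries: int,
                   escalate_on_retry: bool = True):
    """Run `attempt(model)` and retry on failure, escalating the model.

    `attempt` returns a result or raises. After the ladder tops out at
    opus, the final failure propagates, i.e. escalates to the user.
    """
    model = start_model
    for tries in range(max_retries + 1):
        try:
            return attempt(model)
        except Exception:
            if tries == max_retries:
                raise  # retries exhausted: surface to the user
            if escalate_on_retry:
                idx = MODEL_LADDER.index(model)
                model = MODEL_LADDER[min(idx + 1, len(MODEL_LADDER) - 1)]
```

With escalate_on_retry set False, the same model is reused on every attempt, matching the escalateModelOnRetry setting.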
Reference references/orchestration-patterns.md for detailed examples of each pattern.
Select the execution mode based on mission characteristics:
| Mission Characteristics | Execution Mode |
|---|---|
| Sequential work or same files | Direct tools (no subagents) |
| 2-3 independent read-only queries | Standalone subagents (no team) |
| Parallel work (3+ agents or any writes) | Agent-team (DEFAULT) |
| Parallel work + agent-to-agent coordination | Agent-team with peer messaging |
| High risk (Tier 2+) | Agent-team + dedicated reviewer teammate |
Default to agent-team mode. Only use standalone subagents for trivial fan-out of 2-3 read-only research agents where no coordination is needed. Only use direct tools for skip planning depth.
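The table above roughly maps to a rule function (a rough proxy only; the thresholds and mode labels here are assumptions, not plugin-defined identifiers):

```python
def choose_execution_mode(num_agents: int, any_writes: bool,
                          needs_peer_messaging: bool, risk_tier: int) -> str:
    """Map mission characteristics to an execution mode per the table."""
    if num_agents <= 1 and not any_writes:
        return "direct"            # sequential, read-only work
    if num_agents <= 3 and not any_writes and not needs_peer_messaging:
        return "standalone"        # trivial read-only fan-out
    if needs_peer_messaging:
        return "agent-team+messaging"
    # Tier 2+ missions add a dedicated reviewer teammate.
    return "agent-team+reviewer" if risk_tier >= 2 else "agent-team"
```

The rules are deliberately biased toward agent-team, mirroring the stated default.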
State your pattern and execution mode choice explicitly:
ORCHESTRATION
---------------------------------------------------------------------
Pattern: [Pattern name]
Execution Mode: [direct / standalone / agent-team / agent-team+messaging]
Rationale: [One sentence justification]
---------------------------------------------------------------------
Execute this step NOW. Do not describe what you plan to do. Act.
Use TeamCreate to establish the team:
TeamCreate(team_name: "<mission-name>", description: "<brief mission description>")
Name the team after the mission goal (kebab-case, descriptive).
Use TaskCreate for every subtask from Step 3. Set up dependencies with TaskUpdate(addBlockedBy) to enforce execution order.
Identify every task with no dependencies (Wave 1). Spawn agents for ALL of them in a single message. Do not spawn them one at a time.
For each agent, write a self-contained prompt that includes:
Choose the right model per agent:
haiku — Simple searches, running commands, straightforward tasks.
sonnet — Moderate complexity, most implementation work (default).
opus — Complex reasoning, architecture decisions, security review.
For Tier 1+ tasks, spawn a reviewer agent AFTER the implementation agent completes. Never spawn implementer and reviewer for the same work simultaneously.
Save the mission state to .mission-control/missions/active.json. Include the mission scope, risk tier, task graph, pattern, settings, and current status. This enables session recovery if the conversation is interrupted.
{
"id": "mission-<timestamp>",
"name": "<mission name>",
"goal": "<goal>",
"status": "active",
"createdAt": "<ISO-8601>",
"updatedAt": "<ISO-8601>",
"scope": { ... },
"settings": { ... },
"pattern": "<pattern>",
"riskTier": <tier>,
"tasks": [ ... ],
"log": [],
"playbook": "<playbook name or null>"
}
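A minimal sketch of persisting this state (the atomic write-then-rename step is an implementation suggestion to avoid torn files on interruption, not something the plugin requires):

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def save_mission_state(state: dict,
                       path: str = ".mission-control/missions/active.json"):
    """Write mission state atomically so a crash never leaves a torn file."""
    state["updatedAt"] = datetime.now(timezone.utc).isoformat()
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Write to a temp file in the same directory, then rename into place.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path), suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic on POSIX and Windows
```

Because readers only ever see the old or the new file, session recovery can trust whatever active.json contains.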
Only skip team creation for trivial fan-out of 2-3 read-only research agents where no coordination is needed. In that case, launch standalone Task calls without team_name.
TeamCreate(team_name: "preferences-feature", description: "Add user preferences page")
TaskCreate(subject: "Find auth patterns", ...)
TaskCreate(subject: "Find routing config", ...)
TaskCreate(subject: "Find theme implementation", ...)
TaskCreate(subject: "Find i18n config", ...)
[Spawn 4 agents in ONE message — all with no dependencies]
Task(team_name: "preferences-feature", name: "researcher-auth", subagent_type: "mission-control:researcher", ...)
Task(team_name: "preferences-feature", name: "researcher-routing", subagent_type: "mission-control:researcher", ...)
Task(team_name: "preferences-feature", name: "researcher-theme", subagent_type: "mission-control:researcher", ...)
Task(team_name: "preferences-feature", name: "researcher-i18n", subagent_type: "mission-control:researcher", ...)
Use TaskList to check team progress after each wave. Teammates send messages when they complete tasks or need help.
After each wave completes, produce a checkpoint report:
CHECKPOINT REPORT
---------------------------------------------------------------------
Wave: [Current wave number]
Tasks Completed: [IDs and names]
Tasks In Progress: [IDs and agents]
Tasks Blocked: [IDs -> Blocker -> Next Action]
Budget Burn: [Tokens used / estimated total]
Risk Updates: [New risks discovered during execution]
Decision: CONTINUE | RESCOPE | STOP
Rationale: [Why this decision]
---------------------------------------------------------------------
Apply these adjustments in real time:
Agent failure. If an agent fails its task and retryOnFailure is true: retry with an escalated model when escalateModelOnRetry is true, otherwise retry with the same model and a refined prompt. Once maxRetries is exhausted: mark the task as blocked, report to the user, recommend an action.
Task too complex. If an agent reports the task exceeds its scope:
Agent stalled. If an agent produces no meaningful output: check in via SendMessage.
Drift detection. If completed work diverges from the success metric:
After each wave, identify tasks whose dependencies are now satisfied. Spawn agents for all newly unblocked tasks in a single message. Use TaskUpdate to assign owners and update status.
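Identifying newly unblocked tasks reduces to a dependency-subset check (the dict shape is an illustrative stand-in for whatever TaskList returns):

```python
def newly_unblocked(tasks: dict, completed: set, running: set) -> list:
    """Tasks whose dependencies are all complete and that have not started.

    `tasks` maps task id -> set of dependency ids.
    """
    return sorted(
        t for t, deps in tasks.items()
        if t not in completed
        and t not in running
        and deps <= completed  # every dependency already finished
    )
```

Spawning the entire returned list in one message keeps each new wave fully parallel.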
After each checkpoint, update .mission-control/missions/active.json with current progress: task statuses, completed artifacts, log entries, and timestamp.
When all tasks are complete:
COMPLETION SUMMARY
---------------------------------------------------------------------
Planned Outcome: [From Step 1]
Achieved Outcome: [What actually got delivered]
Artifacts: [Files created/modified with full paths]
Key Decisions: [Important choices made during execution]
Validation Evidence: [Test results, review outcomes, manual verification]
Open Risks: [Unresolved issues or technical debt]
Follow-Ups: [Work items for future sessions]
Lessons Learned: [Reusable patterns or anti-patterns discovered]
---------------------------------------------------------------------
Present this summary to the user as the final deliverable.
If autoLearn is enabled in settings, run the retrospective agent and save its extracted learnings to .mission-control/memory/ as tagged markdown files.