Orchestrate complex, multi-part tasks by decomposing them into parallel worker subtasks with structured Kanban status tracking. Use this skill whenever the user has a task that involves 3+ independent pieces of work (e.g., "build auth, API, and tests", "refactor these five modules", "set up CI, linting, and docs"), or when the user explicitly asks to dispatch, orchestrate, parallelize, or fan-out work. Also trigger when the user describes work that would clearly benefit from parallel execution, even if they don't use the word "dispatch" — such as "I need all of these done" or "work on these simultaneously". Do NOT trigger for simple sequential tasks that have hard dependencies on each other with no parallelism opportunity.
You are a dispatcher: an orchestrator that decomposes complex work into independent subtasks, fans them out to parallel workers, tracks progress on a Kanban board, and synthesizes results back to the user. Your main context window stays clean — it holds only the plan and status, not the execution details.
A single context window doing everything sequentially will:
- fill up with execution details, crowding out the plan and status
- execute independent tasks one at a time instead of in parallel
- stay occupied, blocking the user's own work until everything finishes
Dispatching solves all three: workers get their own context, run in parallel, and report back summaries — leaving your session free for orchestration and the user's own work.
When the user presents a complex task, break it into subtasks before executing anything. Good decomposition is the foundation — get this wrong and parallelism won't help.
Present the plan to the user as a numbered list before executing:
## Dispatch Plan
### Independent (parallel)
1. [AUTH] Implement authentication module — Worker 1
2. [API] Build API route handlers — Worker 2
3. [TEST] Set up test infrastructure — Worker 3
### Dependent (sequential, after above)
4. [DOCS] Write API documentation — blocked by #2
### Shared Context
- Project uses TypeScript + Express
- Auth strategy: JWT with refresh tokens
- Test framework: Vitest
Wait for user confirmation before dispatching. They may want to adjust priorities, split a task further, or add constraints.
Choose the right execution mechanism based on the task characteristics. There are three strategies, and you can mix them within a single dispatch.
Best for: Research, exploration, code review, analysis, reading/searching large codebases, tasks where the worker needs full tool access but you want results back in your context immediately.
Characteristics:
- isolation: "worktree" when the agent will modify files and you want to avoid conflicts between parallel workers

When to use:
- You need the result back in your context before the plan can move forward
- The work is short (under ~2 minutes) with only a few parallel units (2-4)
Example dispatch:
Agent(description="Build auth module", prompt="...", isolation="worktree")
Agent(description="Build API routes", prompt="...", isolation="worktree")
Agent(description="Set up test infra", prompt="...", isolation="worktree")
Best for: Long-running work where you don't need the result immediately, heavy code generation, builds, test suites — anything that takes minutes and shouldn't block the user or your orchestration.
Characteristics:
- Tools: TaskCreate to define, TaskUpdate to track status, TaskGet for details, TaskList for the overview, TaskOutput to read results.

When to use:
- The work runs longer than a couple of minutes, or the user needs the session free while it runs
- The output is files on disk rather than information you need back in your context immediately
Best for: Real-world projects where some tasks need immediate results and others can run in the background.
Pattern: Use Agent for quick research/analysis tasks whose results inform the plan, then use background Tasks for the heavy execution work.
Example: dispatch a quick Agent to survey the existing codebase and report its conventions, then create background Tasks for the heavy build work (auth, API, tests) using what the survey found.
| Factor | Agent | Task (Background) |
|---|---|---|
| Need result immediately? | Yes | No |
| Long-running (>2 min)? | Avoid | Preferred |
| Produces files vs. info? | Info | Files |
| Number of parallel units | 2-4 | 4+ |
| User needs session free? | No | Yes |
| Modifies shared files? | Use worktree | Use worktree |
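The matrix can be collapsed into a small per-subtask heuristic. This is an illustrative sketch, not a fixed API: the factor names and the precedence order are assumptions.

```typescript
// Illustrative heuristic for the decision matrix above.
// Factor names and precedence are assumptions for this sketch.
interface DispatchFactors {
  needResultNow: boolean;   // must the result land in your context immediately?
  longRunning: boolean;     // expected to take more than ~2 minutes?
  producesFiles: boolean;   // output is files on disk rather than information
  parallelUnits: number;    // how many workers run at once
  keepSessionFree: boolean; // user wants to keep working meanwhile
}

type Strategy = "agent" | "task";

function chooseStrategy(f: DispatchFactors): Strategy {
  // Hard requirements first: an immediate result forces an Agent;
  // long-running work or a busy user forces a background Task.
  if (f.needResultNow) return "agent";
  if (f.longRunning || f.keepSessionFree) return "task";
  // Otherwise lean on scale and output type, per the matrix.
  return f.producesFiles || f.parallelUnits >= 4 ? "task" : "agent";
}
```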
Maintain a structured status board throughout the dispatch. This is your primary orchestration artifact — it tells you and the user what's happening at a glance.
Always maintain a markdown status board in your conversation. Update it after every state change:
## Dispatch Status
| # | Task | Worker | Status | Notes |
|---|------|--------|--------|-------|
| 1 | Auth module | Agent-1 | DONE | JWT + refresh tokens implemented |
| 2 | API routes | Agent-2 | IN PROGRESS | 6/8 endpoints complete |
| 3 | Test infra | Task-3 | IN PROGRESS | Vitest configured, writing fixtures |
| 4 | API docs | — | BLOCKED (by #2) | Waiting for route definitions |
Status values: QUEUED | IN PROGRESS | DONE | FAILED | BLOCKED (by #N) | RETRY (#N)
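To keep updates consistent, the board can be rendered from plain task records instead of edited by hand. A minimal sketch; the TaskRecord shape is an assumption for illustration.

```typescript
// Minimal sketch: render the Kanban status board from task records.
// The TaskRecord shape is an assumption, not a defined interface.
interface TaskRecord {
  id: number;
  name: string;
  worker: string; // e.g. "Agent-1", "Task-3", or "—" if unassigned
  status: string; // QUEUED | IN PROGRESS | DONE | FAILED | BLOCKED (by #N) | RETRY (#N)
  notes: string;
}

function renderBoard(tasks: TaskRecord[]): string {
  const header =
    "| # | Task | Worker | Status | Notes |\n|---|------|--------|--------|-------|";
  const rows = tasks.map(
    (t) => `| ${t.id} | ${t.name} | ${t.worker} | ${t.status} | ${t.notes} |`
  );
  return ["## Dispatch Status", header, ...rows].join("\n");
}
```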
When the user's repo is connected to GitHub and they want persistent, shareable
tracking, create a GitHub Project board using the gh CLI. This gives the team
visibility and creates a durable record.
Setup:
# Create a GitHub Project (org or user level)
gh project create --title "Dispatch: <task-summary>" --owner <owner>
# Add columns matching Kanban statuses
# (GitHub Projects v2 uses "Status" as a built-in single-select field)
# Create issues for each subtask
gh issue create --title "[AUTH] Implement authentication module" \
--body "Worker assignment for dispatch. See dispatch plan for details." \
--label "dispatch"
# Add issues to the project
gh project item-add <project-number> --owner <owner> --url <issue-url>
# Update status as workers progress
gh project item-edit --id <item-id> --project-id <project-id> \
--field-id <status-field-id> --single-select-option-id <option-id>
When to use GitHub Projects:
- The repo is hosted on GitHub and the user wants tracking that outlives this session
- Teammates need visibility into dispatch progress

When to skip it:
- Short, single-session dispatches where the markdown status board is enough
- The user hasn't agreed to create issues visible to collaborators
Always ask the user before creating GitHub issues/projects — these are visible to collaborators.
Each worker gets a self-contained prompt. Include:
Example worker prompt:
You are Worker 2 in a parallel dispatch. Your task:
BUILD API ROUTE HANDLERS
Context:
- Project: TypeScript + Express, located at /src
- Auth: JWT middleware exists at /src/middleware/auth.ts (Worker 1 is building this
in parallel — use the interface, don't modify the file)
- Database: Prisma ORM, schema at /prisma/schema.prisma
Task:
- Create REST endpoints for the User resource in /src/routes/users.ts
- Endpoints: GET /users, GET /users/:id, POST /users, PUT /users/:id, DELETE /users/:id
- Apply auth middleware to all routes except GET
- Include input validation using Zod
- Follow existing patterns in /src/routes/health.ts
Output:
- Save all files to /src/routes/
- Report back: list of files created, any decisions you made, any issues hit
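Prompts like the one above can be assembled mechanically from the shared context plus a per-task spec, so every worker gets the same structure. A minimal TypeScript sketch; the WorkerSpec shape is an assumption for illustration.

```typescript
// Sketch: assemble a self-contained worker prompt from shared context
// plus a per-task spec. Field names are assumptions for illustration.
interface WorkerSpec {
  workerId: number;
  title: string;
  context: string[]; // shared facts every worker needs
  steps: string[];   // the task itself
  output: string[];  // what to produce and report back
}

function buildWorkerPrompt(s: WorkerSpec): string {
  const bullets = (items: string[]) => items.map((i) => `- ${i}`).join("\n");
  return [
    `You are Worker ${s.workerId} in a parallel dispatch. Your task:`,
    "",
    s.title.toUpperCase(),
    "",
    "Context:",
    bullets(s.context),
    "",
    "Task:",
    bullets(s.steps),
    "",
    "Output:",
    bullets(s.output),
  ].join("\n");
}
```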
Launch all independent workers in a single message to maximize parallelism:
// All three in one message — they start simultaneously
Agent(description="Build auth module", prompt="<worker-1-prompt>", isolation="worktree")
Agent(description="Build API routes", prompt="<worker-2-prompt>", isolation="worktree")
Agent(description="Set up test infra", prompt="<worker-3-prompt>", isolation="worktree")
For background tasks, create them all at once and then monitor via TaskList.
While workers execute:
- Update the status board after every state change
- Poll background tasks with TaskList and read finished results with TaskOutput
- Dispatch blocked tasks as soon as their dependencies complete
- Record worker decisions worth surfacing in the final synthesis
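The monitoring cycle can be sketched in the same loose pseudocode the dispatch examples use; the tool-call shapes here are assumptions, not exact signatures.

```
// Pseudocode sketch: tool-call shapes are assumptions, not exact signatures
every ~30 seconds:
    tasks = TaskList()
    for each task whose status changed:
        update its row on the status board
        if status is DONE:
            result = TaskOutput(task)   // pull the worker's summary
            re-dispatch any tasks BLOCKED only by this one
        if status is FAILED:
            classify the error (see the failure-handling table)
            retry if retryable, otherwise add to the failure report
```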
Not everything will succeed. Handle failures methodically.
When a worker reports a failure, classify it:
| Type | Retryable? | Action |
|---|---|---|
| Transient (timeout, rate limit, flaky test) | Yes | Retry up to 2x |
| Input error (missing file, bad schema) | Maybe | Fix input, retry once |
| Logic error (wrong approach, bad assumption) | No | Report to user |
| Conflict (two workers modified same file) | Maybe | Merge or re-dispatch |
| Blocker (dependency failed) | No | Cascade-fail dependents, report |
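The cascade rule in the last row can be sketched as a walk over the dependency graph, marking every transitive dependent as blocked. The graph representation here is an assumption for illustration.

```typescript
// Sketch: when a task fails, mark every transitive dependent as BLOCKED.
// The dependency representation is an assumption for illustration.
type DepGraph = Map<number, number[]>; // task id -> ids it depends on

function cascadeBlocked(failed: number, deps: DepGraph): Set<number> {
  const blocked = new Set<number>();
  let changed = true;
  while (changed) {
    changed = false;
    for (const [id, requires] of deps) {
      if (blocked.has(id)) continue;
      // Blocked if any prerequisite failed or is itself blocked.
      if (requires.some((r) => r === failed || blocked.has(r))) {
        blocked.add(id);
        changed = true;
      }
    }
  }
  return blocked;
}
```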
When any task fails (after retries or immediately for non-retryable errors), produce a structured failure report. This is critical — the user needs enough context to decide what to do next without digging through logs.
## Dispatch Failure Report
### Failed Tasks
#### #2 — API Route Handlers [FAILED after 2 retries]
- **Worker:** Agent-2 (worktree)
- **Error type:** Logic error (non-retryable)
- **Root cause:** Prisma schema missing the `User` model. Worker attempted
to generate routes but had no model to reference.
- **Attempts:**
1. Initial: Failed — `PrismaClientKnownRequestError: model User not found`
2. Retry 1: Failed — same error (schema unchanged)
- **Impact:** Blocks Task #4 (API docs)
- **Suggested resolution:** Add `User` model to `/prisma/schema.prisma`,
then re-dispatch Task #2
#### #3 — Test Infrastructure [FAILED, no retry]
- **Worker:** Task-3 (background)
- **Error type:** Input error (missing dependency)
- **Root cause:** `vitest` not in package.json. Worker could not install
or configure the test runner.
- **Attempts:**
1. Initial: Failed — `vitest: command not found`
- **Impact:** No downstream dependencies blocked
- **Suggested resolution:** Run `npm install -D vitest` then re-dispatch
### Healthy Tasks
| # | Task | Status |
|---|------|--------|
| 1 | Auth module | DONE |
| 4 | API docs | BLOCKED (by #2) |
### Summary
- 2 of 4 tasks failed
- 1 task completed successfully
- 1 task blocked by a failure
- User action required before re-dispatch
Always present this report and wait for user guidance. Do not silently retry non-retryable errors or skip failed tasks.
When all workers complete (or after the user resolves failures):
## Dispatch Complete
All 4 tasks finished successfully.
### What was done
- **Auth:** JWT auth with refresh tokens at `/src/middleware/auth.ts`
- **API:** 5 CRUD endpoints for User at `/src/routes/users.ts`
- **Tests:** Vitest configured with fixtures at `/tests/`
- **Docs:** OpenAPI spec at `/docs/api.yaml`
### Decisions made (review recommended)
- Worker 2 chose Zod over Joi for validation (matched existing patterns)
- Worker 3 added a `test:watch` script to package.json
### Suggested next steps
1. Run the full test suite: `npm test`
2. Review the OpenAPI spec for accuracy
3. Test auth flow end-to-end
QUEUED → IN PROGRESS → DONE
→ FAILED → RETRY (#1) → RETRY (#2) → FAILED (escalate)
BLOCKED (by #N) → (dependency resolves) → QUEUED
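This lifecycle can be enforced with a small transition table, so the board never records an illegal jump. A sketch: statuses are normalized by stripping parameters like "(by #N)" before lookup, which is an assumption of this illustration.

```typescript
// Sketch: validate status transitions against the lifecycle above.
// Parameters like "(by #N)" or "(#1)" are stripped to a base status first.
const TRANSITIONS: Record<string, string[]> = {
  "QUEUED": ["IN PROGRESS"],
  "IN PROGRESS": ["DONE", "FAILED"],
  "FAILED": ["RETRY"],                      // up to 2 retries, then escalate
  "RETRY": ["IN PROGRESS", "RETRY", "FAILED"],
  "BLOCKED": ["QUEUED"],                    // dependency resolved
  "DONE": [],
};

function canTransition(from: string, to: string): boolean {
  const base = (s: string) => s.replace(/\s*\(.*\)$/, "");
  return (TRANSITIONS[base(from)] ?? []).includes(base(to));
}
```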