Multi-executor task routing skill for intelligently delegating coding tasks to the right local AI coding agent CLI (claude, gemini, codex, cursor, copilot). Use when local AI coding CLIs are available and you need to decide which agent handles which subtask.
When one or more local AI coding CLI tools are available (claude_execute, gemini_execute, codex_execute, cursor_execute, copilot_execute), this skill guides how to intelligently route each subtask to the right executor rather than implementing it manually.
The core principle: match each subtask to the executor whose strengths fit it.
| Executor | Tool Name | Best For | Avoid For |
|---|---|---|---|
| Claude Code | claude_execute | Architecture decisions, code review, complex refactoring, cross-file analysis, debugging hard logic bugs, explaining large codebases | Pure frontend pixel work |
| Gemini CLI | gemini_execute | Frontend UI components (React/Vue/HTML/CSS), algorithm implementation, tasks needing 1M token context, multimodal (image + code) | Database schema design |
| OpenAI Codex | codex_execute | Backend API (REST/GraphQL), database/ORM, server-side logic, CLI tools, scripts, data pipelines | UI component styling |
| Cursor | cursor_execute | General-purpose AI coding: any coding task, file editing, shell commands, codebase search, debugging | — |
| GitHub Copilot | copilot_execute | General-purpose AI coding: any coding task, file editing, shell commands, codebase search, debugging | — |
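The routing table above can be expressed as a lookup structure. This is only an illustrative sketch; the category names and the fallback ordering after the specialist are assumptions, not part of any tool's API:

```python
# Hypothetical routing table: task category -> executors to try, in order.
# Specialist first, then the general-purpose agents from the table above.
ROUTING_TABLE = {
    "frontend": ["gemini_execute", "cursor_execute", "copilot_execute"],
    "backend": ["codex_execute", "cursor_execute", "copilot_execute"],
    "architecture": ["claude_execute", "cursor_execute", "copilot_execute"],
    "general": ["cursor_execute", "copilot_execute", "claude_execute"],
}

def preferred_executors(category: str) -> list[str]:
    """Return the ordered fallback chain for a task category."""
    return ROUTING_TABLE.get(category, ROUTING_TABLE["general"])
```

Unknown categories fall back to the general-purpose chain, matching the "no specialist match" branch of the routing flow.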
```
Task received
  ↓
CHECK available tools (inspect which CLI tools are present)
  ↓
ONE tool available?
 ├─ YES → use that single tool for ALL coding tasks immediately
 └─ NO (multiple tools) → continue below
  ↓
CLASSIFY the task:
 ├─ Frontend UI / styling / components? → gemini_execute (if available)
 ├─ Backend API / DB / server logic?   → codex_execute (if available)
 ├─ Architecture / review / refactor?  → claude_execute (if available)
 └─ General coding / no specialist match? → cursor_execute or copilot_execute or claude_execute
      (whichever is available, in this priority order)
  ↓
Multiple concerns (full-stack)? → split into subtasks, route each to the right specialist
  ↓
DELEGATE with a complete self-contained prompt
  ↓
If agent fails or is unavailable → retry with next available tool from the list
  ↓
VERIFY the result
  ↓
INTEGRATE and report back
```
When only one coding agent is available (e.g., only copilot_execute or only claude_execute):
Use that agent for everything: e.g., if only copilot_execute is available, use it for backend, frontend, refactoring, and everything else.

When multiple coding agents are available, route by specialty:

- Frontend UI / styling → gemini_execute
- Backend API / server logic → codex_execute
- Architecture / review / refactoring → claude_execute
- General coding / no specialist match → cursor_execute or copilot_execute

For full-stack features, split into specialist subtasks and run them in parallel when dependencies allow:
```
Feature: "Add user profile editing"
  ↓
Split:
 ├─ [gemini_execute] "Create ProfileEditForm React component with validation UI"
 ├─ [codex_execute]  "Implement PUT /api/users/:id endpoint with validation"
 └─ [claude_execute] "Review the profile update data flow for security issues"
  ↓
Run frontend + backend in parallel (no dependency)
  ↓
Run review after both complete
```
A good prompt to a coding agent must be self-contained — the agent has no context from the conversation.
```
Implement a REST API endpoint `PUT /api/users/:id` in the Express app located at
./src/server.ts. The endpoint should:
1. Accept JSON body with fields: displayName (string), bio (string, max 500 chars)
2. Validate input using Zod
3. Update the user record in the SQLite database (schema in ./src/db/schema.ts)
4. Return the updated user object as JSON
5. Add a corresponding test in __tests__/api/users.test.ts
Working directory: /Users/alice/my-project
```
By contrast, a vague prompt fails:

```
Add the user profile update endpoint
```

(Missing: file locations, database type, validation library, test requirements)
```
[Action verb] [what to build] in [file/module location].
Requirements:
1. [Specific requirement]
2. [Specific requirement]
[Technologies/libraries to use]
[Any constraints or existing patterns to follow]
Working directory: [absolute path]
```
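A prompt following this template can also be assembled programmatically. The function below is a minimal sketch; its parameter names are illustrative assumptions, not any tool's API:

```python
def build_prompt(action: str, location: str, requirements: list[str],
                 stack: str, cwd: str) -> str:
    """Assemble a self-contained delegation prompt from the template above."""
    reqs = "\n".join(f"{i}. {r}" for i, r in enumerate(requirements, 1))
    return (
        f"{action} in {location}.\n"
        f"Requirements:\n{reqs}\n"
        f"Use: {stack}\n"
        f"Working directory: {cwd}\n"
    )
```

Because the delegated agent sees nothing but this string, every file path, library choice, and constraint must be passed in explicitly.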
When tasks have no dependencies between them, delegate all in a single step using parallel tool calls:
```js
// Parallel: frontend + backend have no dependency on each other
await Promise.all([
  gemini_execute({ prompt: "...", cwd: projectDir }),
  codex_execute({ prompt: "...", cwd: projectDir }),
]);

// Then sequential: review depends on both completing first
await claude_execute({ prompt: "Review the changes made to ...", cwd: projectDir });
```
For complex tasks routed to claude_execute, pass --extended-thinking via the args parameter to unlock Claude Code's deeper reasoning mode. This produces significantly better analysis for architecture, security, and multi-file refactoring:
```js
// Complex tasks — use extended thinking
claude_execute({
  prompt: "Review these 5 files for security vulnerabilities in the auth flow...",
  args: ["--extended-thinking"],
  cwd: projectDir
})

claude_execute({
  prompt: "Refactor the order module to remove circular dependencies across 6 files...",
  args: ["--extended-thinking"],
  cwd: projectDir
})

// Simple tasks — standard mode is fine
claude_execute({
  prompt: "Add JSDoc comments to the getUser function in src/services/user.ts",
  cwd: projectDir
})
```
When to use --extended-thinking:

| Use extended thinking | Standard mode |
|---|---|
| Refactoring across 5+ files | Adding a single function |
| Architecture or data model design | Fixing a clear bug |
| Security vulnerability analysis | Adding comments or docs |
| Debugging async race conditions | Renaming a variable |
| Evaluating multiple design approaches | Adding an import |
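The table above can be approximated with a crude heuristic. This is only a sketch: the keyword list and file-count threshold are assumptions for illustration, not signals that Claude Code exposes:

```python
# Keywords suggesting analytical or broad-scope work (illustrative choices).
COMPLEX_HINTS = ("refactor", "architecture", "security", "race condition", "design")

def needs_extended_thinking(task: str, files_touched: int = 1) -> bool:
    """Rough heuristic mirroring the table: broad or analytical tasks get
    --extended-thinking; small mechanical edits stay in standard mode."""
    text = task.lower()
    return files_touched >= 5 or any(hint in text for hint in COMPLEX_HINTS)
```

When in doubt, lean toward the table's judgment rather than the heuristic; the extra reasoning cost only pays off on genuinely complex work.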
After each coding agent responds:

1. Check the success field in the JSON response
2. Read stdout to see what the agent produced
3. Inspect created or modified files with glob_files or view_file
4. Build the project (execute_bash: npm run build or equivalent) to catch compile errors
5. Run the relevant tests (execute_bash: npm test -- --testPathPatterns="...")

If a specialist tool call fails or the CLI is not installed:
1. Retry with the next available agent (e.g., if codex_execute fails for a backend task, try copilot_execute or claude_execute)
2. Implement manually (edit_file, write_file, execute_bash) only as a last resort, after all available coding agents have been tried

| Anti-pattern | Why it fails | Correct approach |
|---|---|---|
| Routing all tasks to claude_execute only | Misses Gemini's UI strength and Codex's backend depth | Use the routing table |
| Sending vague prompts ("fix the bug") | Agent lacks context to act | Write self-contained prompts with file paths and requirements |
| Not verifying output | Undetected compile errors or test failures | Always build + test after delegation |
| Doing the implementation yourself instead of delegating | Defeats the purpose of having specialist agents | Delegate first; only fall back if delegation fails |
| Ignoring a locally installed tool because it's "not the best specialist" | Any available coding agent is better than doing it manually | If a tool is available, use it — even if it's not the ideal specialist for the task |
| Assuming only one coding tool is installed | There may be multiple tools; each should be tried on failure | Enumerate all available tools and use them as a fallback chain |
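The "always build + test" rule from the table can be scripted. Below is a minimal sketch: the npm commands are stack-specific assumptions, and the JSON success field follows this document's description of the agent response:

```python
import json
import subprocess

def verify(agent_response: str, cwd: str) -> bool:
    """Check the agent's JSON response, then build and test the project.
    Assumes an npm project; swap the commands for your stack."""
    result = json.loads(agent_response)
    if not result.get("success"):
        return False  # agent reported failure: retry with the next tool
    for cmd in (["npm", "run", "build"], ["npm", "test"]):
        if subprocess.run(cmd, cwd=cwd).returncode != 0:
            return False  # compile error or test failure slipped through
    return True
```

Only after this check passes should the delegated change be integrated and reported back.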