Use when performing multiple independent operations, such as reading multiple files, searching patterns, or querying memory. Sending all tool calls in a single message executes them in parallel, typically a 5-8x performance improvement.
Execute independent operations in parallel for dramatic performance improvements. Instead of sequential tool calls (5 operations × 8 seconds = 40 seconds), use parallel calls (5 operations in 1 message = 8 seconds).
Core principle: If operations don't depend on each other, execute them in parallel (single message, multiple tool calls).
How to invoke:
Skill({ skill: "parallel-execution-patterns" })
When to invoke: Before reading 2+ files, running 2+ searches, or dispatching 2+ agents.
Use parallel execution when:
- Operations are independent (none needs another's output)
- Operations read state rather than modify a shared resource
- You have 2+ file reads, searches, memory queries, or agent dispatches

Don't use when:
- One operation depends on another's result
- Operations modify the same resource
- Order matters for correctness
Sequential execution:

```
Read file A (8 sec)
  → Read file B (8 sec)
  → Read file C (8 sec)
Total: 24 seconds
```

Parallel execution:

```
Read file A ]
Read file B ] (all in single message)
Read file C ]
Total: 8 seconds (3x faster)
```
Real-world improvement: 5-8x faster for typical workflows
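The same speedup can be sketched in plain Python with a thread pool; the `read_file` function below is a hypothetical stand-in for a slow I/O-bound tool call, not a real API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def read_file(name: str) -> str:
    # Hypothetical stand-in for a slow, I/O-bound tool call (0.2 s each).
    time.sleep(0.2)
    return f"contents of {name}"

files = ["README.md", "ARCHITECTURE.md", "package.json"]

# Sequential: latencies add up (~0.6 s total).
start = time.perf_counter()
sequential = [read_file(f) for f in files]
seq_time = time.perf_counter() - start

# Parallel: latencies overlap (~0.2 s total).
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(read_file, files))
par_time = time.perf_counter() - start

assert sequential == parallel  # same results, far less wall-clock time
```

The results are identical either way; only the wall-clock time changes, because the waits overlap instead of adding up.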
```
Read("README.md")
[wait 8 seconds]
Read("ARCHITECTURE.md")
[wait 8 seconds]
Read("package.json")
[wait 8 seconds]
Total: 24 seconds
```
Single message with multiple Read calls:
```
// All reads execute in parallel
Read({ file_path: "/path/to/README.md" })
Read({ file_path: "/path/to/ARCHITECTURE.md" })
Read({ file_path: "/path/to/package.json" })
Read({ file_path: "/path/to/CONTRIBUTING.md" })
// Total: 8 seconds (same as one read)
```
When to use: gathering context from multiple docs, or reading an implementation file alongside its tests.
```
Grep(pattern: "authentication")
[wait 8 seconds]
Grep(pattern: "OAuth")
[wait 8 seconds]
Glob(pattern: "**/*.test.ts")
[wait 8 seconds]
Total: 24 seconds
```
Single message with multiple search calls:
```
// All searches execute in parallel
Grep({ pattern: "authentication", output_mode: "files_with_matches" })
Grep({ pattern: "OAuth", output_mode: "files_with_matches" })
Grep({ pattern: "JWT", output_mode: "files_with_matches" })
Glob({ pattern: "**/*.test.ts" })
Glob({ pattern: "**/*.spec.ts" })
// Total: 8 seconds
```
When to use: exploring a codebase with several unrelated search patterns, or combining Grep and Glob discovery.
```
mcp__memory__search_nodes("authentication")
[wait 2 seconds]
mcp__memory__open_nodes(["ProjectArchitecture"])
[wait 2 seconds]
mcp__memory__search_nodes("OAuth patterns")
[wait 2 seconds]
mcp__memory__search_nodes("failed approach")
[wait 2 seconds]
Total: 8 seconds
```
Single message with multiple MCP calls:
```
// All queries execute in parallel
const [similar, architecture, patterns, failures] = await Promise.all([
  mcp__memory__search_nodes({ query: "authentication implementation" }),
  mcp__memory__open_nodes({ names: ["ProjectArchitecture"] }),
  mcp__memory__search_nodes({ query: "OAuth patterns" }),
  mcp__memory__search_nodes({ query: "authentication failed approach" })
]);
// Total: 2 seconds (same as one query)
```
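The Promise.all pattern corresponds to `asyncio.gather` in Python. A minimal sketch, where `search_nodes` is a hypothetical stand-in for an MCP memory query with ~0.1 s latency:

```python
import asyncio

async def search_nodes(query: str) -> dict:
    # Hypothetical stand-in for an MCP memory query (~0.1 s latency).
    await asyncio.sleep(0.1)
    return {"query": query, "results": []}

async def gather_context() -> list:
    # All queries start at once; total time ≈ one query's latency.
    return await asyncio.gather(
        search_nodes("authentication implementation"),
        search_nodes("OAuth patterns"),
        search_nodes("authentication failed approach"),
    )

results = asyncio.run(gather_context())
```

As with Promise.all, the results come back in the order the queries were listed, regardless of which finished first.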
When to use: loading prior context (architecture, known patterns, past failures) before planning.
```
Task(fix bug in file A)
[wait for agent to complete: 5 minutes]
Task(fix bug in file B)
[wait for agent to complete: 5 minutes]
Task(fix bug in file C)
[wait for agent to complete: 5 minutes]
Total: 15 minutes
```
Single message with multiple Task calls:
```
// All agents execute in parallel
Task({
  subagent_type: "general-purpose",
  description: "Fix bug in file A",
  prompt: "Context file: tasks/session_context_bugfix_a.md. [details]"
})
Task({
  subagent_type: "general-purpose",
  description: "Fix bug in file B",
  prompt: "Context file: tasks/session_context_bugfix_b.md. [details]"
})
Task({
  subagent_type: "general-purpose",
  description: "Fix bug in file C",
  prompt: "Context file: tasks/session_context_bugfix_c.md. [details]"
})
// Total: 5 minutes (same as one agent)
```
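The dispatch-and-collect shape can be sketched with `concurrent.futures`; `run_agent` below is a hypothetical stand-in for a long-running sub-agent, not a real API:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_agent(description: str) -> str:
    # Hypothetical stand-in for a long-running sub-agent.
    time.sleep(0.1)
    return f"{description}: done"

tasks = ["Fix bug in file A", "Fix bug in file B", "Fix bug in file C"]

# Submit all agents at once; collect results as each one finishes.
with ThreadPoolExecutor() as pool:
    futures = {pool.submit(run_agent, t): t for t in tasks}
    results = {futures[f]: f.result() for f in as_completed(futures)}
```

All three tasks run concurrently, so the total wall-clock time is roughly that of the slowest single task.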
When to use: dispatching agents for independent subtasks, such as bug fixes in unrelated files.
```
Bash("git status")
[wait 3 seconds]
Bash("git diff")
[wait 3 seconds]
Bash("git log --oneline -10")
[wait 3 seconds]
Total: 9 seconds
```
Single message with multiple Bash calls:
```
// All git commands execute in parallel
Bash({ command: "git status", description: "Show working tree status" })
Bash({ command: "git diff", description: "Show unstaged changes" })
Bash({ command: "git log --oneline -10", description: "Show recent commits" })
// Total: 3 seconds
```
When to use: independent read-only commands, such as git status, diff, and log.
Ask these questions first:
- Does operation B need the result from operation A?
- Do the operations modify the same resource?
- Does order matter for correctness?
- Are the operations reading or writing?
```
Multiple operations needed?
├─ Yes → Are they independent?
│   ├─ Yes → Do they modify shared state?
│   │   ├─ No → ✅ PARALLELIZE
│   │   └─ Yes → ❌ Sequential
│   └─ No → ❌ Sequential
└─ No → Single operation (no parallelization)
```
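The decision tree reduces to a single boolean check; a minimal sketch:

```python
def should_parallelize(multiple_ops: bool, independent: bool, shares_state: bool) -> bool:
    # Mirrors the tree above: parallelize only when there are multiple
    # operations that are independent and touch no shared state.
    return multiple_ops and independent and not shares_state

assert should_parallelize(True, True, False)       # ✅ parallelize
assert not should_parallelize(True, True, True)    # shared state → sequential
assert not should_parallelize(True, False, False)  # dependent → sequential
assert not should_parallelize(False, True, False)  # single op → nothing to batch
```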
Scenario: Gather context from multiple docs
Operations: several Read calls on different documentation files
Independent? Yes (reading different files)
Parallelize: ✅ Yes
Scenario: Find patterns and implementations
Operations: multiple Grep/Glob calls with different patterns
Independent? Yes (different search patterns)
Parallelize: ✅ Yes
Scenario: Query memory before planning
Operations: several memory searches and node opens
Independent? Yes (different queries)
Parallelize: ✅ Yes
Scenario: Read implementation and tests
Operations: Read the implementation file and its test file
Independent? Yes (reading different files)
Parallelize: ✅ Yes
Scenario: Search then read results
Operations: Grep/Glob for matches, then Read the files found
Independent? No (step 2 needs step 1's output)
Parallelize: ❌ No (must be sequential)
Scenario: Search based on previous result
Operations: a second search whose pattern comes from the first search's output
Independent? No (step 2 needs step 1's output)
Parallelize: ❌ No (must be sequential)
Scenario: Edit same file multiple times
Operations: two Edit calls targeting the same file
Independent? No (both modify the same file)
Parallelize: ❌ No (must be sequential)
Scenario: Commit and push
Operations: git add, git commit, git push
Independent? No (must execute in order)
Parallelize: ❌ No (use chaining: git add . && git commit -m "msg" && git push)
Before (sequential):
Read architecture doc
[Commentary about architecture]
Read testing guide
[Commentary about testing]
Read API docs
[Commentary about API]
After (parallel):
[Read architecture doc, testing guide, API docs in parallel]
[Single commentary synthesizing all three]
Before (sequential):
Search for auth patterns
[Analyze results]
Search for OAuth code
[Analyze results]
Search for JWT usage
[Analyze results]
After (parallel):
[Search for auth patterns, OAuth code, JWT usage in parallel]
[Analyze all results together]
Before (sequential):
Query memory for architecture
Query memory for patterns
Query memory for failures
[Apply findings]
After (parallel):
[Query all memory contexts in parallel]
[Synthesize and apply findings]
```
# ❌ Bad: Sequential when could be parallel
Read README.md
[wait]
Read package.json
[wait]
Read tsconfig.json

# ✅ Good: Parallel reads
Read README.md, package.json, tsconfig.json (single message)
```
```
# ❌ Bad: Trying to parallelize dependent operations
Glob("**/*.ts")    ] Parallel attempt, but...
Read(glob_results) ] This needs glob results!

# ✅ Good: Sequential when necessary
Glob("**/*.ts")
[wait for results]
Read(specific files from results)
```
```
# ❌ Bad: Parallelizing when result synthesis is complex
Read 50 files in parallel
[Now have to synthesize 50 file contents - overwhelming]

# ✅ Good: Reasonable parallelization
Read 5-10 most relevant files in parallel
[Manageable synthesis]
```
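Keeping batches to a manageable size is a one-line chunking step; a sketch with a hypothetical file list:

```python
def batch(items: list, size: int = 10) -> list:
    # Split a long list into rounds so each parallel batch stays reviewable.
    return [items[i:i + size] for i in range(0, len(items), size)]

files = [f"file_{i}.ts" for i in range(50)]  # hypothetical file list
rounds = batch(files, size=10)
# 5 rounds of 10 parallel reads, instead of one overwhelming 50-wide batch
```

Each round is still fully parallel; the cap only limits how much output must be synthesized at once.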
Before parallel execution: ~64 seconds of sequential tool calls
After parallel execution: ~16 seconds with batched calls
Improvement: 4x faster (64s → 16s)
Typical workflow improvements:

Codebase analysis uses parallel execution for batched file reads and searches.
Result: 5-8x faster codebase analysis

Implementation work uses parallel execution for loading context up front.
Result: faster context loading, quicker implementation start

Test writing uses parallel execution for gathering test context.
Result: faster test context gathering
Before executing operations: run through the dependency questions above, then batch every independent call into a single message.
```
// Single message with multiple tool calls:

// Pattern 1: File reads
Read({ file_path: "path/to/file1.ts" })
Read({ file_path: "path/to/file2.ts" })
Read({ file_path: "path/to/file3.ts" })

// Pattern 2: Searches
Grep({ pattern: "pattern1" })
Grep({ pattern: "pattern2" })
Glob({ pattern: "**/*.test.ts" })

// Pattern 3: Memory queries
mcp__memory__search_nodes({ query: "query1" })
mcp__memory__search_nodes({ query: "query2" })
mcp__memory__open_nodes({ names: ["Entity1"] })

// Pattern 4: Agent dispatch
Task({ subagent_type: "type", prompt: "task1" })
Task({ subagent_type: "type", prompt: "task2" })

// All execute in parallel!
```
| Mistake | Fix |
|---|---|
| Sequential reads of independent files | Read all in single message |
| One search at a time | Batch all searches in parallel |
| Sequential memory queries | Use Promise.all pattern |
| Dispatching agents in separate messages | Single message, multiple Task calls |
| Parallelizing dependent operations | Check dependencies first |
| Not batching git commands | Parallel for independent, chain for sequential |
Good parallelization: independent reads, searches, memory queries, and agent dispatches batched into one message.
Bad parallelization: dependent operations, writes to the same resource, or batches too large to synthesize.
With parallel execution: N independent operations finish in roughly the time of one.
Without parallel execution: total time grows linearly with the number of operations.