Use context-mode tools (ctx_execute, ctx_execute_file) instead of Bash/cat when processing large outputs. Triggers: "analyze logs", "summarize output", "process data", "parse JSON", "filter results", "extract errors", "check build output", "analyze dependencies", "process API response", "large file analysis", "page snapshot", "browser snapshot", "DOM structure", "inspect page", "accessibility tree", "Playwright snapshot", "run tests", "test output", "coverage report", "git log", "recent commits", "diff between branches", "list containers", "pod status", "disk usage", "fetch docs", "API reference", "index documentation", "call API", "check response", "query results", "find TODOs", "count lines", "codebase statistics", "security audit", "outdated packages", "dependency tree", "cloud resources", "CI/CD output". Also triggers on ANY MCP tool output that may exceed 20 lines. Subagent routing is handled automatically via the PreToolUse hook.
<context_mode_logic> <mandatory_rule> Default to context-mode for ALL commands. Only use Bash for guaranteed-small-output operations. </mandatory_rule> </context_mode_logic>
Bash whitelist (safe to run directly):
- File mutations: mkdir, mv, cp, rm, touch, chmod
- Git writes: git add, git commit, git push, git checkout, git branch, git merge
- Navigation: cd, pwd, which
- Process control: kill, pkill
- Package installs: npm install, npm publish, pip install
- Output: echo, printf

Everything else → ctx_execute or ctx_execute_file. Any command that reads, queries, fetches, lists, logs, tests, builds, diffs, inspects, or calls an external service. This includes ALL CLIs (gh, aws, kubectl, docker, terraform, wrangler, fly, heroku, gcloud, etc.); there are thousands and we cannot list them all.

When uncertain, use context-mode. Every KB of unnecessary context reduces the quality and speed of the entire session.
About to run a command / read a file / call an API?
│
├── Command is on the Bash whitelist (file mutations, git writes, navigation, echo)?
│ └── Use Bash
│
├── Output MIGHT be large or you're UNSURE?
│ └── Use context-mode ctx_execute or ctx_execute_file
│
├── Fetching web documentation or HTML page?
│ └── Use ctx_fetch_and_index → ctx_search
│
├── Using Playwright (navigate, snapshot, console, network)?
│ └── ALWAYS use filename parameter to save to file, then:
│ browser_snapshot(filename) → ctx_index(path) or ctx_execute_file(path)
│ browser_console_messages(filename) → ctx_execute_file(path)
│ browser_network_requests(filename) → ctx_execute_file(path)
│ ⚠ browser_navigate returns a snapshot automatically — ignore it,
│ use browser_snapshot(filename) for any inspection.
│ ⚠ Playwright MCP uses a SINGLE browser instance — NOT parallel-safe.
│ For parallel browser ops, use agent-browser via execute instead.
│
├── Using agent-browser (parallel-safe browser automation)?
│ └── Run via execute (shell) — each call gets its own subprocess:
│ execute("agent-browser open example.com && agent-browser snapshot -i -c")
│ ✓ Supports sessions for isolated browser instances
│ ✓ Safe for parallel subagent execution
│ ✓ Lightweight accessibility tree with ref-based interaction
│
├── Processing output from another MCP tool (Context7, GitHub API, etc.)?
│ ├── Output already in context from a previous tool call?
│ │ └── Use it directly. Do NOT re-index with ctx_index(content: ...).
│ ├── Need to search the output multiple times?
│ │ └── Save to file via ctx_execute, then ctx_index(path) → ctx_search
│ └── One-shot extraction?
│ └── Save to file via ctx_execute, then ctx_execute_file(path)
│
└── Reading a file to analyze/summarize (not edit)?
└── Use ctx_execute_file (file loads into FILE_CONTENT, not context)
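The "save to file via ctx_execute, then index or extract" branches above can be sketched in Python. This is an illustrative sketch only: the response payload is simulated inline (in practice it comes from the MCP tool call), and the output path is a placeholder.

```python
import json
import os
import tempfile

# Simulated large MCP tool response (in practice: the output of a real
# tool call such as a GitHub API query). Assumed shape, for illustration.
response = json.dumps([{"id": i, "status": "open"} for i in range(1000)])

# Persist it so ctx_index(path) or ctx_execute_file(path) can read it
# server-side later, without the payload ever entering context.
out_path = os.path.join(tempfile.gettempdir(), "mcp-output.json")
with open(out_path, "w") as f:
    f.write(response)

# Print only a tiny confirmation, never the payload itself.
print(f"Saved {len(response)} bytes to {out_path}")
```

After this runs, ctx_index(path: out_path) enables repeated ctx_search calls, while ctx_execute_file(path: out_path) handles a one-shot extraction.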
| Situation | Tool | Example |
|---|---|---|
| Hit an API endpoint | ctx_execute | fetch('http://localhost:3000/api/orders') |
| Run CLI that returns data | ctx_execute | gh pr list, aws s3 ls, kubectl get pods |
| Run tests | ctx_execute | npm test, pytest, go test ./... |
| Git operations | ctx_execute | git log --oneline -50, git diff HEAD~5 |
| Docker/K8s inspection | ctx_execute | docker stats --no-stream, kubectl describe pod |
| Read a log file | ctx_execute_file | Parse access.log, error.log, build output |
| Read a data file | ctx_execute_file | Analyze CSV, JSON, YAML, XML |
| Read source code to analyze | ctx_execute_file | Count functions, find patterns, extract metrics |
| Fetch web docs | ctx_fetch_and_index | Index React/Next.js/Zod docs, then search |
| Playwright snapshot | browser_snapshot(filename) → ctx_index(path) → ctx_search | Save to file, index server-side, query |
| Playwright snapshot (one-shot) | browser_snapshot(filename) → ctx_execute_file(path) | Save to file, extract in sandbox |
| Playwright console/network | browser_*(filename) → ctx_execute_file(path) | Save to file, analyze in sandbox |
| MCP output (already in context) | Use directly | Don't re-index — it's already loaded |
| MCP output (need multi-query) | ctx_execute to save → ctx_index(path) → ctx_search | Save to file first, index server-side |
| Wipe indexed KB content | ctx_purge(confirm: true) | Permanently deletes all indexed content |
Use context-mode for ANY of these, without being asked:
| Situation | Language | Why |
|---|---|---|
| HTTP/API calls, JSON | javascript | Native fetch, JSON.parse, async/await |
| Data analysis, CSV, stats | python | csv, statistics, collections, re |
| Shell commands with pipes | shell | grep, awk, jq, native tools |
| File pattern matching | shell | find, wc, sort, uniq |
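As a sketch of the Python row above (data analysis via ctx_execute_file), assuming FILE_CONTENT holds CSV text. Here a small inline sample stands in for the pre-loaded variable; the column names are illustrative.

```python
import csv
import io
from collections import Counter

# In ctx_execute_file, FILE_CONTENT is pre-loaded from the file.
# This inline sample (hypothetical columns) stands in for it here.
FILE_CONTENT = """status,endpoint
200,/api/orders
500,/api/orders
200,/api/users
500,/api/orders
"""

rows = list(csv.DictReader(io.StringIO(FILE_CONTENT)))
by_status = Counter(r["status"] for r in rows)
errors = [r["endpoint"] for r in rows if r["status"].startswith("5")]

# Print only the summary; the raw rows never reach context.
print(f"{len(rows)} rows, status counts: {dict(by_status)}")
print("5xx endpoints:", ", ".join(sorted(set(errors))))
```

The pattern is the same for JSON or YAML: load FILE_CONTENT, reduce it to a few findings, and print only those.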
- Use the source parameter when multiple docs are indexed to avoid cross-source contamination. Matching is partial: source: "Node" matches "Node.js v22 CHANGELOG".
- Use the queries array to batch ALL search questions in ONE call:
  ctx_search(queries: ["transform pipe", "refine superRefine", "coerce codec"], source: "Zod")
- Use ctx_fetch_and_index for external docs. NEVER cat or ctx_execute with local paths for packages you don't own. Raw URLs work too: https://raw.githubusercontent.com/org/repo/main/CHANGELOG.md. Then use the source parameter in search to scope results to that specific document.
- NEVER console.log(JSON.stringify(data)); analyze first, print findings.
- NEVER ctx_index(content: large_data). Use ctx_index(path: ...) to read files server-side. The content parameter sends data through context as a tool parameter; use it only for small inline text.
- ALWAYS use the filename parameter on Playwright tools (browser_snapshot, browser_console_messages, browser_network_requests). Without it, the full output enters context.

<sandboxed_data_workflow> <critical_rule> When using tools that support saving to a file: ALWAYS use the 'filename' parameter. NEVER return large raw datasets directly to context. </critical_rule> <workflow> LargeDataTool(filename: "path") → mcp__context-mode__ctx_index(path: "path") → ctx_search() </workflow> </sandboxed_data_workflow>
This is the universal pattern for context preservation regardless of the source tool (Playwright, GitHub API, AWS CLI, etc.).
const resp = await fetch('http://localhost:3000/api/orders');
const { orders } = await resp.json();
const bugs = [];
const negQty = orders.filter(o => o.quantity < 0);
if (negQty.length) bugs.push(`Negative qty: ${negQty.map(o => o.id).join(', ')}`);
const nullFields = orders.filter(o => !o.product || !o.customer);
if (nullFields.length) bugs.push(`Null fields: ${nullFields.map(o => o.id).join(', ')}`);
console.log(`${orders.length} orders, ${bugs.length} bugs found:`);
bugs.forEach(b => console.log(`- ${b}`));
npm test 2>&1
echo "EXIT=$?"
gh pr list --json number,title,state,reviewDecision --jq '.[] | "\(.number) [\(.state)] \(.title) — \(.reviewDecision // "no review")"'
# FILE_CONTENT is pre-loaded by ctx_execute_file
import json
data = json.loads(FILE_CONTENT)
print(f"Records: {len(data)}")
# ... analyze and print findings
When a task involves Playwright snapshots, screenshots, or page inspection, ALWAYS route through file → sandbox.
Playwright browser_snapshot returns 10K–135K tokens of accessibility tree data. Calling it without filename dumps all of that into context. Passing the output to ctx_index(content: ...) sends it into context a SECOND time as a parameter. Both are wrong.
The key insight: browser_snapshot has a filename parameter that saves to file instead of returning to context. ctx_index has a path parameter that reads files server-side. ctx_execute_file processes files in a sandbox. None of these touch context.
Step 1: browser_snapshot(filename: "/tmp/playwright-snapshot.md")
→ saves to file, returns ~50B confirmation (NOT 135K tokens)
Step 2: ctx_index(path: "/tmp/playwright-snapshot.md", source: "Playwright snapshot")
→ reads file SERVER-SIDE, indexes into FTS5, returns ~80B confirmation
Step 3: ctx_search(queries: ["login form email password"], source: "Playwright")
→ returns only matching chunks (~300B)
Total context: ~430B instead of 270K tokens. Real 99% savings.
Step 1: browser_snapshot(filename: "/tmp/playwright-snapshot.md")
→ saves to file, returns ~50B confirmation
Step 2: ctx_execute_file(path: "/tmp/playwright-snapshot.md", language: "javascript", code: "
const links = [...FILE_CONTENT.matchAll(/- link \"([^\"]+)\"/g)].map(m => m[1]);
const buttons = [...FILE_CONTENT.matchAll(/- button \"([^\"]+)\"/g)].map(m => m[1]);
const inputs = [...FILE_CONTENT.matchAll(/- textbox|- checkbox|- radio/g)];
console.log('Links:', links.length, '| Buttons:', buttons.length, '| Inputs:', inputs.length);
console.log('Navigation:', links.slice(0, 10).join(', '));
")
→ processes in sandbox, returns ~200B summary
Total context: ~250B instead of 135K tokens.
browser_console_messages(level: "error", filename: "/tmp/console.md")
→ ctx_execute_file(path: "/tmp/console.md", ...) or ctx_index(path: "/tmp/console.md", ...)
browser_network_requests(includeStatic: false, filename: "/tmp/network.md")
→ ctx_execute_file(path: "/tmp/network.md", ...) or ctx_index(path: "/tmp/network.md", ...)
The filename + path combination is mandatory.

| Approach | Context cost | Correct? |
|---|---|---|
| browser_snapshot() → raw into context | 135K tokens | NO |
| browser_snapshot() → ctx_index(content: raw) | 270K tokens (doubled!) | NO |
| browser_snapshot(filename) → ctx_index(path) → ctx_search | ~430B | YES |
| browser_snapshot(filename) → ctx_execute_file(path) | ~250B | YES |
ALWAYS use the filename parameter when calling browser_snapshot, browser_console_messages, or browser_network_requests. Then process via ctx_index(path: ...) or ctx_execute_file(path: ...); never ctx_index(content: ...).

Data flow: Playwright → file → server-side read → context. Never: Playwright → context → ctx_index(content) → context again.
Subagents automatically receive context-mode tool routing via a PreToolUse hook. You do NOT need to manually add tool names to subagent prompts — the hook injects them. Just write natural task descriptions.
Common anti-patterns:
- curl http://api/endpoint via Bash → 50KB floods context. Use ctx_execute with fetch instead.
- cat large-file.json via Bash → entire file in context. Use ctx_execute_file instead.
- gh pr list via Bash → raw JSON in context. Use ctx_execute with a --jq filter instead.
- | head -20 → you lose the rest. Use ctx_execute to analyze ALL data and print a summary.
- npm test via Bash → full test output in context. Use ctx_execute to capture and summarize.
- browser_snapshot() WITHOUT the filename parameter → 135K tokens flood context. Always use browser_snapshot(filename: "/tmp/snap.md").
- browser_console_messages() or browser_network_requests() WITHOUT filename → entire output floods context. Always use the filename parameter.
- ctx_index(content: ...) → data enters context as a parameter. Always use ctx_index(path: ...) to read server-side. The content parameter should only be used for small inline text you're composing yourself.
- Calling another MCP tool (query-docs, GitHub API, etc.) then passing the response to ctx_index(content: response) → doubles context usage. The response is already in context; use it directly or save to file first.
- Relying on the browser_navigate auto-snapshot → the navigation response includes a full page snapshot. Don't use it for inspection; call browser_snapshot(filename) separately.
- Using ctx_stats to reset or wipe anything → ctx_stats is read-only (shows stats only). Use ctx_purge(confirm: true) to permanently delete all indexed content.