Diagnose Nx sandbox violations from a sandbox report. Use when asked to "diagnose sandbox", "analyze sandbox report", "investigate sandbox violations", "check violations", when given a sandbox report JSON file or URL to investigate, or when the user pastes a staging.nx.app sandbox-report URL. Also trigger when discussing unexpected reads/writes in Nx task execution. Guides structured investigation of why tasks read/write undeclared files, determines root causes, and recommends fixes.
Sandbox violations occur when an Nx task reads files not declared as inputs or writes files not declared as outputs.
Unexpected reads fall into one of the classification categories described below (a missing declaration, a bad tool configuration, or a sandboxing gap). Unexpected writes follow the same logic.
The default assumption is that an unexpected access IS a missing declaration. The investigation's job is to understand WHY the process accesses the file — not to find reasons it shouldn't.
Never use Read, cat, head, python3, or jq on the raw report — all report parsing is handled by the script. To identify the inferring plugin, check inference.plugin in the script output or run jq '.targets.<target>.metadata' <detail-file>. Fixing the wrong plugin wastes entire investigation rounds.

The user provides one of: a sandbox report JSON file, a staging.nx.app sandbox-report URL, or a task ID.
If a task ID is provided but no report, ask the user for the report file.
Filtering: Most invocations will focus on specific files, not the entire report. The user may specify:

- a bare filename (`e2e.log`)
- exact file paths (`apps/nx-cloud/e2e.log`, `apps/nx-cloud/build/client/assets/main.js`)
- glob patterns (`*.tsbuildinfo`, `apps/nx-cloud/build/**`)
- a directory (`apps/nx-cloud/build/client/assets`)

When the user specifies files to focus on, pass them via --filter to the script. When they don't specify a filter and the report has many violations, summarize the groupings (by directory, extension) and ask which group(s) to investigate first rather than trying to investigate everything at once.
Run the context-gathering script immediately — this is the first tool call after reading the user's input.
Call it exactly as shown — do NOT append 2>&1 or 2>/dev/null (the script manages its own stderr internally). Run in the foreground (no run_in_background) with a 3-minute timeout — reports can be large and the script runs the task + multiple nx commands:
npx tsx ${CLAUDE_SKILL_DIR}/scripts/gather-sandbox-context.ts <report.json or URL> [--filter <pattern>] [--workspace <path>]
Pass --filter when the user wants to focus on specific files or patterns. The script filters violations before all downstream processing (grouping, validation, classification), so the output only contains relevant data.
The script produces two outputs:
stdout (~3-5KB compact brief) — everything needed to start investigating:

- `summary`: violation counts (total, filtered, confirmed vs undeclared)
- `undeclaredFiles`: the actual file paths that are true violations
- `grouping`: violations grouped by directory and extension
- `commands`: processes with violations (pid, cmd, executable, arguments, counts) — no full file lists
- `classificationSummary`: counts per category (cross-project, build artifacts, config files, etc.)
- `crossProjectDependencyCheck`: whether cross-project file owners are in the task's dependency chain
- `staleDeclarations`: grouped analysis of expectedInputsNotRead / expectedOutputsNotWritten
- `dependentTasksOutputFiles`: extracted from target inputs config and named inputs — shows what dep output globs are declared (critical for cross-project violations)
- `executorInfo`: executor name and resolved source path in node_modules — read this file to understand how the tool is invoked
- `checkSample`: results of --check on up to 5 undeclared files (catches false positives early)
- `inference` + `pluginRegistration`: plugin metadata
- `verificationCommands`: pre-built --check commands with the correct task ref
- `detailFile`: path to the full detail JSON

detail file (/tmp/sandbox-diagnosis-detail-<project>-<target>.json) — full data for drill-down. Structure:

- `processTree.processTree`: array of {pid, cmd, parentPid} entries
- `processTree.processPidToCmd`: { "pid": "command string" } map
- `processTree.readsByPid`: { "pid": ["file1", "file2"] } — violated reads grouped by PID
- `processTree.writesByPid`: { "pid": ["file1", "file2"] } — violated writes grouped by PID
- `targetConfig`: full target configuration (executor, options, inputs, outputs, dependsOn)
- `projectConfig`: full project configuration
- `resolvedInputs`: { files: [...], depOutputs: [...], runtime: [...], environment: [...] }
- `resolvedOutputs`: { outputPaths: [...], expandedOutputs: [...] }
- `validation`: { reads: { confirmed: [...], undeclared: [...] }, writes: { ... } }
- `classification`: { reads: { crossProject, buildArtifacts, configFiles, ... }, writes: { ... } }

Read the brief output — it has everything to start. Use jq on the detail file only when you need to drill into specific sections. When querying the detail file, use the structure above — do not guess the schema. Do NOT use Python, ad-hoc scripts, or the Read tool on the detail file — only jq.
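A minimal sketch of a detail-file drill-down with jq, using a mock file in place of the real /tmp path (the PID and command here are made up for illustration, but the key paths follow the schema above):

```shell
# Build a tiny mock that mirrors the processTree section of the detail file
detail=$(mktemp)
cat > "$detail" <<'EOF'
{
  "processTree": {
    "processPidToCmd": { "1234": "node ./node_modules/.bin/eslint ." },
    "readsByPid": { "1234": ["tsconfig.base.json", "apps/app/tsconfig.json"] }
  }
}
EOF

# Which command does PID 1234 correspond to?
jq -r '.processTree.processPidToCmd["1234"]' "$detail"
# → node ./node_modules/.bin/eslint .

# All violated reads attributed to that PID, one per line
jq -r '.processTree.readsByPid["1234"][]' "$detail"
```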
For reports with many violations, use --filter to narrow scope. When investigating without a filter, use the grouping data to identify patterns and prioritize — don't try to trace every file individually.
If summary.undeclaredReads and summary.undeclaredWrites are both 0, all violations were resolved by the script's validation against resolved inputs/outputs. Report this to the user — no further investigation needed.
The commands array pre-parses each process — use executable and arguments to identify the tool without re-parsing cmd. When many files share the same root cause, group them under one finding using a glob pattern or count (e.g., "88 .d.ts files matching packages/nx/dist/**/*.d.ts").
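The grouping step can be sketched as a small helper. `summarizeByExtension` is a hypothetical name for illustration, not part of the script's output:

```javascript
// Collapse a long violated-file list into per-extension counts so one finding
// can cover e.g. "88 .d.ts files" instead of 88 separate entries.
function summarizeByExtension(files) {
  const counts = {};
  for (const file of files) {
    const base = file.split('/').pop();
    const dot = base.indexOf('.');
    // Take the full compound suffix, so "a.d.ts" groups as ".d.ts", not ".ts"
    const ext = dot >= 0 ? base.slice(dot) : '(no extension)';
    counts[ext] = (counts[ext] || 0) + 1;
  }
  return Object.entries(counts).map(([ext, n]) => `${n} ${ext} files`);
}

console.log(summarizeByExtension([
  'packages/nx/dist/a.d.ts',
  'packages/nx/dist/b.d.ts',
  'packages/nx/dist/index.js',
]));
// → [ '2 .d.ts files', '1 .js files' ]
```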
This is the most important phase. The goal is to determine with 100% certainty why each process reads or writes each violated file. Do not classify violations from file names or paths alone — trace the actual causal chain from command → config → file access.
The brief's commands array pre-parses each process. Use the executable and arguments fields directly — don't re-parse cmd. Identify:

- the tool being invoked (executable)
- how it is configured (arguments)

For each violated file, establish the exact causal chain that leads the command to read or write it. The approach is the same regardless of tool: start from the tool's configuration — includes, extends, presets, entry points, plugins.

Common causal patterns:

- config resolution chains (extends, eslint config chain, jest preset chain)
- tool-driven file discovery (.next/, eslint reading .d.ts alongside .ts)

For any tool, read its source code in node_modules to understand its file discovery behavior. Don't assume — trace the actual code.
You must be able to explain the full path: e.g., "eslint loads .eslintrc.json → configures @typescript-eslint/parser → parser resolves parserOptions.project → walks up to find tsconfig.json → reads it." If you can't trace the full path, keep investigating — do not guess.
When theoretical analysis is inconclusive, verify empirically. For difficult cases, instrument node_modules with interceptors to capture real stack traces. For example, patch fs.readFileSync in the tool's entry point to log stack traces when the violated file is accessed. A confirmed stack trace is worth more than multiple rounds of code reading.
Verify with --check. This step is mandatory — do not skip it. The script already runs --check on a sample of up to 5 undeclared files (see checkSample in the brief). Review those results first — if the sample files are confirmed as inputs/outputs, the corresponding violations are false positives.
For files not in the sample, use the pre-generated commands from verificationCommands in the brief:
npx nx show target inputs <project>:<target> --check <violated-read-files>
npx nx show target outputs <project>:<target> --check <violated-write-files>
If the commands fail because output files don't exist (e.g., the script's task run timed out), run the task first with verificationCommands.runTask.
If --check shows the file IS already an input/output, the violation is a false positive from the script's static analysis. If it confirms the file is NOT an input/output, proceed to classification.
With the causal chain established and the violation confirmed, classify into one of these categories:

1. Missing input/output (most common) — the process legitimately needs this file. Understand why — for example, the tool discovers related files on its own (reading .d.ts files while linting .ts). Still a legitimate access from the tool's perspective.
2. Bad tool configuration — the tool accesses a file it shouldn't because its scope is too broad. The fix is fixing the tool's config, NOT adding an input. Investigate overly broad invocation scope (eslint . instead of eslint src/) and missing ignore patterns.
3. Potential sandboxing gap (last resort) — the access is genuinely irrelevant to correctness (PID files, temp sockets, dev server logs that no task consumes). Only conclude this after exhausting categories 1 and 2.
For violations that aren't immediately obvious, investigate further:

- Identify the inferring plugin: inference.plugin in the brief output, or the metadata from nx show project --json
- Read the plugin's createNodesV2 implementation to understand inference logic
- Check include/exclude patterns in nx.json that should filter this project
- project.json, package.json, or nx.json targetDefaults may override plugin-inferred inputs, rendering plugin-level fixes invisible. Check all three before concluding a plugin fix is sufficient.
- Follow tool config chains: preset, extends, references, setupFiles, resolver, moduleNameMapper, transform, etc.
- Read dependsOn to understand the task dependency chain
- Check the dependentTasksOutputFiles glob pattern — is it too narrow? (e.g. `**/*.d.ts` missing .tsbuildinfo)

After diagnosing the root cause, determine scope: is the issue project-specific, or does it affect all projects using this tool or plugin?
You MUST present findings using the structured format below before proceeding to any implementation discussion. Do not use free-form narrative — the structure ensures completeness and makes findings reviewable.
Present findings grouped by category:
=== Sandbox Violation Diagnosis: {project}:{target} ===
## Summary
Unexpected reads: N total → M validated as declared → K true violations
Unexpected writes: N total → M validated as declared → K true violations
## Findings
### [MISSING INPUT] {short description}
Files: {file list or pattern}
Process: PID {pid} — {command}
Why: {why the process legitimately needs this file}
Scope: {project-specific or affects all projects using this tool/plugin}
Fix: {where/how to add the input declaration — consider both declarative (add input) and systemic (improve plugin inference) options}
### [MISSING OUTPUT] {short description}
Files: {file list or pattern}
Process: PID {pid} — {command}
Why: {why the process produces this file}
Scope: {project-specific or affects all projects using this tool/plugin}
Fix: {where/how to add the output declaration}
### [BAD TOOL CONFIG] {short description}
Files: {file list or pattern}
Process: PID {pid} — {command}
Why: {why the tool accesses files it shouldn't — config too broad, missing ignore, etc.}
Fix: {specific tool config change}
### [POTENTIAL SANDBOXING GAP] {short description}
Files: {file list or pattern}
Process: PID {pid} — {command}
Why: {why this access is irrelevant to correctness}
Evidence: {proof that categories 1-2 were exhausted}
### [INVESTIGATE] {short description}
Files: {file list or pattern}
Notes: {what's known, what needs more info}
Question: {what to ask the user or team}
## Stale Declarations
expectedInputsNotRead: {count and details if relevant}
expectedOutputsNotWritten: {count and details if relevant}
## Verification Plan
For each fix, provide the exact commands to verify:
1. Run the task so output files exist on disk: `npx nx <target> <project> --skip-nx-cache`
2. Check each violation file is now an input: `npx nx show target inputs <project>:<target> --check <space-separated files>`
3. For plugin-level fixes: build the plugin, patch node_modules, then verify with steps 1-2
When stuck, verify empirically: instrument node_modules, capture stack traces, run with debug flags. Wrong theories waste entire investigation rounds.

When the investigation is complex and requires parallel research, you can delegate to subagents. Follow this pattern:
- Point each subagent at the detail file so it can jq specific sections (process tree, resolved inputs, etc.) without re-running the script.
- Give each subagent one concrete question, e.g. "Why does eslint read tsconfig.base.json? Trace the full causal chain from the eslint config."
- Require verification with --check, don't guess from file names. Include these instructions in the subagent prompt.

For the sandbox report data model and field definitions, see references/data-model.md.