Use when encountering any bug, test failure, unexpected behavior, build failure, or performance problem — before proposing fixes. Also use when a previous fix attempt didn't work, when under time pressure with a technical issue, or when you've tried multiple approaches without success. This skill enforces root-cause investigation before any fix attempt. If you're about to suggest a code change to fix something and haven't completed Phase 1, stop and use this skill first.
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
Core principle: Find root cause before attempting fixes. Symptom fixes are failure.
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
If you haven't completed Phase 1, you cannot propose fixes.
Complete each phase before proceeding to the next.
BEFORE attempting ANY fix:
1. Read Error Messages Carefully
2. Reproduce Consistently
3. Check Recent Changes
4. Gather Evidence in Multi-Component Systems
When the system has multiple components (CI → build → signing, API → service → database), add diagnostic instrumentation before proposing fixes:
For EACH component boundary:
- Log what data enters the component
- Log what data exits the component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify the failing component
THEN investigate that specific component
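The boundary logging above can be sketched as follows. The component names and the `logBoundary` helper are hypothetical, standing in for whatever pipeline you are actually debugging:

```typescript
// Illustrative two-stage pipeline; logBoundary is a throwaway diagnostic helper,
// not part of any framework.
function logBoundary(component: string, direction: 'in' | 'out', data: unknown): void {
  // console.error so the output survives even when a logger is suppressed
  console.error(`[${component}] ${direction}:`, JSON.stringify(data));
}

function parseConfig(raw: string): { target: string } {
  logBoundary('parseConfig', 'in', raw);
  const parsed = { target: raw.trim() };
  logBoundary('parseConfig', 'out', parsed);
  return parsed;
}

function buildArtifact(config: { target: string }): string {
  logBoundary('buildArtifact', 'in', config);
  const artifact = `built:${config.target}`;
  logBoundary('buildArtifact', 'out', artifact);
  return artifact;
}

// A single run shows the boundary where bad data first appears.
buildArtifact(parseConfig('  prod \n'));
```

Reading the in/out pairs top to bottom pinpoints the first component whose output no longer matches expectations; that component is where investigation continues.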
5. Trace Data Flow (Root Cause Tracing)
When the error is deep in the call stack, trace backward to find the original trigger:
When you can't trace manually, add instrumentation:
```typescript
// Before the problematic operation
const stack = new Error().stack;
console.error('DEBUG operation:', {
  inputValue,
  cwd: process.cwd(),
  stack,
});
```
Use console.error() in tests (not a logger, whose output may be suppressed). Log before the dangerous operation, not after it fails.
The key principle: Never fix where the error appears. Trace back to find the original trigger.
1. Find Working Examples
2. Compare Against References
3. Identify Differences
4. Understand Dependencies
1. Form Single Hypothesis
2. Test Minimally
3. Verify Before Continuing
4. When You Don't Know
1. Create Failing Test Case
If the project uses TDD mode, invoke the test-driven-development skill for writing proper failing tests. Otherwise, create the simplest possible automated reproduction.
2. Implement Single Fix
3. Verify Fix
4. If Fix Doesn't Work
5. If 3+ Fixes Failed: Question Architecture
A recurring pattern, where each fix reveals a new failure somewhere else, indicates an architectural problem rather than a series of isolated bugs. Stop and question fundamentals: is the underlying approach or abstraction itself wrong? Discuss with the user before attempting more fixes. This is not a failed hypothesis; it is a wrong architecture.
6. Apply Defense-in-Depth
After fixing the root cause, add validation at EVERY layer the data passes through. A single validation layer can be bypassed by different code paths, by refactoring, or by mocks.
| Layer | Purpose | Example |
|---|---|---|
| Entry Point | Reject invalid input at API boundary | Validate not empty, exists, correct type |
| Business Logic | Ensure data makes sense for this operation | Domain-specific invariants |
| Environment Guards | Prevent dangerous operations in specific contexts | Refuse destructive ops outside tmpdir in tests |
| Debug Instrumentation | Capture context for forensics | Log with stack trace before risky operations |
All four layers are necessary — during testing, each layer catches bugs the others miss. Different code paths bypass entry validation, mocks bypass business logic, edge cases need environment guards.
When debugging flaky tests that use arbitrary delays, replace timeouts with condition polling:
```typescript
// BAD: Guessing at timing
await new Promise(r => setTimeout(r, 50));

// GOOD: Waiting for the actual condition
await waitFor(() => getResult() !== undefined, 'result to be set');
```
Generic polling function:
```typescript
async function waitFor<T>(
  condition: () => T | undefined | null | false,
  description: string,
  timeoutMs = 5000
): Promise<T> {
  const startTime = Date.now();
  while (true) {
    const result = condition();
    if (result) return result;
    if (Date.now() - startTime > timeoutMs) {
      throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
    }
    await new Promise(r => setTimeout(r, 10));
  }
}
```
Arbitrary timeouts are ONLY correct when testing actual timing behavior (debounce, throttle). Always document WHY with a comment.
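For the legitimate case, here is a sketch of a documented arbitrary timeout. The `debounce` implementation is illustrative; the point is the comment justifying the wait:

```typescript
// Trailing-edge debounce: fn runs once, ms after the last call in a burst.
function debounce(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
}

async function main(): Promise<number> {
  let calls = 0;
  const bump = debounce(() => { calls++; }, 30);
  bump();
  bump();
  bump();
  // Deliberate arbitrary wait: timing IS the behavior under test.
  // 50ms exceeds the 30ms debounce window, so exactly one trailing
  // call must have fired by now.
  await new Promise(r => setTimeout(r, 50));
  return calls;
}

main().then(calls => console.log(calls)); // prints 1
```

Without the comment, the next reader will "fix" the 50ms wait into condition polling and break the test's intent.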
If you catch yourself thinking any of these, return to Phase 1:
| Excuse | Reality |
|---|---|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms != understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
| Phase | Key Activities | Done When |
|---|---|---|
| 1. Root Cause | Read errors, reproduce, check changes, gather evidence, trace data flow | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare, identify differences | Know what's different |
| 3. Hypothesis | Form theory, test minimally | Confirmed or new hypothesis |
| 4. Implementation | Create test, fix root cause, verify, add defense-in-depth | Bug resolved, tests pass, layers validated |