Use when writing, reviewing, or committing code to enforce Karpathy's 4 coding principles — surface assumptions before coding, keep it simple, make surgical changes, define verifiable goals. Triggers on "review my diff", "check complexity", "am I overcomplicating this", "karpathy check", "before I commit", or any code quality concern where the LLM might be overcoding.
Derived from Andrej Karpathy's observations on LLM coding pitfalls. This is not just guidelines — it ships Python tools that detect violations, a review agent, a slash command, and a pre-commit hook.
"The models make wrong assumptions on your behalf and just run along with them without checking. They don't manage their confusion, don't seek clarifications, don't surface inconsistencies, don't present tradeoffs, don't push back when they should."
"They really like to overcomplicate code and APIs, bloat abstractions, don't clean up dead code... implement a bloated construction over 1000 lines when 100 would do."
"LLMs are exceptionally good at looping until they meet specific goals... Don't tell it what to do, give it success criteria and watch it go."
— Andrej Karpathy
**1. Surface assumptions.** Don't assume. Don't hide confusion. Surface tradeoffs.
**2. Keep it simple.** Minimum code that solves the problem. Nothing speculative.
The test: would a senior engineer say this is overcomplicated? If yes, simplify.
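As an illustration of that test, here is a hypothetical before/after; the names (`RetryPolicy`, `retry`) are invented for this sketch, not part of the shipped tools:

```python
# Overcomplicated: a speculative abstraction for a one-off need.
class RetryPolicy:
    def __init__(self, max_attempts=3, backoff_factory=None):
        self.max_attempts = max_attempts
        self.backoff_factory = backoff_factory or (lambda attempt: 0)

    def run(self, fn, *args, **kwargs):
        for attempt in range(self.max_attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == self.max_attempts - 1:
                    raise

# Minimal: the same behavior, one small function, no unused knobs.
def retry(fn, attempts=3):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
```

The class version adds a constructor, a config surface, and a `backoff_factory` nobody asked for; the function does the job.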
**3. Make surgical changes.** Touch only what you must. Clean up only your own mess.
The test: every changed line should trace directly to the user's request.
**4. Verifiable goals.** Define success criteria. Loop until verified.
| Instead of... | Transform to... |
|---|---|
| "Add validation" | "Write tests for invalid inputs, then make them pass" |
| "Fix the bug" | "Write a test that reproduces it, then make it pass" |
| "Refactor X" | "Ensure tests pass before and after" |
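For example, "fix the bug" as a verifiable goal might look like the sketch below; `slugify` and its bug are hypothetical:

```python
# "Fix the bug" becomes: reproduce it with a test, then make the test pass.

def slugify(title):
    # Fixed implementation: lowercase, collapse whitespace to hyphens.
    return "-".join(title.lower().split())

# The reproducing test, written first. It failed against the buggy version
# (which, say, skipped the lowercasing) and passes now.
def test_slugify_lowercases():
    assert slugify("Hello  World") == "hello-world"

test_slugify_lowercases()
```

The test is the success criterion: the task is done exactly when it passes, with no judgment call required.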
For multi-step tasks, state a brief plan:
1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
/karpathy-check — Run the full 4-principle review on your staged changes.
All tools live in `scripts/` and are stdlib-only. Run any of them with `--help`.
| Script | What it detects |
|---|---|
| `complexity_checker.py` | Over-engineering: too many classes, deep nesting, high cyclomatic complexity, unused params, premature abstractions |
| `diff_surgeon.py` | Diff noise: lines that don't trace to the stated goal — comment changes, style drift, drive-by refactors |
| `assumption_linter.py` | Hidden assumptions in a plan: unasked features, missing clarifications, silent interpretation choices |
| `goal_verifier.py` | Weak success criteria: vague plans without verifiable checks, missing test assertions |
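To make the detection concrete, here is a minimal sketch of one check a tool like `complexity_checker.py` might perform (flagging excessive nesting depth via the stdlib `ast` module). The threshold and heuristic are assumptions for illustration, not the shipped script's actual logic:

```python
import ast

MAX_DEPTH = 3  # assumed threshold for this sketch
NESTING = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def max_nesting(node, depth=0):
    # Walk the tree, incrementing depth at each nesting construct.
    worst = depth
    for child in ast.iter_child_nodes(node):
        d = depth + 1 if isinstance(child, NESTING) else depth
        worst = max(worst, max_nesting(child, d))
    return worst

def check_source(source):
    # Return one warning per function whose nesting exceeds the threshold.
    tree = ast.parse(source)
    return [
        f"{fn.name}: nesting depth {d} exceeds {MAX_DEPTH}"
        for fn in ast.walk(tree)
        if isinstance(fn, (ast.FunctionDef, ast.AsyncFunctionDef))
        for d in [max_nesting(fn)]
        if d > MAX_DEPTH
    ]
```

Stdlib-only, like the shipped tools, so it runs anywhere Python does.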
`karpathy-reviewer` — runs all 4 principles against a diff. Dispatched by `/karpathy-check` or manually before committing.
`hooks/karpathy-gate.sh` — runs `complexity_checker.py` and `diff_surgeon.py` on staged files. Warns (non-blocking) when violations are found. Wire it via `.claude/settings.json` or Husky.
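The gate's logic can be sketched in Python for illustration. The checkers' CLI shape (one file per invocation, nonzero exit on violations) is an assumption here; check each script's `--help` for the real interface:

```python
import subprocess
import sys

def staged_python_files():
    # Ask git for staged (added/copied/modified) Python files.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def gate(files, checkers=("scripts/complexity_checker.py",
                          "scripts/diff_surgeon.py")):
    warnings = []
    for checker in checkers:
        for f in files:
            result = subprocess.run(
                [sys.executable, checker, f],
                capture_output=True, text=True,
            )
            if result.returncode != 0:
                warnings.append(f"{checker}: {f}\n{result.stdout}")
    # Non-blocking, like the shipped hook: print warnings, never fail.
    for w in warnings:
        print(w)
    return warnings
```

A pre-commit wrapper would call `gate(staged_python_files())` and always exit 0, so violations warn without blocking the commit.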
`references/karpathy-principles.md` — the source quotes, deeper context, when to relax each principle
`references/anti-patterns.md` — 10+ before/after examples across Python, TypeScript, and shell
`references/enforcement-patterns.md` — how to wire hooks, CI integration, team adoption

These principles bias toward caution over speed. For trivial tasks (typo fixes, obvious one-liners), use judgment; the principles matter most on non-trivial, multi-file work.
Installs as a plugin for Claude Code. For other tools, copy the principles into your agent rules file:
| Tool | Rules file |
|---|---|
| Claude Code | CLAUDE.md (auto-loaded by plugin) |
| Codex CLI | AGENTS.md |
| Cursor | AGENTS.md or .cursorrules |
| Antigravity / OpenCode / Gemini CLI | AGENTS.md |
Related skills:
`self-eval` — honest quality scoring after completing work
`code-reviewer` — broader code review; karpathy-coder focuses on the 4 LLM-specific pitfalls
`llm-wiki` — compound knowledge; karpathy-coder ensures you don't overcomplicate while building it