Reviews code against PROJECT_GUIDELINES.md and produces an actionable compliance report. Use this skill when the user asks to review code against guidelines, check code standards compliance, audit the project, run a style check, or says things like "review against guidelines", "check my code", "are we following our standards", "audit this file", or "guidelines check". Also triggers when the user asks for a code review in a project that has a PROJECT_GUIDELINES.md file.
Review code against the project's PROJECT_GUIDELINES.md and produce a compliance
report. Supports two modes (simple and full), auto-detected from the guidelines file.
$ARGUMENTS can be:
| Argument | Files to review |
|---|---|
| (empty) | All source files in the project |
| File path | Just that file |
| Directory path | All source files in that directory (recursive) |
| `--changed` | Files changed since the last commit (`git diff --name-only HEAD`) |
| `--staged` | Git-staged files only (`git diff --cached --name-only`) |
| `--pr` | Files changed in the current branch vs `main` (`git diff --name-only main...HEAD`; falls back to `master` if `main` doesn't exist) |
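The scope flags above map onto fixed git invocations. A minimal sketch of that mapping, assuming Python as the implementation language (the helper names `git_cmd` and `normalize` are illustrative, not part of the command):

```python
def git_cmd(scope: str, base: str = "main") -> list[str]:
    """Return the git argv that lists files for a scope flag."""
    if scope == "--changed":
        return ["git", "diff", "--name-only", "HEAD"]
    if scope == "--staged":
        return ["git", "diff", "--cached", "--name-only"]
    if scope == "--pr":
        # Caller should retry with base="master" when "main" doesn't exist.
        return ["git", "diff", "--name-only", f"{base}...HEAD"]
    raise ValueError(f"unknown scope: {scope}")

def normalize(output: str) -> list[str]:
    """Normalize git output to forward-slash paths, dropping blank lines."""
    return [p.replace("\\", "/") for p in output.splitlines() if p.strip()]
```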
Locate the guidelines file. Look for `PROJECT_GUIDELINES.md` in the repo root, then `docs/PROJECT_GUIDELINES.md` and `.github/PROJECT_GUIDELINES.md`. If none is found, tell the user: "No PROJECT_GUIDELINES.md found. Run `/guidelines-init` first to set up your project guidelines, or point me to your guidelines file." Then stop.
Detect the mode. If `<!-- mode: simple -->` is found → simple mode. If `<!-- mode: full -->` is found → full mode. Otherwise, scan for `<!-- severity: ... -->` tags: if any are found → full mode; if none → simple mode.
Based on $ARGUMENTS, collect the files to review.
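The mode auto-detection above is a short precedence check. A minimal sketch (the function name `detect_mode` is illustrative):

```python
import re

def detect_mode(guidelines: str) -> str:
    """Auto-detect simple vs full mode from the guidelines file text."""
    if "<!-- mode: simple -->" in guidelines:
        return "simple"
    if "<!-- mode: full -->" in guidelines:
        return "full"
    # No explicit mode marker: any severity tag implies full mode.
    return "full" if re.search(r"<!--\s*severity:", guidelines) else "simple"
```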
For --changed, --staged, and --pr, use Bash to run the appropriate git command.
Normalize all file paths to forward slashes.
For full-project or directory scopes, use Glob to collect source files.
Always exclude: node_modules, .git, dist, build, vendor, __pycache__,
.next, coverage, `*.min.*`, `*.map`, lockfiles (`*.lock`), generated code,
binary files, and images.
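The exclusion list above can be sketched as a simple path filter, assuming forward-slash paths. The image extensions shown are an illustrative subset, and `in_scope` is a hypothetical helper name:

```python
import fnmatch

EXCLUDE_DIRS = {"node_modules", ".git", "dist", "build", "vendor",
                "__pycache__", ".next", "coverage"}
EXCLUDE_GLOBS = ["*.min.*", "*.lock", "*.map",
                 "*.png", "*.jpg", "*.gif", "*.svg"]  # illustrative image subset

def in_scope(path: str) -> bool:
    """True if a forward-slash path survives the exclusion list."""
    parts = path.split("/")
    if any(part in EXCLUDE_DIRS for part in parts):
        return False
    return not any(fnmatch.fnmatch(parts[-1], g) for g in EXCLUDE_GLOBS)
```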
Simple mode applies to guidelines files with `<!-- mode: simple -->` or no severity tags.
Read the ## Rules section. Each bullet is a rule. No severity parsing needed.
Every rule has equal weight — a rule is a rule.
Read each file in scope and check it against the flat rule list.
For large scopes, delegate to the reviewer subagent with simple-mode input (rules list + file batch). Split into batches of ~20 files.
Produce a clean, direct report:
# Guidelines Review
**Scope:** [what was reviewed]
**Files scanned:** [count]
## Issues Found
### [Rule text]
- `src/api/handler.ts:42` — [what is wrong] — [how to fix]
- `src/api/router.ts:18` — [what is wrong] — [how to fix]
### [Rule text]
- `src/utils/db.ts:7` — [what is wrong] — [how to fix]
## Clean
[List of rules with no violations, or "All other rules passed."]
No summary table. No severity grouping. No pass/warn/fail columns. Just issues grouped by rule, with file:line and a fix for each.
If no issues are found, say so clearly:
# Guidelines Review
**Scope:** [what was reviewed]
**Files scanned:** [count]
All files pass all rules.
After presenting the report, offer: "Save this report to `docs/reviews/YYYY-MM-DD.md`?"
Full mode applies to guidelines files with `<!-- mode: full -->` or `<!-- severity: ... -->` tags.
Parse each section heading and look for a severity comment on the following line. The format is `<!-- severity: (error|warning|info) -->`; accept both `<!-- severity:error -->` and `<!-- severity: error -->`. Sections without a tag default to `warning`. Build a severity map keyed by heading path, e.g. `{ "Code Style > Naming": "warning", "Security": "error", ... }`.
For scopes of 50 files or fewer: review files directly (inline).
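The severity-map construction can be sketched as follows, assuming `##`/`###` headings and the `warning` default described above (`severity_map` is an illustrative helper name):

```python
import re

def severity_map(md: str) -> dict[str, str]:
    """Map heading paths ("Code Style > Naming") to severities.

    Sections without a severity tag default to "warning".
    """
    sev_re = re.compile(r"<!--\s*severity:\s*(error|warning|info)\s*-->")
    lines = md.splitlines()
    result, stack = {}, []
    for i, line in enumerate(lines):
        m = re.match(r"(#{2,})\s+(.+)", line)
        if not m:
            continue
        depth = len(m.group(1)) - 2          # "##" -> 0, "###" -> 1
        stack = stack[:depth] + [m.group(2).strip()]
        nxt = lines[i + 1] if i + 1 < len(lines) else ""
        tag = sev_re.search(nxt)
        result[" > ".join(stack)] = tag.group(1) if tag else "warning"
    return result
```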
For scopes of 51+ files: Delegate to the reviewer subagent for parallel processing:
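The batch split for subagent delegation is a plain fixed-size chunking. A sketch, reusing the ~20-file batch size from simple mode as an assumption:

```python
def batches(files: list[str], size: int = 20) -> list[list[str]]:
    """Split the in-scope file list into fixed-size batches for parallel review."""
    return [files[i:i + size] for i in range(0, len(files), size)]
```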
Per-category review heuristics:
When reviewing (inline or via subagent), check these patterns per category:
Code Style (Naming, Formatting, Imports)
Architecture
Error Handling
- Empty catch blocks: `catch {}`, `catch (e) {}` without handling, `except: pass`
- `console.log` / `print` used instead of a proper logging framework
- Unhandled promises: missing `await` or `.catch()`

Testing
Git Workflow
Documentation
Security
- Hardcoded secrets and API keys (prefixes like `sk-`, `ghp_`, etc.)
- `.env` files that should be gitignored
- Injection sinks: `eval()`, `innerHTML`, SQL string concatenation, `Function()`, `child_process.exec` with string interpolation

Performance
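A few of the security heuristics lend themselves to simple line scans. A minimal sketch (the regexes are illustrative starting points, not a complete detector, and `scan` is a hypothetical helper):

```python
import re

RISKY = {
    "hardcoded secret": re.compile(r"\b(?:sk-|ghp_)[A-Za-z0-9]{8,}"),
    "eval": re.compile(r"\beval\s*\("),
    "innerHTML assignment": re.compile(r"\.innerHTML\s*="),
}

def scan(path: str, text: str) -> list[tuple[str, str]]:
    """Return (file:line, rule) pairs for risky patterns, for the report."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for rule, rx in RISKY.items():
            if rx.search(line):
                hits.append((f"{path}:{lineno}", rule))
    return hits
```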
Produce a structured report:
# Guidelines Review Report
**Scope:** [what was reviewed]
**Date:** [today]
**Guidelines version:** [date from PROJECT_GUIDELINES.md header]
**Files scanned:** [count]
## Summary
| Category | Pass | Warn | Fail |
|---|---|---|---|
| Code Style | 12 | 3 | 1 |
| Architecture | 5 | 0 | 0 |
| ... | ... | ... | ... |
## Violations (must fix)
### [Category]: [Specific guideline]
**Severity:** error
**Files:** `src/api/handler.ts:42`, `src/api/router.ts:18`
**Guideline:** [quote the specific rule from PROJECT_GUIDELINES.md]
**Issue:** [what's wrong]
**Fix:** [concrete suggestion with corrected code]
---
## Warnings (should fix)
[same format as violations]
---
## Suggestions (nice to have)
[same format, for severity: info findings]
---
## Compliant
[brief list of categories and rules that passed cleanly]
## Recommendations
[patterns noticed that aren't covered by current guidelines — suggest additions
to PROJECT_GUIDELINES.md]
After presenting the report, offer: "Save this report to `docs/reviews/YYYY-MM-DD.md`?"
Always cite `file:line`, quote the rule, and suggest a fix.
Never say "some files have issues" without naming them. Don't inflate `info`
items to sound like errors, and don't downplay `error` items: the guideline
author chose the severity deliberately.