Audits all Markdown (*.md) documentation files in the project for consistency, duplication, broken links, and contradictions. Checks single-source-of-truth violations, verifies relative links and anchors, reconciles contradictions between files, and fixes all findings in place. Iterates until a full pass produces zero fixable findings.
Perform a full consistency audit of the project documentation, fix all findings, and iterate until the documentation is clean.
By default this skill audits the entire repository. If the user explicitly names specific files, directories, or glob patterns, restrict the run to those files for the entire run, and apply the same exclusions (.git/, tooling dirs, etc.) as in a full run.

File Roles is a reference sidebar — not a procedural step. Consult it during Step 1b when mapping files to roles and during Step 2 when deciding where information belongs.
Assign each file's role from its name and location using the signals below. Where name and location are ambiguous, the catch-all row applies; content-based signals (opening section, structured schema) are confirmed during Step 1c when the file is actually read:
| Signal | Likely role |
|---|---|
| Named README.md at repo root | Entry point — developer orientation, prerequisites, index of other docs |
| Named CLAUDE.md at repo root | AI agent context — information exclusively useful for AI agents operating in this repo; links out to other docs rather than duplicating them |
| In a directory named runbooks/, how-to/, guides/, playbooks/, or similar | Operational guide — task-oriented, one topic per file |
| In a directory named docs/, documentation/, references/, architecture/, or similar (but not a runbooks subdirectory) | Reference file — single source of truth for a specific topic; not a step-by-step guide |
| Follows a structured schema (ADR status/context/decision fields, RFC template, CHANGELOG sections, etc.) | Skill-governed file — see Step 1c |
| Sidecar diagram file (.mmd, .mermaid, .puml, .plantuml, .svg, .drawio) | Inherits the role of its referencing .md file. If referenced by multiple .md files with different roles, use the role of the most authoritative referencing file (apply the role hierarchy: reference file > entry point > AI agent context > operational guide) |
| Anywhere else, name suggests a specific topic | Treat as a reference file until content proves otherwise |
When multiple signals match: skill-governed takes precedence over all other roles. A
CHANGELOG.md in a docs/ directory matches both "reference file" and "skill-governed" —
treat it as skill-governed. All other signal conflicts use the catch-all row.
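The name-and-location signals above can be sketched as a small lookup. This is a minimal illustration under assumptions: the function name and return strings are placeholders, and skill-governed detection is deliberately omitted because it requires reading file content (Step 1c):

```python
from pathlib import PurePosixPath

# Directory-name signals taken from the role table above.
GUIDE_DIRS = {"runbooks", "how-to", "guides", "playbooks"}
REFERENCE_DIRS = {"docs", "documentation", "references", "architecture"}

def assign_role(path: str) -> str:
    """Map a repo-relative path to a likely role from name and location only."""
    p = PurePosixPath(path)
    parent_dirs = set(p.parts[:-1])
    if p.name == "README.md" and len(p.parts) == 1:
        return "entry point"
    if p.name == "CLAUDE.md" and len(p.parts) == 1:
        return "agent context"
    # Guide directories win over reference directories, so a file in
    # docs/runbooks/ is an operational guide, not a reference file.
    if parent_dirs & GUIDE_DIRS:
        return "operational guide"
    if parent_dirs & REFERENCE_DIRS:
        return "reference file"
    # Catch-all row: treat as a reference file until content proves otherwise.
    return "reference file (provisional)"
```

Precedence of the guide check over the reference check encodes the "but not a runbooks subdirectory" caveat from the table.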
Entry points (README.md) orient new readers. They should link to other files, not duplicate their content.

Agent context files (CLAUDE.md) contain information that is exclusively useful to an AI agent — machine-readable inventories, agent-specific rules, env var names needed for code generation. They should not contain general information that developers also need.

When it is unclear which file should own a piece of information, apply these rules in order:
1. Is it exclusively useful to an AI agent and not to a developer? If yes → CLAUDE.md (or the project's equivalent agent context file). Examples: machine-readable resource inventories, env var names an agent needs to generate code, agent-specific rules with no operational meaning for humans. If no → continue to rule 2.
2. What is its nature? Place it in the most appropriate existing file: step-by-step procedures in an operational guide, topic information in a reference file, orientation and indexing in README.md.
3. Does the same information appear in the agent context file AND another file? Remove it from the agent context file and replace with a link — unless the content is genuinely agent-specific (rule 1 above). General information does not belong in the agent context file even if an agent also needs to read it.
Use Glob with **/*.md to get the full list of markdown files in the repository, then exclude:

- node_modules/, vendor/, build/, and similar generated output folders (e.g. apps-rendered/, dist/, out/)
- .git/; also read .gitignore if present and exclude any directories it names — they are untracked by design and not authored documentation content
- .claude/, .agents/ (and all their descendants), and any other hidden directories at the repo root that contain skill, agent, or tool definitions rather than project documentation

Do not assume a fixed directory structure — projects vary. Run this glob fresh on every pass.

The surviving set of .md files is the working list for this pass. It grows during Step 1c as sidecar diagram files are discovered.
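The glob-and-exclude step can be sketched as follows. This is a minimal illustration, not the skill's actual tooling: the exclusion set is the one listed above and would be extended with whatever .gitignore names in the target repo:

```python
from pathlib import Path

# Directory names excluded on every pass. Extend with .gitignore entries.
EXCLUDED_DIRS = {
    "node_modules", "vendor", "build", "dist", "out", "apps-rendered",
    ".git", ".claude", ".agents",
}

def working_list(repo_root: str) -> list[Path]:
    """Return every *.md file outside the excluded directories."""
    root = Path(repo_root)
    files = []
    for path in root.rglob("*.md"):
        # Skip the file if any ancestor directory (relative to the
        # repo root) is in the exclusion set.
        if any(part in EXCLUDED_DIRS for part in path.relative_to(root).parts[:-1]):
            continue
        files.append(path)
    return sorted(files)
```

Running this fresh on every pass, rather than caching the first result, is what lets the working list shrink or grow as fixes delete or create files.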
Before checking consistency, assign a role to each file using the File Roles reference above.
Role assignment based on name and location is done here. Reading happens in Step 1c — see the
note there about reading CLAUDE.md before other files.
If a file's role cannot be determined from name and location alone, mark it as ambiguous — finalize its role during Step 1c after reading its content.
If a file's name or directory suggests it follows a structured schema (e.g. files in an adr/
or decisions/ directory, files named CHANGELOG.md), flag it as potentially skill-governed.
Complete steps 1–2 of skill-governed detection during Step 1c after reading the file.
The delegation protocol (assess scope, fix at boundary, note in report) runs in Step 3 when
each finding is processed.
If CLAUDE.md exists in the repository, read it before any other file — its conventions inform
role mapping for all other files. This applies even in scoped runs where CLAUDE.md falls
outside the scope: read it as context but do not fix it and do not add it to the working list.
In a scoped run, read other out-of-scope files on demand — only when a specific finding requires context from a referenced external file, not speculatively upfront. When reading an out-of-scope file, do not fix it and do not collect its diagram references (see Scope section).
Read every remaining file in the working list completely, skipping CLAUDE.md if it was
already read above. As you read each file, perform the following:
Collect sidecar diagram references. For every link or image reference whose target has a
textual diagram extension (.mmd, .mermaid, .puml, .plantuml, .svg, .drawio):
Textual diagram formats and the content to extract for terminology checks:
- Mermaid (.mmd, .mermaid) and fenced ```mermaid blocks: node names, edge labels, annotations
- PlantUML (.puml, .plantuml) and fenced ```plantuml blocks: element names, labels, notes
- SVG (.svg): <text>, <title>, <desc>, aria labels
- draw.io (.drawio): cell labels and tooltip attributes in the XML

Fenced script blocks (```bash, ```sh, ```zsh, ```shell, ```powershell, ```fish) contain executable commands. Note their contents for contradiction checks in Step 2 — commands and tool invocations must not contradict documented procedures or tooling choices. Do not apply terminology drift checks to their contents.
Fenced data/config blocks (```yaml, ```json, ```toml, ```xml, ```csv)
and all other non-diagram, non-script language tags are not documentation content. Skip them
entirely.
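Pulling label text out of an SVG sidecar for the terminology checks can be sketched with the standard library. A minimal sketch, assuming well-formed XML; a parse failure here is exactly the malformed-diagram case handled next:

```python
import xml.etree.ElementTree as ET

def svg_labels(svg_source: str) -> list[str]:
    """Extract <text>, <title>, <desc> content and aria-label attributes.

    Raises xml.etree.ElementTree.ParseError on malformed XML, which the
    audit reports as a finding rather than attempting to fix.
    """
    root = ET.fromstring(svg_source)
    wanted = {"text", "title", "desc"}
    labels = []
    for el in root.iter():
        # Strip the namespace: {http://www.w3.org/2000/svg}text -> text.
        tag = el.tag.rsplit("}", 1)[-1]
        if tag in wanted and el.text and el.text.strip():
            labels.append(el.text.strip())
        if "aria-label" in el.attrib:
            labels.append(el.attrib["aria-label"])
    return labels
```

The extracted strings feed the Step 2 terminology check alongside the prose of the referencing .md files.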
Handle malformed diagram files. If a sidecar diagram file exists but its content cannot be
parsed (e.g. malformed XML in a .drawio or .svg file), report it as a finding per the
report-only path in Step 3. Do not attempt to fix its content or extract terminology from it.
Handle binary image references. For every link or image reference whose target is a binary
image file (.png, .jpg, .jpeg, .gif), note its path for the existence check in Step 2.
Do not read binary image content.
Handle unknown or missing extensions. If a referenced file has no extension or an unrecognized extension, treat it as binary: note its path for an existence check only.
Check front matter. If a file contains YAML front matter or metadata comments — including
sidecar diagram files that carry such metadata — treat those fields (name, description,
type, etc.) as content subject to consistency checks. Verify they are not stale or mismatched
relative to the file's actual content.
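Extracting the front matter fields for that staleness check can be sketched as below. Deliberately minimal and an assumption about shape: it handles only flat `key: value` pairs, where a real pass would use a YAML parser:

```python
import re

# A front matter block is a --- fence at the very top, closed by ---.
FRONT_MATTER = re.compile(r"\A---\s*\n(.*?)\n---\s*\n", re.DOTALL)

def front_matter_fields(markdown: str) -> dict[str, str]:
    """Parse top-level key: value pairs from a YAML front matter block."""
    match = FRONT_MATTER.match(markdown)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        # Skip nested/indented structures and comments in this sketch.
        if ":" in line and not line.startswith((" ", "\t", "#")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

A staleness check would then compare, say, a `title` or `type` field against the file's first heading and assigned role.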
Complete skill-governed detection. For any file flagged as potentially skill-governed in Step 1b, or any file whose content reveals a structured schema upon reading:

1. Confirm the schema: look for a status field (e.g. Status: Accepted), fixed section headings (## Context, ## Decision, ## Consequences), or a machine-readable front matter block.
2. Search .claude/skills/ and .agents/ for a skill whose name or description matches the detected format (e.g. architecture-decision-record). These directories are excluded from the documentation audit in Step 1a, but you must still read them here to identify whether an owning skill exists.

Record the owning skill (if found) against the file. Delegation of findings to the owning skill happens in Step 3 when each finding is processed.
Check across all files for the following patterns. When deciding where information belongs, apply the "Where Does Information Belong?" rules in the File Roles section above.
Duplication — the same information stated in full in more than one file. Fix: Keep the information in the most authoritative file. Replace the duplicate with a markdown link: See [filename.md](relative/path/to/filename.md) for … Route the fix through Step 3: a short self-contained duplicate (a single sentence, a single bullet, or a brief definition) is a silent fix; anything larger or structural (a section, a table, a multi-step procedure) is a structural fix.
Broken links — check every link whose target lives in the repository: .md files, sidecar diagram files, and binary image files. Flag:

- Relative paths that do not resolve from the linking file's directory (docs/runbooks/foo.md must use ../ to reach docs/bar.md).
- Anchors where #section-name does not match any heading in the target file.
- Malformed diagram targets (e.g. unparseable XML in a .drawio or .svg file).
- An absolute URL (https://...) that is an obvious placeholder (e.g. https://example.com) or is explicitly noted as broken or moved in another file. Do not speculatively check live external URLs — flag only what can be determined from the documentation itself.

Fix: Correct relative path and anchor issues. Verify by tracing from the file's directory. Malformed diagram files and absolute URL issues are report-only — do not attempt to fix them.
Contradictions — one file asserts what another file denies (e.g. a snippet invokes yarn when the docs mandate npm, or shows a deprecated command that another file explicitly replaces). Fix: Route through Step 3 based on scope.
Note: Do not cross-check documentation against application source code — that is out of scope for this skill. Only compare documentation files against each other.
Misplacement — information living in a file whose role does not cover it (see "Where Does Information Belong?" above). Fix: Move the content to the correct file and replace the moved block with a link. This is always a structural fix — announce before applying.
Terminology drift — the same concept named or notated inconsistently across files (e.g. a placeholder written in one style in one file and as <angle-bracket> in another). Fix: Pick the canonical term used by the most authoritative file and update all others to match, including diagram labels and annotations. Updating terms in-place is a silent fix. If fixing terminology requires moving or restructuring content, use the structural fix path.
Orphaned files — files in the working list with no inbound link from any other file. Exempt from this check (never flagged as orphaned):

- README.md and CLAUDE.md at the repo root — reachable by convention.
- Sidecar diagram files with at least one inbound .md reference — that reference is their inbound link.

Fix: Do not auto-fix. Report the orphaned file per the report-only path in Step 3.
For every finding, state the affected file(s) and line reference (where applicable — file-level findings such as orphaned files or malformed diagrams have no specific line) and describe the issue (duplication / broken link / contradiction / misplacement / terminology / orphan), then act according to one of the four paths below.
If a finding matches more than one category, apply the fix path that resolves the most issues in a single action: structural always beats silent; among structural fixes, prefer the action that eliminates the most violations (e.g. moving misplaced content resolves both misplacement and duplication at once).
Before routing, check whether the finding affects a skill-governed file identified in Step 1c. If so, use the delegated findings path first.
When a finding falls within the scope of a skill-governed file's owning skill:
Read the owning skill's SKILL.md to confirm the finding (e.g. a missing required section, a malformed status field, an outdated superseded-by reference) is within what that skill is designed to handle. If yes, delegate rather than fix directly.

Report-only findings: State the file, the issue, and the user's options. Do not make any edit.
Silent fixes: State the file, line (where applicable), and issue, then apply immediately:

Structural fixes: State what you are about to do, then apply immediately — do not wait for the user to reply, but make the action visible:
Update all linking files in the same pass as the fix.
After completing a full pass and applying all fixes, restart from Step 1. Re-read all files from disk (do not rely on memory of the previous read — edits may have introduced new issues). Repeat until a complete pass produces zero fixable findings.
Suppressing duplicate reports: Track all report-only findings already reported (orphaned files, version contradictions, malformed diagrams, absolute URL issues). On passes 2 and beyond, do not re-report a finding that was already reported in a prior pass — unless it is newly introduced as a side effect of edits made in the previous pass (e.g. the only inbound link to a file was replaced, making it newly orphaned).
Delegated findings across passes: Do not suppress delegated findings — re-delegate on each pass if the finding still exists. If a delegated finding persists after the owning skill has been invoked on three separate passes without resolving it, stop delegating and escalate to the user as a report-only finding with a note that delegation did not resolve it.
Termination guard: If pass 4 still finds fixable issues, stop iterating. Report the remaining findings to the user with a note that they require human judgment to resolve — this signals a structural conflict (e.g. two files that each have a legitimate claim to owning the same information) that the skill cannot resolve unambiguously on its own.
After every pass (including the terminal one), output a single summary line:
Pass N — X fixed, Y reported for user action, across Z files. (Z = number of distinct files that had at least one finding, including sidecar diagram files. Use "0 fixed" for a clean pass; omit "reported" if there are none.)
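The summary-line rules (drop the "reported" clause when Y is zero, keep "0 fixed" for a clean pass) can be sketched as a formatter. A hypothetical helper name, shown only to pin down the format unambiguously:

```python
def summary_line(pass_n: int, fixed: int, reported: int, files: int) -> str:
    """Build the end-of-pass summary line in the mandated format."""
    line = f"Pass {pass_n} — {fixed} fixed"
    if reported:  # omit the clause entirely when nothing was reported
        line += f", {reported} reported for user action"
    return line + f", across {files} files."
```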
A pass with 0 fixable findings is the signal to stop.
Never read the content of binary image files (.png, .jpg, .jpeg, .gif) or files with unknown extensions — only verify they exist at the linked path.