Use when auditing installed skills and commands for quality. Supports Quick Scan (changed skills only) and Full Stocktake modes with sequential subagent batch evaluation.
Workflow that audits installed skills and commands using a quality checklist plus AI holistic judgment. Supports two modes: Quick Scan for recently changed skills, and Full Stocktake for a complete review.
Use this skill when you want a quality audit of skills/commands, especially after adding new skills, changing command behavior, or before release.
The workflow targets the following paths relative to the directory where it is invoked:
| Path | Description |
|---|---|
| `<config>/skills/` | Global skills (all projects). `<config>` is the active tool config root. |
| `{cwd}/.<tool>/skills/` | Project-level skills, if the active tool supports repo-local skills and the directory exists. |
At the start of Phase 1, the workflow explicitly lists which paths were found and scanned.
To include project-level skills, run from that project's root directory:
```shell
cd ~/path/to/my-project
# then run the tool surface that invokes skill-stocktake
```
If the project has no repo-local skill directory for the active tool, only global skills and commands are evaluated.
| Mode | Trigger | Duration |
|---|---|---|
| Quick Scan | results.json exists (default) | 5–10 min |
| Full Stocktake | results.json absent, or explicit full-run invocation | 20–30 min |
Results cache: <config>/skills/skill-stocktake/results.json
Re-evaluate only skills that have changed since the last run (5–10 min).
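Conceptually, the change detection works like this (an illustrative shape, not the actual quick-diff.js implementation; the cache layout mirrors the results format shown later):

```javascript
// Compare current file mtimes against the cached results to find changed skills.
// An empty return array means "no changes since last run".
function changedSkills(cache, currentMtimes) {
  const changed = [];
  for (const [name, mtime] of Object.entries(currentMtimes)) {
    const cached = cache.skills[name];
    if (!cached || cached.mtime !== mtime) changed.push(name); // new or modified
  }
  return changed;
}
```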
Quick Scan steps:

1. Run the diff script against the results cache:

   ```shell
   node <config>/skills/skill-stocktake/scripts/quick-diff.js <config>/skills/skill-stocktake/results.json
   ```

   (Project dir is auto-detected from supported repo-local skill directories; override via SKILL_STOCKTAKE_PROJECT_DIR if needed.)
2. If the script outputs `[]`: report "No changes since last run." and stop.
3. Otherwise, re-evaluate only the changed skills, then save:

   ```shell
   node <config>/skills/skill-stocktake/scripts/save-results.js <config>/skills/skill-stocktake/results.json
   ```

   with EVAL_RESULTS on stdin.

Full Stocktake:

Run:

```shell
node <config>/skills/skill-stocktake/scripts/scan.js
```
The script enumerates skill files, extracts frontmatter, and collects UTC mtimes. Project dir is auto-detected from the active tool's repo-local skill directory; pass it explicitly only if needed. Present the scan summary and inventory table from the script output:
```
Scanning:
✓ <config>/skills/ (17 files)
✗ {cwd}/.cursor/skills/ (not found — global skills only)
```
| Skill | 7d use | 30d use | Description |
|---|---|---|---|
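The frontmatter extraction the scan performs can be sketched as a simplified parser (the real scan.js may handle more cases):

```javascript
// Extract YAML-style frontmatter fields (e.g. description) from a SKILL.md body.
// Returns an empty object when no frontmatter block is present.
function parseFrontmatter(text) {
  const m = text.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return {};
  const fields = {};
  for (const line of m[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return fields;
}
```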
Launch a subagent with the full inventory and checklist. The subagent reads each skill, applies the checklist, and returns per-skill JSON:
```json
{ "verdict": "Keep"|"Improve"|"Update"|"Retire"|"Merge into [X]", "reason": "..." }
```
Chunk guidance: Process ~20 skills per subagent invocation to keep context manageable. Save intermediate results to results.json (status: "in_progress") after each chunk.
After all skills are evaluated: set status: "completed", proceed to Phase 3.
Resume detection: If status: "in_progress" is found on startup, resume from the first unevaluated skill.
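The chunking and resume logic above can be sketched as (names are illustrative, not the shipped implementation):

```javascript
// Split the inventory into batches of ~20 for sequential subagent evaluation,
// skipping skills already evaluated in an in-progress results.json.
function nextBatches(allSkills, results, chunkSize = 20) {
  const pending = allSkills.filter(
    (name) => !(results.skills && results.skills[name])
  );
  const batches = [];
  for (let i = 0; i < pending.length; i += chunkSize) {
    batches.push(pending.slice(i, i + chunkSize));
  }
  return batches;
}
```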
Each skill is evaluated against this checklist:
- [ ] Content overlap with other skills checked
- [ ] Overlap with durable instruction surfaces checked
- [ ] Freshness of technical references verified (use WebSearch if tool names / CLI flags / APIs are present)
- [ ] Usage frequency considered
Verdict criteria:
| Verdict | Meaning |
|---|---|
| Keep | Useful and current |
| Improve | Worth keeping, but specific improvements needed |
| Update | Referenced technology is outdated (verify with WebSearch) |
| Retire | Low quality, stale, or cost-asymmetric |
| Merge into [X] | Substantial overlap with another skill; name the merge target |
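Subagent output can be sanity-checked against the verdict vocabulary above (a hypothetical validator, not part of the shipped scripts):

```javascript
// Accepts the five verdict forms, including "Merge into [X]" with a named target.
function isValidVerdict(v) {
  return /^(Keep|Improve|Update|Retire|Merge into \[.+\])$/.test(v);
}
```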
Evaluation is holistic AI judgment, not a numeric rubric; the checklist above guides the dimensions considered.
Reason quality requirements — the reason field must be self-contained and decision-enabling:
Instead of a bare label, write the full decision context:

| Bare label | Self-contained reason |
|---|---|
| "Superseded" | "disable-model-invocation: true already set; superseded by ai-learning which covers all the same patterns plus confidence scoring. No unique content remains." |
| "Overlaps with X" | "42-line thin content; Step 4 of chatlog-to-article already covers the same workflow. Integrate the 'article angle' tip as a note in that skill." |
| "Too long" | "276 lines; Section 'Framework Comparison' (L80–140) duplicates ai-era-architecture-principles; delete it to reach ~150 lines." |
| "Unchanged" | "mtime updated but content unchanged. Unique Python reference explicitly imported by rules/python/; no overlap found." |

Report table format:

| Skill | 7d use | Verdict | Reason |
|---|---|---|---|
Results are written to <config>/skills/skill-stocktake/results.json:
evaluated_at: Must be set to the actual UTC time of evaluation completion.
The save-results.js script sets this automatically. Never use a date-only approximation like T00:00:00Z.
```json
{
  "evaluated_at": "2026-02-21T10:00:00Z",
  "mode": "full",
  "batch_progress": {
    "total": 80,
    "evaluated": 80,
    "status": "completed"
  },
  "skills": {
    "skill-name": {
      "path": "<config>/skills/skill-name/SKILL.md",
      "verdict": "Keep",
      "reason": "Concrete, actionable, unique value for X workflow",
      "mtime": "2026-01-15T08:30:00Z"
    }
  }
}
```
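The timestamp guarantee save-results.js must uphold can be sketched as (illustrative; the real script also merges batch results):

```javascript
// Stamp the results object with the actual completion time in UTC,
// never a date-only approximation like "2026-02-21T00:00:00Z".
function stampResults(results, now = new Date()) {
  return { ...results, evaluated_at: now.toISOString().replace(/\.\d{3}Z$/, "Z") };
}
```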