Show skill usage stats and codify frequently-used LLM skills into scripts
Read /home/hubt/.claude/skills/skill-tracker/counts.json (may not exist yet — treat as empty).
List all directories under /home/hubt/.claude/skills/.
For each skill directory, check whether it contains any executable files (.sh, .py, .go, compiled binaries) beyond SKILL.md — if so, classify it as Script-backed, otherwise LLM-prompt.
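The three discovery steps above can be sketched in Python. This is a minimal sketch, assuming the directory layout described here; the function names (`load_counts`, `classify`) are illustrative, not part of the skill:

```python
import json
import os
from pathlib import Path

# Assumed layout from the steps above.
SKILLS_DIR = Path.home() / ".claude" / "skills"
SCRIPT_EXTS = {".sh", ".py", ".go"}

def load_counts(path):
    """Return usage stats; a missing counts.json is treated as empty."""
    try:
        return json.loads(Path(path).read_text())
    except FileNotFoundError:
        return {}

def classify(skill_dir):
    """Script-backed if any script or executable exists beyond SKILL.md."""
    for entry in Path(skill_dir).iterdir():
        if entry.name == "SKILL.md" or not entry.is_file():
            continue
        if entry.suffix in SCRIPT_EXTS or os.access(entry, os.X_OK):
            return "Script-backed"
    return "LLM-prompt"
```

`classify` treats any non-SKILL.md file with a script extension, or with the executable bit set (covering compiled binaries), as evidence the skill is script-backed.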
Print a table:
| Skill | Uses | Avg Time | Tokens (in/out/cache) | Raw Cost | Net Cost | Type | Status |
|---|---|---|---|---|---|---|---|
| hello | 3 | 42s | 12k / 2k / 8k | $0.042 ($0.014/use) | $0.005 ($0.002/use) | LLM-prompt | Learning |
Column definitions:
- Avg Time: total_ms / uses, formatted as Xs (seconds), or Xm Ys for >60s.
- Tokens (in/out/cache): input_tokens / output_tokens / (cache_creation_tokens + cache_read_tokens), each shown as Xk rounded.
- Raw Cost: estimated_cost_usd, formatted as $0.000 ($0.000/use).
- Net Cost: estimated_cost_usd - overhead_cost_usd, formatted as $0.000 ($0.000/use). This is the cost attributable to the skill's LLM work, minus the baseline per-turn invocation overhead (cache reads from the system prompt).
- Status tiers (read thresholds from /home/hubt/.claude/skills/skill-tracker/config.json → thresholds; defaults: candidate=5, priority=10): Learning below the candidate threshold, Candidate at or above it, Priority at or above the priority threshold.
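The formatting rules for these columns can be sketched as small Python helpers. A minimal sketch, assuming the field names used above; the function names are illustrative:

```python
def fmt_avg(total_ms, uses):
    """total_ms / uses as Xs, or Xm Ys when over 60 seconds."""
    secs = total_ms / uses / 1000
    if secs > 60:
        return f"{int(secs // 60)}m {int(secs % 60)}s"
    return f"{round(secs)}s"

def fmt_tokens(inp, out, cache_creation, cache_read):
    """input / output / combined cache, each rounded to Xk."""
    k = lambda n: f"{round(n / 1000)}k"
    return f"{k(inp)} / {k(out)} / {k(cache_creation + cache_read)}"

def fmt_cost(total_usd, uses):
    """Total and per-use cost as $0.000 ($0.000/use)."""
    return f"${total_usd:.3f} (${total_usd / uses:.3f}/use)"

def status(uses, candidate=5, priority=10):
    """Tier from use count against the configured thresholds."""
    if uses >= priority:
        return "Priority"
    if uses >= candidate:
        return "Candidate"
    return "Learning"
```

With the defaults, the sample `hello` row above (3 uses, 126000 ms total, $0.042) formats to "42s", "$0.042 ($0.014/use)", and tier "Learning".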
Also sort by estimated_cost_usd descending so the most expensive skills surface first.
If any skills are Candidate or Priority and are still LLM-prompt type, list them as "Codification candidates", sorted by cost/use descending (highest ROI first), and ask the user if they'd like to codify one.
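The candidate filter and its cost-per-use ordering can be sketched as one function. A minimal sketch, assuming each table row is a dict with the keys shown (the row shape is an assumption, not defined by this skill):

```python
def codification_candidates(rows):
    """Candidate/Priority skills still LLM-prompt, highest cost/use first."""
    picks = [
        r for r in rows
        if r["status"] in ("Candidate", "Priority") and r["type"] == "LLM-prompt"
    ]
    return sorted(picks, key=lambda r: r["estimated_cost_usd"] / r["uses"], reverse=True)
```

Sorting by cost/use rather than total cost surfaces the skills where each codified invocation saves the most.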
When the user picks a skill to codify:
1. Read its SKILL.md and understand exactly what it does end-to-end.
2. Write a script that reproduces that behavior deterministically; for any step that genuinely requires LLM reasoning, shell out to claude -p as a subprocess.
3. If the implementation is in Go, compile it with go build -o <name> ./<name>.go.
4. Rewrite the SKILL.md instructions to simply invoke the compiled binary or script, passing $args through.
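As an illustration of the final step, after codifying a hypothetical hello skill into a compiled binary, its SKILL.md instructions might shrink to something like:

```markdown
Run the compiled binary and return its output verbatim:

    ./hello "$args"
```

The skill name and binary path here are hypothetical; the point is that the prompt no longer describes the logic, it only dispatches to the script.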