/learn — Review, manage, and interact with the self-improvement playbook. Shows what Claude has learned, scored rules, capability frontier, pending signals, regressions, community rules. Learning happens automatically via behavioral protocol and mechanical hooks — this skill is for visibility, control, and contributing to collective intelligence.
Claude learns continuously via three layers, anchored by ~/.claude/rules/playbook.md (auto-loaded).

Two playbooks auto-load:

- ~/.claude/rules/playbook.md — your rules, your scores
- ~/.claude/rules/playbook-community.md — collective intelligence from all users

Read both playbooks and signal files. Present:
```
=== Playbook Status ===
Personal: N rules across N categories (N meta-rules, N workflows)
Community: N universal rules
Token usage: ~N / 5000
Proven rules: N (sessions >= 3)
Top 3 by score: [list]
Latest 3 added: [list]
Regressions: N rules declining (or "none")
Uncertainties: N tracked items
Frontier: N active experiments, N tested
Archive: N decayed rules
Pending signals: N unprocessed
Session context: [detected ctx tags]
```
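As a rough sketch of how a couple of the counts above might be derived — assuming, purely for illustration, that each rule is a `- ` bullet line in the playbook and that token usage is approximated as characters divided by 4; the real file format and accounting may differ:

```shell
# Hypothetical playbook excerpt; the real rule syntax in playbook.md may differ.
playbook=$(mktemp)
cat > "$playbook" <<'EOF'
- Always run tests before claiming a fix works (score 3.0, sessions 5)
- Prefer small, reviewable diffs (score 2.0, sessions 2)
- On CI failure, check for flaky tests first (score 2.5, sessions 4)
EOF

# Count bullet-style rules and approximate token usage (chars / 4)
rules=$(grep -c '^- ' "$playbook")
chars=$(wc -c < "$playbook")
tokens=$((chars / 4))
echo "Personal: $rules rules"
echo "Token usage: ~$tokens / 5000"
rm -f "$playbook"
```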
Group by source, draft rules for each, apply generalization + causal chain analysis. Check for second-order patterns (3+ rules with same shape → meta-rule).
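The 3+-same-shape check can be sketched mechanically. Assuming rule shapes have already been normalized to one tag per line (a hypothetical intermediate format, not something the plugin actually writes), counting duplicates surfaces meta-rule candidates:

```shell
# Hypothetical normalized rule "shapes", one per line; real tagging may differ.
shapes=$(mktemp)
cat > "$shapes" <<'EOF'
verify-before-claiming
verify-before-claiming
prefer-small-steps
verify-before-claiming
prefer-small-steps
EOF

# A shape seen 3+ times is a meta-rule candidate
candidates=$(sort "$shapes" | uniq -c | awk '$1 >= 3 {print $2}')
echo "meta-rule candidates: $candidates"
rm -f "$shapes"
```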
Check for uncaptured learnings, rule violations, regression patterns, workflow chain opportunities, and negative space items.
What would you like to do?
1. Review all rules (by category, score, or context)
2. Add a learning manually (score 2.0)
3. Remove or adjust a rule
4. Review capability frontier + add experiments
5. View workflows (linked rule chains)
6. View uncertainty tracker
7. View community playbook
8. Contribute proven rules to community
9. View archived rules (restore one?)
10. Force pruning pass
11. Meta-learning analysis
12. Export playbook
13. Done
/learn — full review
/learn status — quick stats
/learn add "<rule>" — manually add (score 2.0)
/learn frontier — capability frontier
/learn workflows — view/manage linked rule chains
/learn uncertainties — things you don't know
/learn community — view community playbook + status
/learn contribute — contribute proven rules to collective intelligence (see below)
/learn regress — view regression alerts
/learn prune — force pruning + merge similar
/learn meta — meta-learning analysis (velocity, blindspots, second-order patterns)
/learn export — shareable format
/learn signals — raw hook signals
/learn reset — archive everything, fresh start

/learn contribute — Collective Intelligence

This is how the community playbook grows. When invoked:
1. Scan your personal playbook for rules meeting the contribution criteria.
2. Present the candidates: "These N rules qualify for community contribution."
3. The user picks which rules to contribute (or all).
4. Read ~/.claude/rules/playbook-community.md and check each community rule against your personal playbook.
This validation report is how community scores increase. When 3+ users independently validate a rule, the maintainer bumps its template score from 1.0 to 2.0. At 5+ validators → 3.0.
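The score-bump schedule above can be expressed as a small sketch (the function name is illustrative, not part of the plugin; thresholds are those stated in the text — 3+ validators → 2.0, 5+ → 3.0):

```shell
# Map independent validator count to community template score
community_score() {
  if [ "$1" -ge 5 ]; then echo "3.0"
  elif [ "$1" -ge 3 ]; then echo "2.0"
  else echo "1.0"
  fi
}

community_score 2   # -> 1.0 (template default, not yet validated by enough users)
community_score 4   # -> 2.0
community_score 6   # -> 3.0
```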
For each new rule:
For validation reports on existing rules:
Create a GitHub issue on the claude-learn repo:

```shell
gh issue create --repo OutcomeFocusAi/claude-learn \
  --title "Community contribution: [N] new rules + [M] validations" \
  --body "[formatted new rules + validation report on existing rules]"
```
The issue includes two sections: the formatted new rules and the validation report on existing rules.
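A sketch of assembling that two-section body before filing the issue — the section headings and bullet fields here are illustrative placeholders, not a format the maintainer has specified, and the final `gh` call is shown commented out since it requires auth and network access:

```shell
# Illustrative contents only; the exact section format expected
# by the maintainer may differ.
new_rules='- When X fails, check Y before retrying (score 2.5, sessions 4)'
validations='- "Run linters before tests": confirmed in 3 of my sessions'

body="## New rules
$new_rules

## Validations of existing community rules
$validations"

echo "$body"
# gh issue create --repo OutcomeFocusAi/claude-learn \
#   --title "Community contribution: 1 new rule + 1 validation" \
#   --body "$body"
```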
The maintainer reviews, deduplicates new rules, aggregates validation counts, bumps community scores where warranted, and merges. Next plugin update delivers changes to all users.
For users with repo write access:
```shell
# Edit templates/playbook-community.md directly
# Add rules to the "Universal Rules" section
# Submit PR
gh pr create --title "Add N community rules" --body "[rules + evidence]"
```
Community rules live in templates/playbook-community.md and are delivered via claude plugin update claude-learn@outcomefocusai into ~/.claude/rules/.