Use when learnings feel bloated, before sprints, after /retro, or when rules need a health check. Scores entries by relevance, archives stale ones, and detects knowledge gaps. Works on agent learnings or project rules. Keywords: curate, learnings, rules, optimize, relevance, prune, archive, gap, score, triage, audit.
You are running curate — scoring entries by relevance to upcoming tasks, flagging stale or redundant items, and detecting gaps. Works in two modes:
- /curate <agent-name>: optimize an agent's learnings, archive stale entries, fill gaps from archive or cross-agent sources
- /curate rules: audit project rule files for relevance, redundancy, and passive context budget

Target: $ARGUMENTS
- /retro adds new entries; curate prunes stale entries to stay under the 60-line cap
- /tend orchestrates the full lifecycle: curate runs first, promote reads the output

Detect artifact type (learnings or rules)
-> Load entries + upcoming work + cross-references
-> Score each entry (relevance × freshness × scope)
-> Map dimensions to actions (keep / archive / review)
-> Detect gaps (upcoming work domains vs. coverage)
-> [learnings] Pull candidates from archive or cross-agent sources
-> [rules] Compute passive context budget
-> Emit pipe-format report
-> [learnings] (Optional) Write back to learnings.md and archive.md
-> [rules] Write review manifest to .claude/tackline/memory/scratch/
If $ARGUMENTS equals rules (case-insensitive): run in rules mode.
Otherwise: run in learnings mode.
If $ARGUMENTS is empty:
- Run ls .claude/tackline/memory/agents/ to list available agents.

If $ARGUMENTS is provided, check:
- .claude/tackline/memory/agents/<name>/learnings.md exists
- .claude/team.yaml exists and lists this agent
- .claude/tackline/memory/agents/<name>/archive.md exists (needed for Phase 4)
- .claude/team.yaml exists (used in Phase 1d to read the agent definition)

In rules mode, confirm rule files exist:
ls rules/*.md .claude/rules/*.md 2>/dev/null
Learnings mode (bootstrap): Skip this step. The target has no learnings.md. Note that the starting entry count is 0. Continue to Phase 1b.
Learnings mode (normal): Read .claude/tackline/memory/agents/<name>/learnings.md. Note:
- each entry's provenance annotation: (added: YYYY-MM-DD, dispatch: <source>)

Rules mode: Read all rule files from both locations:
ls rules/*.md .claude/rules/*.md 2>/dev/null
For each rule file, note:
- its location (global rules/ vs project-local .claude/rules/)
- its title (the # heading)

Also read CLAUDE.md and note any overlap between CLAUDE.md content and rule files.
Gather the upcoming task signal from whichever sources are available. Check your project's task tracker for ready and in-progress work, and check for epic state files:
ls .claude/tackline/memory/epics/*/epic.md 2>/dev/null
Read any epic state files found. Extract:
If no task tracker is available and no epics exist, use git signals:
git log --oneline -10
git diff --name-only HEAD~5..HEAD 2>/dev/null
Recent commit activity approximates upcoming work focus areas.
Learnings mode: Read all rule files to detect overlap with learnings:
ls rules/*.md .claude/rules/*.md 2>/dev/null
Read each rule file. Read CLAUDE.md. Build a list of topics already covered passively so learnings that duplicate them can be flagged as PASSIVE.
Rules mode: Read agent learnings to find PASSIVE rules (rules whose content has been internalized by agents):
ls .claude/tackline/memory/agents/*/learnings.md 2>/dev/null
For each rule, check whether 3+ agents have learnings entries that restate the rule's constraints. A rule internalized by 3+ agents is a candidate for PASSIVE — the knowledge is already distributed and the rule may be adding redundant passive context cost.
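As a rough sketch, the internalization check can be approximated with keyword overlap. The paths match this repo's layout; the keyword heuristic and function name are assumptions, a crude stand-in for the semantic judgment the curator actually makes:

```python
import glob

def internalizing_agents(rule_keywords):
    """Count agents whose learnings restate a rule's constraints.

    rule_keywords: lowercase terms that characterize the rule.
    An agent counts as 'internalizing' when any single entry line
    mentions all of the keywords (keyword overlap is an assumption,
    not the real semantic check).
    """
    agents = set()
    for path in glob.glob(".claude/tackline/memory/agents/*/learnings.md"):
        agent = path.split("/")[-2]
        with open(path) as f:
            for line in f:
                text = line.lower()
                if all(kw in text for kw in rule_keywords):
                    agents.add(agent)
                    break
    return agents

# A rule is a PASSIVE candidate when 3+ agents have internalized it:
# passive = len(internalizing_agents(["conventional", "commit"])) >= 3
```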
Skip this step in rules mode.
If .claude/team.yaml exists, read it and extract the target agent's entry:
- role: what the agent is responsible for
- owns: file glob patterns defining ownership
- model: complexity context

If team.yaml does not exist, skip this step.
If docs/domains.md exists, read it. The domain index maps work areas to their relevant rules, learnings, and skills. Use it to match upcoming work areas against existing coverage and sharpen gap detection.
If docs/domains.md does not exist, skip this step.
In bootstrap mode: Skip this entire phase. There are no existing entries to score. Proceed directly to Phase 3.
For each entry in the primary artifact, evaluate three independent dimensions. The composite of these dimensions drives the action (keep, archive, review).
| Dimension | Values | What It Measures |
|---|---|---|
| Relevance | high / medium / low / passive | Connection to upcoming work |
| Freshness | fresh / aging / stale | Time since last confirmed or modified |
| Scope | agent / project / global | Breadth of applicability |
| Value | Meaning |
|---|---|
| high | Directly relates to files, domains, or tools in upcoming tasks |
| medium | Related to the general area of upcoming work but not specific tasks |
| low | No connection to upcoming work — relevant in general but not now |
| passive | Already covered by another artifact (rule covers learning, or rule duplicates another rule) |
Learnings heuristics:
Rules heuristics:
Check the entry's provenance date or the file's last git modification:
| Value | Learnings | Rules |
|---|---|---|
| fresh | added: or dispatch: date within 14 days | freshness: frontmatter date within 30 days, or modified in git within 30 days |
| aging | 14-60 days old | 30-90 days since freshness date or last git modification |
| stale | >60 days old with no recent references | >90 days since freshness date and no git activity |
For learnings without added: dates, check git blame for the line's last modification date. For rules with freshness: frontmatter, use that date as the baseline.
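The bucketing above reduces to simple date arithmetic. A minimal sketch with the learnings thresholds as defaults (function name and shape are illustrative, not part of the spec):

```python
from datetime import date

def freshness(provenance_date, today=None, fresh_days=14, aging_days=60):
    """Bucket an entry by the age of its provenance date.

    Learnings use the default (14, 60) day thresholds; rules use
    (30, 90). provenance_date comes from the entry's added: or
    dispatch: annotation, a freshness: frontmatter field, or the
    last git modification date.
    """
    today = today or date.today()
    age = (today - provenance_date).days
    if age <= fresh_days:
        return "fresh"
    if age <= aging_days:
        return "aging"
    return "stale"
```

For example, an entry added 20 days ago scores aging under the learnings thresholds but would still be fresh under the rules thresholds.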
| Value | Meaning |
|---|---|
| agent | Applies only to one specific agent's workflow or owned files |
| project | Applies across agents within this project |
| global | Applies to any Claude Code project |
For learnings, scope informs /promote — entries with project or global scope and high relevance in 2+ agents are strong promotion candidates. For rules, scope validates placement — global rules belong in rules/, project rules in .claude/rules/.
The action for each entry is derived from its dimension scores:
Learnings mode:
| Action | Criteria |
|---|---|
| KEEP | relevance: high or medium (regardless of freshness) |
| KEEP | relevance: low AND freshness: fresh (recently added, give it time) |
| ARCHIVE | relevance: low AND freshness: aging or stale |
| ARCHIVE | relevance: passive (regardless of freshness) |
Rules mode:
| Action | Criteria |
|---|---|
| KEEP | relevance: high or medium (regardless of freshness) |
| KEEP | relevance: low AND freshness: fresh (recently added rule, give it time) |
| REVIEW | relevance: low AND freshness: aging or stale |
| REVIEW | relevance: passive (regardless of freshness) |
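Both decision tables reduce to the same rule, differing only in the demotion action. A sketch (names are illustrative):

```python
def action(relevance, freshness, mode="learnings"):
    """Map scored dimensions to an action, mirroring the tables:
    high/medium relevance always keeps; low relevance survives only
    while fresh; low+aging/stale and passive entries are archived
    (learnings) or flagged for review (rules, never auto-removed).
    """
    demote = "ARCHIVE" if mode == "learnings" else "REVIEW"
    if relevance in ("high", "medium"):
        return "KEEP"
    if relevance == "low" and freshness == "fresh":
        return "KEEP"
    return demote
```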
While scoring, note entries that reference similar concepts to other entries (in the same file or across agents). For each such pair, add a related: annotation in the output. These links improve knowledge discovery and feed /promote's cross-agent detection. Example: an entry about "frontmatter fields" in agent A relates to "skill YAML conventions" in agent B.
Compare the file scope and domain areas from Phase 1b against the topics covered by entries scored HIGH or MEDIUM.
In bootstrap mode: All domain areas are uncovered by definition (0 entries). Mark every upcoming work domain as GAP and proceed to steps 3c and 3d to find fill candidates.
In normal mode: For each distinct domain area in upcoming work, note whether any HIGH or MEDIUM entry covers it:
Domain area: skill authoring
Covered by: entry about frontmatter fields (HIGH), entry about phase structure (MEDIUM)
Status: COVERED
Domain area: hook scripting
Covered by: (none)
Status: GAP
In bootstrap mode: All domain areas from upcoming work are gaps. No filtering needed — proceed directly to 3c and 3d to source fill candidates for every domain.
In normal mode: A gap is a domain area where:
Learnings mode: Gaps mean the agent will operate in this area without relevant learnings loaded.
Rules mode: Gaps mean upcoming work domains lack guardrail rules. Check whether agent learnings in gap areas have promote potential (same pattern in 2+ agents) — these are rule candidates surfaced through gap analysis.
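The coverage comparison above is essentially a set difference. A sketch, assuming entries have already been scored and tagged with the domain areas they cover (the tuple shape is an assumption):

```python
def find_gaps(upcoming_domains, entries):
    """Return domain areas with no HIGH or MEDIUM entry covering them.

    entries: list of (title, relevance, covered_domains) tuples.
    In bootstrap mode entries is empty, so every domain is a gap.
    """
    covered = set()
    for _title, relevance, domains in entries:
        if relevance in ("high", "medium"):
            covered.update(domains)
    return [d for d in upcoming_domains if d not in covered]
```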
Skip this step in rules mode. Rules are not archived — they are reviewed and manually maintained.
If .claude/tackline/memory/agents/<name>/archive.md exists, read it. Scan for entries that match gap areas. These are archive candidates — entries that were previously archived but are now relevant again.
Skip this step in rules mode.
Scan other agents' learnings files for entries relevant to the gap areas:
ls .claude/tackline/memory/agents/*/learnings.md 2>/dev/null
Read each file that is not the target agent's. For each gap, check whether another agent has an entry covering it. Cross-agent candidates are flagged for potential pull-in. Note the source agent name — this is also relevant for /promote (patterns across agents indicate rule candidacy).
In bootstrap mode: Also scan all other agents' archive.md files for entries relevant to the target agent's role and owns patterns from team.yaml. Filter candidates by relevance to the new agent's responsibilities — use the agent's role description and owns glob patterns as the relevance filter. Do not limit to gaps that have no other fill candidate; in bootstrap mode, multiple candidates per domain area are desirable to give the user choice.
Emit in pipe format per rules/pipe-format.md. The format depends on artifact type.
## Curated Learnings: <agent-name>
**Source**: /curate
**Input**: <agent-name>
**Pipeline**: (none — working from direct input)
### Items (N)
1. **KEEP: <entry title or first clause>** — <full entry text>
- relevance: high | medium
- freshness: fresh | aging | stale
- scope: agent | project | global
- reason: <why this relevance — reference to specific upcoming task or file>
- source: .claude/tackline/memory/agents/<name>/learnings.md
- related: <comma-separated list of related entry titles, if any>
2. **ARCHIVE: <entry title>** — <full entry text>
- relevance: low | passive
- freshness: aging | stale
- scope: agent | project | global
- reason: <why archived — no upcoming tasks, or covered by rule: filename>
- source: .claude/tackline/memory/agents/<name>/learnings.md
3. **ADD (from archive): <entry title>** — <entry text>
- relevance: high | medium
- freshness: <from original added date>
- scope: agent | project | global
- reason: <gap area this fills>
- source: .claude/tackline/memory/agents/<name>/archive.md
4. **ADD (cross-agent): <entry title>** — <entry text>
- relevance: high | medium
- freshness: <from original added date>
- scope: project | global
- reason: <gap area this fills; note if same entry exists in 2+ agents — promote candidate>
- source: .claude/tackline/memory/agents/<other-name>/learnings.md
- cross-agent: true
- related: <comma-separated list of related entry titles from any agent, if detected>
### Gaps (M)
For each gap with no fill candidate:
- **GAP: <domain area>** — upcoming tasks touch <files/patterns> but no entry covers this area. Consider adding a learning after the next dispatch in this area.
### Summary
<One paragraph: relevance distribution (how many high/medium/low/passive), freshness distribution (fresh/aging/stale), scope distribution (agent/project/global), how many gaps found, fill candidates from archive or cross-agent sources, and promote signals (cross-agent entries in 2+ agents with project/global scope). Note overall learnings health — is the file well-targeted or diffuse?>
Order items: ADD first (most valuable new context), then KEEP (sorted by relevance high before medium), then ARCHIVE (low before passive). This puts what changes at the top for easy review.
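This ordering can be expressed as a two-level sort key (the dict shapes here are assumptions about how items might be represented):

```python
ACTION_ORDER = {"ADD": 0, "KEEP": 1, "ARCHIVE": 2}
RELEVANCE_ORDER = {"high": 0, "medium": 1, "low": 2, "passive": 3}

def report_order(items):
    """Sort report items: ADD first, then KEEP (high before medium),
    then ARCHIVE (low before passive)."""
    return sorted(items, key=lambda i: (ACTION_ORDER[i["action"]],
                                        RELEVANCE_ORDER[i["relevance"]]))
```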
Use this format when bootstrap mode is active (no existing learnings.md):
## Bootstrap Learnings: <agent-name>
**Source**: /curate (bootstrap mode)
**Input**: <agent-name>
**Pipeline**: (none — working from direct input)
### Items (N)
1. **SEED: <entry title>** — <entry text>
- relevance: high | medium
- freshness: <from original added date>
- scope: agent | project | global
- reason: <gap area this fills — reference to agent role or upcoming work domain>
- source: .claude/tackline/memory/agents/<other-name>/learnings.md | .claude/tackline/memory/agents/<other-name>/archive.md
- cross-agent: true
- related: <comma-separated list of related entry titles, if any>
### Gaps (M)
For each domain area in upcoming work with no SEED candidate found:
- **GAP: <domain area>** — no cross-agent or archive entries found for this area. Add a learning after the first dispatch in this domain.
### Summary
<One paragraph: total SEED candidates found, source agents and archives consulted, domains covered vs. gaps remaining, how well the seeded entries match the agent's role and owns patterns, and any promote signals detected. Note this is a starting point — the agent should refine these entries after its first few dispatches.>
Order SEED items by relevance (high before medium), then by source (learnings.md before archive.md).
## Curated Rules: Project Rule Health
**Source**: /curate
**Input**: rules
**Pipeline**: (none — working from direct input)
### Items (N)
1. **KEEP: <rule title>** — <one-line summary of what the rule constrains>
- relevance: high | medium
- freshness: fresh | aging | stale
- scope: project | global
- reason: <covers upcoming work domain or known failure mode>
- source: <rules/filename.md or .claude/rules/filename.md>
- lines: <line count>
2. **REVIEW: <rule title>** — <summary>
- relevance: low | passive
- freshness: aging | stale
- scope: project | global
- reason: <no upcoming tasks in covered domain; or duplicated by / internalized by>
- source: <path>
- lines: <line count>
### Gaps (M)
For each gap:
- **GAP: <domain area>** — upcoming tasks touch <files/patterns> but no rule provides guardrails. [If agent learnings in this area appear in 2+ agents: "Promote candidate: pattern X found in agent1, agent2."]
### Passive Context Budget
| Metric | Value |
|--------|-------|
| Total rule files | <count> |
| Total lines (all rules) | <sum of all rule file line counts> |
| HIGH + MEDIUM lines | <sum> |
| LOW + PASSIVE lines | <sum> |
| Potential savings | <LOW + PASSIVE line count> lines across <count> files |
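The budget table follows mechanically from the per-rule scores and line counts. A sketch (the tuple shape is an assumption):

```python
def context_budget(rules):
    """Compute the passive context budget table.

    rules: list of (path, line_count, relevance) tuples, where
    relevance is one of high/medium/low/passive.
    """
    total = sum(n for _, n, _ in rules)
    low_passive = [(p, n) for p, n, r in rules if r in ("low", "passive")]
    savings = sum(n for _, n in low_passive)
    return {
        "total_files": len(rules),
        "total_lines": total,
        "high_medium_lines": total - savings,
        "low_passive_lines": savings,
        "savings_files": len(low_passive),
    }
```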
### Summary
<One paragraph: relevance distribution (high/medium/low/passive), freshness distribution (fresh/aging/stale), scope distribution (project/global), total passive context load in lines, potential savings from reviewing low-relevance or stale rules, any gaps where upcoming work lacks guardrails, and whether any gaps have promote candidates from agent learnings.>
Order items: KEEP (sorted by relevance high before medium), then REVIEW (low before passive). Rules mode uses REVIEW instead of ARCHIVE — rules are flagged for human review, never auto-removed.
After presenting the output, ask the user:
"Apply these changes to learnings.md and archive.md? (y/n)"
If the user approves:
5a. Update learnings.md
Write the updated file with:
When re-organizing:
5b. Update archive.md
Append each ARCHIVE entry to .claude/tackline/memory/agents/<name>/archive.md:
- <entry text> (archived: YYYY-MM-DD, reason: relevance low, freshness stale — no upcoming tasks in this area)
- <entry text> (archived: YYYY-MM-DD, reason: relevance passive — covered by rule: rules/commits.md)
If archive.md does not exist, create it with a header line:
# Archived Learnings: <agent-name>
5c. Verify
Read the updated learnings.md and confirm:
After presenting the bootstrap output, ask the user:
"Create .claude/tackline/memory/agents/<name>/learnings.md with these seeded entries? (y/n)"
If the user approves:
5a. Create learnings.md
Create .claude/tackline/memory/agents/<name>/learnings.md (and the directory if it does not exist) with:
# Learnings: <agent-name>
## Core
<All SEED entries classified as Core — high-reuse fundamentals applicable across most tasks>
## Task-Relevant
<All SEED entries classified as Task-Relevant — entries tied to current upcoming work scope>
Classify SEED entries into Core vs Task-Relevant using the same heuristic as normal mode:
Keep the total under 60 lines. If SEED candidates would exceed the cap, prioritize by relevance (HIGH before MEDIUM) and note what was deferred.
5b. Verify
Read the newly created learnings.md and confirm:
Never auto-delete or auto-modify rule files. Rules mode is report-only.
Write a review manifest to .claude/tackline/memory/scratch/rules-review.md:
# Rules Review Manifest
Generated by /curate rules on YYYY-MM-DD
## Review Checklist
- [ ] **REVIEW: <rule title>** (<path>, <lines> lines) — relevance: <low|passive>, freshness: <aging|stale> — <reason>
...
## Gaps to Address
- [ ] **GAP: <domain area>** — <description> [promote candidate: <yes/no>]
...
## Passive Context Budget
Total: <lines> lines across <count> files
Potential savings: <lines> lines if LOW+PASSIVE rules are consolidated or removed
This manifest is a checklist for human review, not an automation target.
Relevance scores against upcoming work, not general value. A learning can be deeply true and still have low relevance because nothing in the upcoming sprint will trigger it. That's fine — archive it and re-evaluate after the next sprint.
Passive relevance is not wrong — it's covered. Marking an entry passive is a compliment: the pattern graduated to a rule. Archive it without judgment.
Gaps are diagnostic, not failures. A gap means the agent will work without relevant learnings in that area. It does not mean the agent will fail. Flag it so a learning can be captured after the dispatch.
Cross-agent entries appearing in 2+ agents are promote signals. When the same pattern appears independently in multiple agents' learnings, it likely belongs in a rule. Flag these with cross-agent: true so /promote can find them.
Ask before writing. Never modify learnings.md or archive.md without explicit user approval. The pipe-format output is the primary artifact — the write-back is optional.
Cite specific rules for passive entries. When scoring relevance as passive, name the exact rule file. "covered by a rule" is vague; "covered by rule: rules/commits.md" is actionable.
Respect the 60-line cap. After write-back, learnings.md must fit within 30 Core + 30 Task-Relevant lines. If ADD candidates would push over the cap, prioritize by score (HIGH before MEDIUM) and note what was deferred.
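One way to sketch the cap check (list shapes are assumptions; kept is the lines already in the file after archiving, and the real cap is split 30 Core + 30 Task-Relevant):

```python
def fit_to_cap(kept, adds, cap=60):
    """Apply ADD candidates under the line cap.

    kept/adds: lists of (text, relevance) with relevance
    'high' or 'medium'. HIGH candidates are applied before
    MEDIUM; overflow is deferred and should be noted in the report.
    """
    budget = max(0, cap - len(kept))
    ordered = sorted(adds, key=lambda e: e[1] != "high")  # high first, stable
    return ordered[:budget], ordered[budget:]
```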
7b. Add related: cross-references. While scoring, annotate entries that reference similar concepts with related: links. These are optional metadata — omit when no meaningful relationship exists. Cross-references improve /promote's ability to detect cross-agent patterns and help agents discover related knowledge.
Rules mode is conservative. Flag rules for review, never delete or modify them automatically. REVIEW replaces ARCHIVE — rules are human-maintained artifacts with higher change cost than learnings.
PASSIVE for rules means dedup or internalization. A rule is PASSIVE when its constraints are either restated by another rule file (dedup) or when 3+ agents have internalized the pattern in their learnings (the knowledge is distributed). In both cases the rule's passive context cost no longer buys unique value.
Compose with /evolution for richer rule scoring. If a rule's churn or stability is unclear, run /evolution <rule-file> first. The stability signal from /evolution improves LOW vs MEDIUM scoring accuracy.
Suggest pattern coalescence. When related: cross-references reveal 3+ entries across 2+ agents that reference the same concept cluster, suggest creating a shared document in docs/patterns/<topic>.md. This is an organic growth path — heavily cross-referenced patterns naturally coalesce into shared documentation without mandating restructuring. Include the suggestion in the Summary section: "Pattern coalescence candidate: [topic] — [n] related entries across [agents]. Consider creating docs/patterns/[topic].md to consolidate shared knowledge." Only suggest, never auto-create.
Consult the domain index. If docs/domains.md exists, read it during Phase 1 to improve gap detection. The domain index maps work areas to their relevant rules and learnings — gaps in the index are gaps in knowledge coverage.
Bootstrap Mode. Bootstrap mode activates when the target agent has no learnings.md but IS listed in team.yaml. In this mode: scoring is skipped (nothing to score), gap detection treats all upcoming work domains as uncovered, and the output uses SEED actions instead of ADD/KEEP/ARCHIVE. Seeds are sourced from other agents' learnings.md and archive.md files, filtered by the new agent's role and owns patterns. Limitations: seeds are borrowed context — they may not fully match how the new agent will actually work. Treat them as a starting point; replace or refine entries after the first few dispatches.
See also: /tend (orchestrates curate + promote in sequence), /promote (graduates cross-agent patterns to rules), /retro (generates learnings that curate then optimizes), /sprint (dispatches agents whose learnings curate keeps sharp), /evolution (file change history for richer rule scoring), docs/domains.md (domain cross-reference index for knowledge discovery).