Memory vault management - create, search, classify, and index memories. Invoke for /learn command memory operations.
Direct execution skill for memory vault management. Handles memory creation, similarity search, classification, and index maintenance through content mapping, MCP-based deduplication, and three memory operations (UPDATE, EXTEND, CREATE).
MANDATORY INTERACTIVE REQUIREMENT -- DO NOT SKIP:
Reference (do not load eagerly):
@.memory/30-Templates/memory-template.md - Memory template
@.memory/20-Indices/index.md - Memory index
@.memory/memory-index.json - Machine-queryable memory index
@.claude/context/project/memory/learn-usage.md - Usage guide

| Mode | Input | Description |
|---|---|---|
| text | Text content | Add quoted text as memory |
| file | File path | Add single file content as memory |
| directory | Directory path | Scan directory for learnable content |
| task | Task number | Review task artifacts and create memories |
All non-task modes flow through: Content Mapping -> Memory Search -> Memory Operations
Content mapping is the intermediate representation between input acquisition and memory operations. It segments input into topic-aligned chunks that can be matched against existing memories.
{
  "source": {
    "type": "text|file|directory",
    "path": "/path/to/input",
    "total_tokens": 2500
  },
  "segments": [
    {
      "id": "seg-001",
      "topic": "neovim/plugins/telescope",
      "source_file": "/path/to/file.md",
      "source_lines": "15-42",
      "summary": "Telescope custom picker creation pattern",
      "estimated_tokens": 350,
      "key_terms": ["telescope", "picker", "finders", "sorters", "attach_mappings"]
    }
  ]
}
| Field | Type | Description |
|---|---|---|
| id | string | Unique segment identifier (seg-NNN) |
| topic | string | Inferred topic path (slash-separated hierarchy) |
| source_file | string | Original file path (for file/directory modes) |
| source_lines | string | Line range in source file (e.g., "15-42") |
| summary | string | 1-2 sentence summary of segment content |
| estimated_tokens | number | Approximate token count for this segment |
| key_terms | array | 3-5 significant terms for matching |
Split at heading boundaries:
1. Identify all headings (# ## ### ####)
2. Each heading starts a new segment
3. Segment includes all content until next same-or-higher level heading
4. Top-level content before first heading becomes its own segment
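The steps above can be sketched in Python. This is a simplified illustration that opens a new segment at every heading (the full rule also honors same-or-higher heading levels, which is omitted here for brevity); the function name is illustrative, not part of the skill:

```python
import re

def segment_markdown(text):
    """Split markdown into segments at heading boundaries.

    Content before the first heading becomes its own segment; each
    heading (# through ####) opens a new segment. Simplified sketch:
    heading levels are not compared, so every heading starts a segment.
    """
    segments = []
    current = []
    for line in text.splitlines():
        if re.match(r"^#{1,4} ", line):
            if current:
                segments.append("\n".join(current).strip())
            current = [line]
        else:
            current.append(line)
    if current:
        segments.append("\n".join(current).strip())
    return [s for s in segments if s]
```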
Split at blank-line-separated blocks:
1. Identify function/class definitions
2. Group related comments with their definitions
3. Separate standalone comment blocks as documentation segments
4. Keep import/require blocks together
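A minimal sketch of this block-splitting pass, assuming blank-line-separated blocks and `#`/`//` comment syntax (the tag names and heuristics are illustrative):

```python
def segment_code(source):
    """Split source at blank-line-separated blocks and tag each one.

    Comment-only blocks become documentation segments; blocks containing
    a def/class/import/from line are tagged as definitions. Sketch only:
    grouping a comment with the definition below it is not shown.
    """
    blocks = [b.strip() for b in source.split("\n\n") if b.strip()]
    tagged = []
    for block in blocks:
        lines = [l for l in block.splitlines() if l.strip()]
        if all(l.lstrip().startswith(("#", "//")) for l in lines):
            kind = "documentation"
        elif any(l.lstrip().startswith(("def ", "class ", "import ", "from "))
                 for l in lines):
            kind = "definition"
        else:
            kind = "other"
        tagged.append((kind, block))
    return tagged
```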
Split at paragraph boundaries with topic grouping:
1. Split at double-newline (paragraph boundaries)
2. Group adjacent paragraphs with keyword overlap >40%
3. Single-sentence paragraphs merge with adjacent
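A sketch of the grouping rule, assuming overlap is measured against the previous paragraph and normalized by the smaller word set (the exact denominator and the single-sentence merge rule are assumptions):

```python
def word_set(paragraph):
    # Words longer than 3 chars, lowercased, trailing punctuation stripped.
    return {w.lower().strip(".,;:") for w in paragraph.split() if len(w) > 3}

def segment_prose(text, threshold=0.4):
    """Group adjacent paragraphs whose keyword overlap exceeds the
    threshold (the >40% rule). Simplified sketch of the real pass."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    groups = []
    for para in paragraphs:
        if groups:
            prev = word_set(groups[-1][-1])
            cur = word_set(para)
            denom = min(len(prev), len(cur)) or 1
            if len(prev & cur) / denom > threshold:
                groups[-1].append(para)
                continue
        groups.append([para])
    return ["\n\n".join(g) for g in groups]
```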
Each file becomes an initial segment, then large files are split:
1. Each file is an initial segment
2. Files >800 tokens are split at section boundaries
3. Files <100 tokens are candidates for merging with related files
Inputs under 500 tokens skip segmentation and become a single segment:
if total_tokens < 500:
    segments = [{
        "id": "seg-001",
        "topic": inferred_topic,
        "summary": first_line_or_60_chars,
        "estimated_tokens": total_tokens,
        "key_terms": extract_keywords(content, 5)
    }]
| Condition | Action |
|---|---|
| Segment <100 tokens | Merge with adjacent same-topic segment |
| Segment 200-500 tokens | Ideal size, no action |
| Segment >800 tokens | Split at next heading/paragraph boundary |
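The table above maps directly to a small classifier. A minimal sketch (handling of the 100-200 and 500-800 boundary ranges is assumed, since the table leaves them unspecified):

```python
def size_action(tokens):
    """Map a segment's estimated token count to a normalization action,
    mirroring the size table. Counts between the named bands are treated
    as acceptable (an assumption)."""
    if tokens < 100:
        return "merge with adjacent same-topic segment"
    if tokens > 800:
        return "split at next heading/paragraph boundary"
    return "no action"
```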
Extract 3-5 significant terms per segment:
1. Remove stop words (the, a, is, are, etc.)
2. Extract nouns and technical terms (>4 characters)
3. Prioritize: proper nouns > technical terms > common nouns
4. Deduplicate (case-insensitive)
5. Return top 5 by frequency within segment
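These steps amount to frequency-based keyword extraction. A sketch assuming a small stop-word list (the real list and the proper-noun prioritization in step 3 are not shown):

```python
import re
from collections import Counter

# Illustrative stop-word list; the skill's actual list is not specified.
STOP_WORDS = {"the", "a", "an", "is", "are", "and", "or", "of", "to",
              "in", "for", "with", "that", "this"}

def extract_keywords(text, limit=5):
    """Drop stop words, keep terms longer than 4 characters, dedupe
    case-insensitively via counting, and return the top `limit` terms
    by frequency within the segment."""
    words = re.findall(r"[A-Za-z_][A-Za-z0-9_]{4,}", text)
    counts = Counter(w.lower() for w in words if w.lower() not in STOP_WORDS)
    return [term for term, _ in counts.most_common(limit)]
```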
After content mapping, each segment is matched against existing memories to determine the appropriate operation (UPDATE, EXTEND, or CREATE).
When MCP server is available, use the execute pattern:
for segment in content_map.segments:
    query = " ".join(segment.key_terms)
    results = execute("search", {
        "query": query,
        "vault": ".memory",
        "limit": 5
    })
When MCP is unavailable, use keyword-based file search:
# For each segment, rank memory files by keyword hits
for keyword in $key_terms; do
    grep -l -i "$keyword" .memory/10-Memories/*.md 2>/dev/null
done | sort | uniq -c | sort -rn | head -5
Score keyword overlap between segment and each matching memory:
overlap_score = |segment_terms intersect memory_terms| / |segment_terms|
Where:
- segment_terms = segment.key_terms
- memory_terms = keywords extracted from memory content (same algorithm)
| Overlap Score | Classification | Action |
|---|---|---|
| >60% | HIGH | UPDATE - Replace memory content |
| 30-60% | MEDIUM | EXTEND - Append new section |
| <30% | LOW | CREATE - New memory |
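The scoring formula and the classification table combine into one function. A sketch, assuming term sets are already lowercased and that an exact 60% score falls in the EXTEND band:

```python
def classify_segment(segment_terms, memory_terms):
    """Compute overlap_score = |segment ∩ memory| / |segment| and map it
    to an operation per the thresholds: >60% UPDATE, 30-60% EXTEND,
    <30% CREATE."""
    seg = set(segment_terms)
    mem = set(memory_terms)
    score = len(seg & mem) / len(seg) if seg else 0.0
    if score > 0.6:
        return score, "UPDATE"
    if score >= 0.3:
        return score, "EXTEND"
    return score, "CREATE"
```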
YOU MUST call AskUserQuestion for EACH segment before writing anything. Do NOT infer what the user wants. Do NOT skip segments. Do NOT write memory files without explicit user confirmation per segment.
Present each segment with related memories via AskUserQuestion:
Segment: {segment.summary}
Topic: {segment.topic}
Key terms: {segment.key_terms.join(", ")}
Related Memories:
1. MEM-telescope-custom-pickers (72% overlap) -> Recommended: UPDATE
2. MEM-neovim-plugin-patterns (45% overlap) -> Recommended: EXTEND
3. MEM-lua-module-structure (18% overlap) -> Recommended: CREATE (no strong match)
What would you like to do with this segment?
[ ] UPDATE MEM-telescope-custom-pickers (replace content)
[ ] EXTEND MEM-neovim-plugin-patterns (append section)
[ ] CREATE new memory
[ ] SKIP - don't save this segment
Users can override any recommendation.
Three distinct operations for memory management:
Replace memory content while preserving structure:
1. Read existing memory file
2. Preserve frontmatter: created (original), tags, topic
3. Update frontmatter: modified = today
4. Move current content to ## History section with date marker
5. Replace main content with new segment content
6. Preserve ## Connections section
7. Write updated memory
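The steps above can be sketched as a frontmatter-aware rewrite. This is a minimal illustration assuming simple `---`-delimited frontmatter with a `modified:` key; preservation of an existing `## Connections` section (step 6) is omitted, and the real template may differ:

```python
import datetime

def apply_update(memory_text, new_content):
    """UPDATE operation sketch: keep the original frontmatter (bumping
    `modified` to today), move the current body into a ## History
    section with a date marker, and install the new segment content as
    the main body."""
    _, frontmatter, body = memory_text.split("---", 2)
    today = datetime.date.today().isoformat()
    lines = []
    for line in frontmatter.strip().splitlines():
        if line.startswith("modified:"):
            line = f"modified: {today}"
        lines.append(line)
    history = f"## History\n\n### {today}\n{body.strip()}\n"
    return ("---\n" + "\n".join(lines) + "\n---\n\n"
            + new_content.strip() + "\n\n" + history)
```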
Template for UPDATE:
---