Retrieve relevant documentation, runbooks, and knowledge from Muninn long-term memory
Before drafting answers or making decisions, retrieve relevant documentation, runbooks, and knowledge from Muninn (long-term memory).
Don't assume. Retrieve.
Before querying, determine the memory type (semantic, episodic, or procedural), the target domain, and how many results you need (k):
```python
memories = await muninn_recall(
    query="traefik configuration and routing setup",
    k=5,
    memory_type="semantic",  # or "episodic", "procedural"
    domain="ravenhelm"
)
```
Muninn returns memory fragments with:
- content: The actual text from the document
- domain: Which domain it came from
- weight: How relevant/trusted this memory is
- references: How often it's been used

When using retrieved information, cite the source file, e.g. "docs/runbooks/RUNBOOK-024-add-shared-service.md...". If memory retrieval returns nothing or insufficient info, say so explicitly rather than guessing.
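As an illustration, a single returned fragment might be shaped like the dict below. The field names come from the list above; the exact shape and value ranges are assumptions, not an API spec:

```python
# Hypothetical shape of one Muninn memory fragment (assumed, not an API spec)
fragment = {
    "content": "RUNBOOK-024: Add a shared service. Register in port_registry.yaml ...",
    "domain": "ravenhelm",
    "weight": 0.82,     # relevance/trust score
    "references": 14,   # how often it's been used
}

# Cite the domain and weight alongside the content when answering
print(f"[{fragment['domain']} w={fragment['weight']}] {fragment['content'][:40]}")
```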
Use when: Planning new features or architecture
```python
memories = await muninn_recall(
    query="SPIRE mTLS certificate rotation design",
    memory_type="semantic",
    domain="ravenhelm"
)
```
Expect to find: ADRs, architecture docs, design specs
Use when: Executing operational tasks
```python
memories = await muninn_recall(
    query="adding a new shared service with traefik",
    memory_type="semantic",
    domain="ravenhelm"
)
```
Expect to find: Step-by-step runbooks, HOWTOs
Use when: Understanding system structure
```python
memories = await muninn_recall(
    query="event-driven architecture with NATS and Kafka",
    memory_type="semantic",
    domain="ravenhelm"
)
```
Expect to find: Architecture diagrams, component docs, system overviews
Use when: Diagnosing issues
```python
memories = await muninn_recall(
    query="docker network connectivity issues",
    memory_type="episodic",  # Past incidents
    domain="ravenhelm"
)
```
Expect to find: Past incidents, known issues, fixes
```python
# Query for platform-wide knowledge
memories = await muninn_recall(
    query="shared services postgres redis zitadel",
    domain="ravenhelm"
)
```
Common topics: Traefik, SPIRE, platform_net, shared services
```python
# Query for GitLab-specific knowledge
memories = await muninn_recall(
    query="gitlab webhook configuration norns automation",
    domain="gitlab-sre"
)
```
Common topics: GitLab API, webhook payloads, issue taxonomy
```python
# Query for voice/telephony knowledge
memories = await muninn_recall(
    query="LiveKit SIP trunk configuration",
    domain="telephony"
)
```
Common topics: LiveKit, Twilio, RavenVoice, SIP
```python
# Find operational procedures
memories = await muninn_recall(
    query="backup postgres database to s3",
    memory_type="procedural"  # Runbooks are procedural memories
)
```
Use for: Step-by-step procedures, operational tasks
```python
# Find past architectural decisions
memories = await muninn_recall(
    query="why did we choose traefik over nginx",
    memory_type="semantic"  # ADRs are semantic knowledge
)
```
Use for: Understanding design rationale, technology choices
```python
# Find conceptual documentation
memories = await muninn_recall(
    query="raven cognitive architecture overview",
    memory_type="semantic"
)
```
Use for: Concepts, overviews, explanations
```python
# Find project plans or specifications
memories = await muninn_recall(
    query="phase 6 ai infrastructure roadmap",
    memory_type="semantic"
)
```
Use for: Roadmaps, feature specs, milestones
For complex tasks, make multiple focused queries:
```python
# Step 1: Get the design rationale
design = await muninn_recall(
    query="SPIRE SPIFFE identity design decisions",
    memory_type="semantic"
)

# Step 2: Get the operational procedure
procedure = await muninn_recall(
    query="register new SPIRE workload identity",
    memory_type="procedural"
)

# Step 3: Get past incidents for context
incidents = await muninn_recall(
    query="SPIRE certificate rotation failures",
    memory_type="episodic"
)

# Now you have: WHY (design), HOW (procedure), WHAT WENT WRONG (incidents)
```
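The three result sets can then be merged into one briefing before drafting an answer. A minimal sketch over plain fragment dicts (the fragment shape and the 0.3 weight cutoff are assumptions; real muninn_recall results are stubbed out here):

```python
def build_briefing(design, procedure, incidents, min_weight=0.3):
    """Merge multi-query results into one ordered briefing, dropping low-weight fragments."""
    sections = [("WHY", design), ("HOW", procedure), ("WHAT WENT WRONG", incidents)]
    lines = []
    for label, fragments in sections:
        kept = [f for f in fragments if f.get("weight", 0.0) >= min_weight]
        # Highest-weight fragments first within each section
        for f in sorted(kept, key=lambda f: f["weight"], reverse=True):
            lines.append(f"[{label}] ({f['weight']:.1f}) {f['content']}")
    return "\n".join(lines)

# Stub data standing in for real muninn_recall() results
design = [{"content": "ADR: SPIRE chosen for workload identity", "weight": 0.9}]
procedure = [{"content": "Register SVID via spiffe-helper", "weight": 0.7}]
incidents = [{"content": "Rotation failed after clock skew", "weight": 0.2}]

print(build_briefing(design, procedure, incidents))
```

The low-weight incident (0.2) is filtered out, so stale context never reaches the draft.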
```python
import re

def extract_runbook_id(text):
    m = re.search(r"RUNBOOK-\d+", text)
    return m.group(0) if m else "unknown"

for memory in memories:
    content = memory["content"]
    # Look for file references, then extract and cite the runbook ID
    if "docs/runbooks/RUNBOOK-" in content:
        print(f"Reference: {extract_runbook_id(content)}")
```
```python
for memory in memories:
    if memory["weight"] < 0.3:
        print("⚠️ Low-weight memory, may be outdated")
    elif memory["weight"] > 0.7:
        print("✓ High-weight memory, frequently referenced")
```python
# Muninn orders by relevance + weight, so the first result is usually best
best_match = memories[0]

# But check that it's actually relevant
if best_match["domain"] != expected_domain:
    print("⚠️ Cross-domain result, verify applicability")
```
✓ "Per RUNBOOK-024 (Add Shared Service), the steps are:
1. Register in port_registry.yaml
2. Add Traefik labels
3. Update dynamic.yml"
✓ "According to docs/architecture/SPIRE_DESIGN.md, all workloads
must register SVIDs using spiffe-helper configs."
✓ "ADR-003 documents the decision to use Traefik:
- Single ingress point
- Automatic service discovery
- Built-in Let's Encrypt support"
❌ "I think we use Traefik..."
(Use memory retrieval instead of guessing)
❌ "The runbook says to..."
(Which runbook? Cite the ID)
❌ "Documentation somewhere mentions..."
(Be specific with file paths)
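The good/bad citation patterns above can be enforced mechanically; a sketch with a hypothetical cite() helper (the fragment shape and docs/ path convention are assumptions):

```python
def cite(fragment):
    """Format a specific citation, refusing vague ones with no source path."""
    content = fragment.get("content", "")
    # Prefer an explicit file path embedded in the fragment
    for token in content.split():
        if token.startswith("docs/") and token.endswith(".md"):
            return f"Per {token}: {content[:60]}"
    # No concrete source: flag it instead of hand-waving
    return "NO SOURCE - retrieve again or say the documentation was not found"

frag = {"content": "docs/architecture/SPIRE_DESIGN.md requires SVIDs via spiffe-helper"}
print(cite(frag))
```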
"I searched Muninn for 'X' but found no existing documentation.
Options:
1. Create new runbook/doc for this
2. Check if this is documented elsewhere (different domain?)
3. Verify the feature actually exists"
"Muninn returned 2 conflicting approaches:
- Memory A (weight 0.8): Use approach X
- Memory B (weight 0.4): Use approach Y
Memory A has higher weight and more references, suggesting it's the current standard."
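That comparison can be scripted; a heuristic sketch (the fragment fields and the 0.1 "close call" margin are assumptions, not a Muninn feature):

```python
def resolve_conflict(memories):
    """Pick the fragment to trust when retrieval returns conflicting guidance.

    Prefer higher weight; break ties on reference count. Flag close calls.
    """
    ranked = sorted(
        memories,
        key=lambda m: (m.get("weight", 0.0), m.get("references", 0)),
        reverse=True,
    )
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    close_call = runner_up is not None and best["weight"] - runner_up["weight"] < 0.1
    return best, close_call

a = {"content": "Use approach X", "weight": 0.8, "references": 12}
b = {"content": "Use approach Y", "weight": 0.4, "references": 3}
best, ambiguous = resolve_conflict([b, a])
print(best["content"], ambiguous)  # → Use approach X False
```

When close_call is True, surface both approaches to the user instead of silently picking one.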
"Retrieved memory references 'nginx' but current architecture uses 'traefik'.
This memory may be outdated (weight 0.3, last referenced 6 months ago).
Recommend searching for more recent traefik documentation."
For procedural knowledge (skills), use the specialized skills tools:
```python
skills = await skills_retrieve(
    query="deploy docker compose with traefik labels",
    role="sre",
    k=3
)
```
Skills are a type of procedural memory optimized for agent instructions.
Problem: Too many irrelevant results. Narrow the query with specific component names, lower k, or scope to a single domain.
Problem: No results but you know docs exist. Broaden the query wording, try a different memory_type, or search another domain.
Problem: Retrieved memory seems outdated. Check weight and references, flag the staleness, and re-query for more recent documentation.
Memory Retrieval Pattern:
Always retrieve before inventing. Muninn holds the authoritative knowledge.