Memory integration layer for agent coordination and task dispatch.
Primary Directive: Use CKS for persistent knowledge and MemoryCacheManager for session coordination. Enable agents to learn from past work and coordinate without explicit messaging.
Enable agent coordination and learning through shared memory systems.
Always retrieve relevant context before dispatching subagents when:

- **Problem-solving tasks**: search for similar past solutions.

  ```python
  from src.lib.memory.coordinator import MemoryCoordinator

  memory = MemoryCoordinator()
  context = memory.get_context(task_description)
  # Returns: corrections, patterns, and learnings from CKS
  ```

- **Multi-agent tasks**: check for previous coordination patterns.

  ```python
  # Get session state
  state = memory.cache_get("multi_agent:state")
  # Returns: current coordination state (if it exists)
  ```
Use MemoryCacheManager for in-process coordination:

```python
from lib.context.persistence.memory_cache import MemoryCacheManager

cache = MemoryCacheManager()

# Agent 1: Finds available API quota
cache.set("quota:available", {"anthropic": 1000, "openrouter": 50000})

# Agent 2: Routes based on quota (no messaging needed)
quota = cache.get("quota:available")
if task_size > quota["anthropic"]:
    return "openrouter"
```
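The routing above can be exercised end to end with a dict-backed stand-in for MemoryCacheManager. This is a sketch, not the real class; only the `get`/`set` surface used above is mirrored, and `route_provider` is an illustrative helper, not part of the library.

```python
# Dict-backed stand-in for MemoryCacheManager (only the get/set surface
# used above; the real class lives in lib.context.persistence.memory_cache).
class MemoryCacheManager:
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)


def route_provider(cache, task_size):
    """Route a task based on shared quota state; no inter-agent messaging."""
    quota = cache.get("quota:available", {})
    if task_size > quota.get("anthropic", 0):
        return "openrouter"
    return "anthropic"


cache = MemoryCacheManager()
cache.set("quota:available", {"anthropic": 1000, "openrouter": 50000})
```

A task of size 5000 exceeds the anthropic quota and routes to openrouter; a task of size 200 stays on anthropic.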
Session coordination patterns:

```python
cache.set(f"agent:{agent_name}:status", "working")
cache.set(f"task:{task_id}:progress", {"step": 3, "total": 5})
cache.get("current_user_preference")
```

Store learnings in CKS when:
| Situation | Entry Type | Example |
|---|---|---|
| Fixed a bug | correction | "FAISS lazy-load fix: use streaming embeddings" |
| Found a pattern | pattern | "TDD: RED→GREEN→REFACTOR, never mix phases" |
| Made a choice | decision | "Chose FTS5 over external search for embedded" |
| Learned something | learning | "Exponential backoff: 5s→10s→20s→300s" |
| Had insight | insight | "Subagents need memory to avoid redundant work" |
```python
from csf.cks.unified import CKS

cks = CKS()

# Store a correction
cks.ingest_correction(
    title="Don't concatenate /TN flag in schtasks",
    content="The /TN flag and task name must be separate arguments: ['/TN', task_name], not [f'/TN{task_name}']",
    context="task_manager.py fix"
)

# Store a pattern
cks.ingest_pattern(
    title="Session Coordination with MemoryCacheManager",
    content="Agents coordinate via shared cache without messaging. First agent sets state, second agent reads state. No explicit coordination needed.",
    category="multi-agent"
)
```
```
┌─────────────────────────────────────────────┐
│ Need to remember something...               │
└─────────────────────────────────────────────┘
                     │
         Should it survive a restart?
            │                 │
           YES                NO
            │                 │
           CKS           MemoryCache
      (persistent,        (session)
       semantic)              │
            │            Is it state?
    Is it a learning?         │
            │           Simple key-value:
    CKS entry_type:     set(key, value)
    correction/learning
```
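The tree above reduces to a single branch in code. A minimal sketch, with a plain list and dict standing in for CKS and the session cache (neither is the real class):

```python
# Routing sketch for the decision tree: persistent learnings become typed
# CKS entries; session-only state is a simple key-value set in the cache.
# cks_store (list) and session_cache (dict) are stand-ins for illustration.
def remember(key, value, survives_restart, cks_store, session_cache):
    if survives_restart:
        # Learning worth keeping across restarts: store as a typed CKS entry
        cks_store.append({"entry_type": "learning", "title": key, "content": value})
        return "cks"
    # Transient session state: simple key-value set
    session_cache[key] = value
    return "cache"


cks_store, session_cache = [], {}
remember("retry backoff", "5s -> 10s -> 20s -> 300s", True, cks_store, session_cache)
remember("agent:researcher:status", "working", False, cks_store, session_cache)
```

The learning lands in the persistent store; the agent status lives only in the session cache.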
When dispatching subagents, inject relevant memories:

```python
from src.lib.memory.coordinator import MemoryCoordinator

memory = MemoryCoordinator()

# Get context for the subagent
context = memory.get_context(task_description)

# Add to the subagent prompt
subagent_prompt = f"""
{task_description}

## Relevant Past Work:
{format_memories(context)}

Use these insights to avoid repeating mistakes.
"""
```
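One possible shape for `format_memories`, assuming `get_context` returns a dict of lists (e.g. `corrections`, `patterns`, `learnings`) whose items carry `title` and `content` keys. That schema is an assumption for illustration, not the documented return type.

```python
# Hypothetical formatter: flattens a context dict into markdown bullets
# suitable for injection into a subagent prompt.
def format_memories(context):
    lines = []
    for kind, entries in context.items():
        for entry in entries:
            lines.append(f"- [{kind}] {entry['title']}: {entry['content']}")
    return "\n".join(lines)


sample = {"corrections": [
    {"title": "FAISS lazy-load fix", "content": "use streaming embeddings"},
]}
print(format_memories(sample))
```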
Before complex tasks, retrieve patterns:

```python
# Search for relevant patterns
patterns = cks.search_patterns(task_type, limit=5)
if patterns:
    print(f"Found {len(patterns)} relevant patterns for {task_type}")
```
After completing work, store learnings:

```python
# What did we learn?
cks.ingest_learning(
    title=f"{project}: {task_type} optimization",
    content=f"Optimized {component} using {technique}. Result: {outcome}. ROI: {improvement}%",
    context=f"{project} {task_type}"
)
```
Example: the orchestrator reuses a stored correction instead of re-solving the problem:

```python
# ORCHESTRATOR: Detects a similar problem
memory = MemoryCoordinator()
corrections = memory.cks.search_corrections("FAISS slow", limit=3)
if corrections:
    # Direct solution from memory
    apply_fix(corrections[0]["content"])
```
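The memory-first loop can be sketched end to end: reuse a stored fix when one matches, otherwise do the work and persist the new correction for next time. The `corrections_store` list and the `"derived fix"` placeholder are illustrative stand-ins, not the real CKS API.

```python
# Memory-first resolution: hit -> return the stored fix; miss -> solve,
# then persist the new correction so the next orchestrator run gets a hit.
def resolve(problem, corrections, corrections_store):
    if corrections:
        return corrections[0]["content"]        # direct solution from memory
    fix = f"derived fix for {problem}"          # placeholder for real problem-solving
    corrections_store.append({"title": problem, "content": fix})
    return fix


store = []
hit = resolve("FAISS slow",
              [{"title": "FAISS lazy-load fix", "content": "use streaming embeddings"}],
              store)
miss = resolve("schtasks /TN bug", [], store)
```

On the hit path nothing new is stored; on the miss path the derived fix is appended, closing the learning loop.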