Configure and use Honcho memory with Hermes -- cross-session user modeling, multi-profile peer isolation, observation config, dialectic reasoning, session summaries, and context budget enforcement. Use when setting up Honcho, troubleshooting memory, managing profiles with Honcho peers, or tuning observation, recall, and dialectic settings.
Honcho provides AI-native cross-session user modeling. It learns who the user is across conversations and gives every Hermes profile its own peer identity while sharing a unified view of the user.
hermes honcho setup
# select "cloud", paste API key from https://app.honcho.dev
hermes honcho setup
# select "local", enter base URL (e.g. http://localhost:8000)
hermes honcho status # shows resolved config, connection test, peer info
When Honcho injects context into the system prompt (in hybrid or context recall modes), it assembles the base context block in this order:
1. Session summary
2. User representation
3. Peer card
The session summary is generated automatically by Honcho at the start of each turn (when a prior session exists). It gives the model a warm start without replaying full history.
Honcho automatically selects between two prompt strategies:
| Condition | Strategy | What happens |
|---|---|---|
| No prior session or empty representation | Cold start | Lightweight intro prompt; skips summary injection; encourages the model to learn about the user |
| Existing representation and/or session history | Warm start | Full base context injection (summary → representation → card); richer system prompt |
You do not need to configure this -- it is automatic based on session state.
Honcho models conversations as interactions between peers. Hermes creates two peers per session:
- User peer (peerName): represents the human. Honcho builds a user representation from observed messages.
- AI peer (aiPeer): represents this Hermes instance. Each profile gets its own AI peer so agents develop independent views.

Each peer has two observation toggles that control what Honcho learns from:
| Toggle | What it does |
|---|---|
| observeMe | Peer's own messages are observed (builds self-representation) |
| observeOthers | Other peers' messages are observed (builds cross-peer understanding) |
Default: all four toggles on (full bidirectional observation).
Configure per-peer in honcho.json:
{
"observation": {
"user": { "observeMe": true, "observeOthers": true },
"ai": { "observeMe": true, "observeOthers": true }
}
}
Or use the shorthand presets:
| Preset | User | AI | Use case |
|---|---|---|---|
"directional" (default) | me:on, others:on | me:on, others:on | Multi-agent, full memory |
"unified" | me:on, others:off | me:off, others:on | Single agent, user-only modeling |
Settings changed in the Honcho dashboard are synced back on session init -- server-side config wins over local defaults.
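The presets are shorthand for the four per-peer toggles. The table above can be expressed as a lookup; the helper below is a hypothetical illustration, not part of the Hermes API.

```python
# Expansion of the shorthand observation presets into the four
# per-peer toggles, mirroring the preset table (values from the docs).
PRESETS = {
    "directional": {  # default: multi-agent, full bidirectional memory
        "user": {"observeMe": True, "observeOthers": True},
        "ai":   {"observeMe": True, "observeOthers": True},
    },
    "unified": {      # single agent, user-only modeling
        "user": {"observeMe": True, "observeOthers": False},
        "ai":   {"observeMe": False, "observeOthers": True},
    },
}

def expand_preset(name: str) -> dict:
    """Return the observation block a preset stands for."""
    return PRESETS[name]
```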
Honcho sessions scope where messages and observations land. Strategy options:
| Strategy | Behavior |
|---|---|
| per-directory (default) | One session per working directory |
| per-repo | One session per git repository root |
| per-session | New Honcho session each Hermes run |
| global | Single session across all directories |
Manual override: hermes honcho map my-project-name
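How a session key could be derived from these strategies is sketched below. This is an assumption-laden illustration (function name, override handling, and fallbacks are not from the Hermes source); it shows only that manual mappings win over the configured strategy.

```python
import subprocess
import uuid

def resolve_session_key(strategy: str, cwd: str, overrides: dict) -> str:
    """Hypothetical sketch of per-strategy session scoping.

    Manual mappings (created via `hermes honcho map <name>`) take
    precedence over whatever strategy is configured.
    """
    if cwd in overrides:                      # manual map wins
        return overrides[cwd]
    if strategy == "per-directory":
        return cwd                            # one session per working dir
    if strategy == "per-repo":
        root = subprocess.run(
            ["git", "-C", cwd, "rev-parse", "--show-toplevel"],
            capture_output=True, text=True,
        ).stdout.strip()
        return root or cwd                    # fall back outside a repo
    if strategy == "per-session":
        return str(uuid.uuid4())              # fresh session every run
    return "global"                           # single shared session
```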
How the agent accesses Honcho memory:
| Mode | Auto-inject context? | Tools available? | Use case |
|---|---|---|---|
| hybrid (default) | Yes | Yes | Agent decides when to use tools vs auto context |
| context | Yes | No (hidden) | Minimal token cost, no tool calls |
| tools | No | Yes | Agent controls all memory access explicitly |
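The three recall modes reduce to two independent switches, auto-injection and tool exposure. The mapping below restates the table as data; the names are illustrative.

```python
# Each recall mode as (auto_inject_context, tools_exposed),
# restating the mode table from the docs.
RECALL_MODES = {
    "hybrid":  (True,  True),   # default: agent chooses tools vs auto context
    "context": (True,  False),  # tools hidden: minimal token cost
    "tools":   (False, True),   # agent controls all memory access explicitly
}

def recall_flags(mode: str) -> tuple[bool, bool]:
    """Return (auto-inject?, tools available?) for a recall mode."""
    return RECALL_MODES[mode]
```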
Honcho's dialectic behavior is controlled by three independent dimensions. Each can be tuned without affecting the others:
Controls how often dialectic and context calls happen.
| Key | Default | Description |
|---|---|---|
| contextCadence | 1 | Min turns between context API calls |
| dialecticCadence | 3 | Min turns between dialectic API calls |
| injectionFrequency | every-turn | every-turn or first-turn for base context injection |
Higher cadence values reduce API calls and cost. dialecticCadence: 3 (default) means the dialectic engine fires at most every 3rd turn.
Controls how many rounds of dialectic reasoning Honcho performs per query.
| Key | Default | Range | Description |
|---|---|---|---|
| dialecticDepth | 1 | 1-3 | Number of dialectic reasoning rounds per query |
| dialecticDepthLevels | -- | array | Optional per-depth-round level overrides (see below) |
dialecticDepth: 2 means Honcho runs two rounds of dialectic synthesis. The first round produces an initial answer; the second refines it.
dialecticDepthLevels lets you set the reasoning level for each round independently:
{
"dialecticDepth": 3,
"dialecticDepthLevels": ["low", "medium", "high"]
}
If dialecticDepthLevels is omitted, rounds use proportional levels derived from dialecticReasoningLevel (the base):
| Depth | Pass levels |
|---|---|
| 1 | [base] |
| 2 | [minimal, base] |
| 3 | [minimal, base, low] |
This keeps earlier passes cheap while using full depth on the final synthesis.
Controls the intensity of each dialectic reasoning round.
| Key | Default | Description |
|---|---|---|
| dialecticReasoningLevel | low | minimal, low, medium, high, max |
| dialecticDynamic | true | When true, the model can pass reasoning_level to honcho_reasoning to override the default per-call. false = always use dialecticReasoningLevel, model overrides ignored |
Higher levels produce richer synthesis but cost more tokens on Honcho's backend.
Each Hermes profile gets its own Honcho AI peer while sharing the same workspace (user context). This means each profile develops an independent view of itself while all profiles share a unified view of the user.
hermes profile create coder --clone
# creates host block hermes.coder, AI peer "coder", inherits config from default
What --clone does for Honcho:
- Adds a hermes.coder host block in honcho.json
- Sets aiPeer: "coder" (the profile name)
- Inherits workspace, peerName, writeFrequency, recallMode, etc. from default

hermes honcho sync # creates host blocks for all profiles that don't have one yet
Override any setting in the host block:
{
"hosts": {
"hermes.coder": {
"aiPeer": "coder",
"recallMode": "tools",
"dialecticDepth": 2,
"observation": {
"user": { "observeMe": true, "observeOthers": false },
"ai": { "observeMe": true, "observeOthers": true }
}
}
}
}
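The resolution order implied here is "host block overrides defaults, key by key, merging nested sections". A hedged sketch of that merge (the helper is an assumption, not Hermes code; only the honcho.json key names are from the docs):

```python
def resolve_config(defaults: dict, hosts: dict, host_key: str) -> dict:
    """Deep-merge a profile's host block over the top-level defaults,
    so nested sections like "observation" are overridden per key."""
    def deep_merge(base: dict, override: dict) -> dict:
        out = dict(base)
        for k, v in override.items():
            if isinstance(v, dict) and isinstance(out.get(k), dict):
                out[k] = deep_merge(out[k], v)
            else:
                out[k] = v
        return out
    return deep_merge(defaults, hosts.get(host_key, {}))
```

Under this model, hermes.coder setting recallMode: "tools" changes only that key; everything it does not mention still comes from default.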
The agent has 5 bidirectional Honcho tools (hidden in context recall mode):
| Tool | LLM call? | Cost | Use when |
|---|---|---|---|
| honcho_profile | No | minimal | Quick factual snapshot at conversation start or for fast name/role/pref lookups |
| honcho_search | No | low | Fetch specific past facts to reason over yourself — raw excerpts, no synthesis |
| honcho_context | No | low | Full session context snapshot: summary, representation, card, recent messages |
| honcho_reasoning | Yes | medium–high | Natural language question synthesized by Honcho's dialectic engine |
| honcho_conclude | No | minimal | Write or delete a persistent fact; pass peer: "ai" for AI self-knowledge |
honcho_profile -- Read or update a peer card: curated key facts (name, role, preferences, communication style). Pass card: [...] to update; omit to read. No LLM call.
honcho_search -- Semantic search over stored context for a specific peer. Returns raw excerpts ranked by relevance, no synthesis. Default 800 tokens, max 2000. Good when you need specific past facts to reason over yourself rather than a synthesized answer.
honcho_context -- Full session context snapshot from Honcho: session summary, peer representation, peer card, and recent messages. No LLM call. Use when you want to see everything Honcho knows about the current session and peer in one shot.
honcho_reasoning -- Natural language question answered by Honcho's dialectic reasoning engine (LLM call on Honcho's backend). Higher cost, higher quality. Pass reasoning_level to control depth: minimal (fast/cheap) → low → medium → high → max (thorough). Omit to use the configured default (low). Use for synthesized understanding of the user's patterns, goals, or current state.
honcho_conclude -- Write or delete a persistent conclusion about a peer. Pass conclusion: "..." to create. Pass delete_id: "..." to remove a conclusion (Honcho self-heals incorrect conclusions over time, so deletion is only needed for PII removal). You MUST pass exactly one of the two.
All 5 tools accept an optional peer parameter:
- peer: "user" (default) -- operates on the user peer
- peer: "ai" -- operates on this profile's AI peer
- peer: "<explicit-id>" -- any peer ID in the workspace

Examples:
honcho_profile # read user's card
honcho_profile peer="ai" # read AI peer's card
honcho_reasoning query="What does this user care about most?"
honcho_reasoning query="What are my interaction patterns?" peer="ai" reasoning_level="medium"
honcho_conclude conclusion="Prefers terse answers"
honcho_conclude conclusion="I tend to over-explain code" peer="ai"
honcho_conclude delete_id="abc123" # PII removal
Guidelines for Hermes when Honcho memory is active.
1. honcho_profile → fast warmup, no LLM cost
2. If context looks thin → honcho_context (full snapshot, still no LLM)
3. If deep synthesis needed → honcho_reasoning (LLM call, use sparingly)
Do NOT call honcho_reasoning on every turn. Auto-injection already handles ongoing context refresh. Use the reasoning tool only when you genuinely need synthesized insight the base context doesn't provide.
honcho_conclude conclusion="<specific, actionable fact>"
Good conclusions: "Prefers code examples over prose explanations", "Working on a Rust async project through April 2026"
Bad conclusions: "User said something about Rust" (too vague), "User seems technical" (already in representation)
honcho_search query="<topic>" → fast, no LLM, good for specific facts
honcho_context → full snapshot with summary + messages
honcho_reasoning query="<question>" → synthesized answer, use when search isn't enough
Use AI peer targeting (peer: "ai") to build and query the agent's own self-knowledge:
- honcho_conclude conclusion="I tend to be verbose when explaining architecture" peer="ai" -- self-correction
- honcho_reasoning query="How do I typically handle ambiguous requests?" peer="ai" -- self-audit
- honcho_profile peer="ai" -- review own identity card

In hybrid and context modes, base context (user representation + card + session summary) is auto-injected before every turn. Do not re-fetch what was already injected. Call tools only when you need something the injected base context does not provide.
honcho_reasoning on the tool side shares the same cost as auto-injection dialectic. After an explicit tool call, the auto-injection cadence resets — avoiding double-charging the same turn.
Config file: $HERMES_HOME/honcho.json (profile-local) or ~/.honcho/config.json (global).
| Key | Default | Description |
|---|---|---|
| apiKey | -- | API key (from https://app.honcho.dev) |
| baseUrl | -- | Base URL for self-hosted Honcho |
| peerName | -- | User peer identity |
| aiPeer | host key | AI peer identity |
| workspace | host key | Shared workspace ID |
| recallMode | hybrid | hybrid, context, or tools |
| observation | all on | Per-peer observeMe/observeOthers booleans |
| writeFrequency | async | async, turn, session, or integer N |
| sessionStrategy | per-directory | per-directory, per-repo, per-session, global |
| messageMaxChars | 25000 | Max chars per message (chunked if exceeded) |
| Key | Default | Description |
|---|---|---|
| dialecticReasoningLevel | low | minimal, low, medium, high, max |
| dialecticDynamic | true | When true, the model can override the reasoning level per call; false = always use dialecticReasoningLevel |
| dialecticDepth | 1 | Number of dialectic rounds per query (1-3) |
| dialecticDepthLevels | -- | Optional array of per-round levels, e.g. ["low", "high"] |
| dialecticMaxInputChars | 10000 | Max chars for dialectic query input |
| Key | Default | Description |
|---|---|---|
| contextTokens | uncapped | Max tokens for the combined base context injection (summary + representation + card). Opt-in cap: omit to leave uncapped, set to an integer to bound injection size. |
| injectionFrequency | every-turn | every-turn or first-turn |
| contextCadence | 1 | Min turns between context API calls |
| dialecticCadence | 3 | Min turns between dialectic LLM calls |
The contextTokens budget is enforced at injection time. If the session summary + representation + card exceed the budget, Honcho trims the summary first, then the representation, preserving the card. This prevents context blowup in long sessions.
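The trim order above (summary first, then representation, card preserved) can be sketched as follows. This is an illustrative model only: real token accounting happens on Honcho's side, and the chars/4 approximation plus truncation-by-slicing are assumptions.

```python
def enforce_budget(summary: str, representation: str, card: str,
                   budget: int, tokens=lambda s: len(s) // 4) -> dict:
    """Trim the base context block to a token budget: summary first,
    then representation; the peer card is always preserved."""
    parts = {"summary": summary, "representation": representation, "card": card}
    for name in ("summary", "representation"):
        over = sum(tokens(p) for p in parts.values()) - budget
        if over <= 0:
            break
        keep_chars = max(0, tokens(parts[name]) - over) * 4  # tokens -> chars
        parts[name] = parts[name][:keep_chars]
    return parts
```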
Honcho sanitizes the memory-context block before injection to prevent prompt injection and malformed content:
- Enforces the messageMaxChars limit

This sanitization addresses edge cases where raw user conclusions containing markup or special characters could corrupt the injected context block.
Run hermes honcho setup. Ensure memory.provider: honcho is in ~/.hermes/config.yaml.
Check hermes honcho status -- verify saveMessages: true and writeFrequency isn't session (which only writes on exit).
Use --clone when creating: hermes profile create <name> --clone. For existing profiles: hermes honcho sync.
Observation config is synced from the server on each session init. Start a new session after changing settings in the Honcho UI.
Messages over messageMaxChars (default 25k) are automatically chunked with [continued] markers. If you're hitting this often, check if tool results or skill content is inflating message size.
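The chunking behavior can be sketched like this; the exact marker placement is an assumption based on the description above.

```python
def chunk_message(text: str, max_chars: int = 25_000) -> list[str]:
    """Split an oversized message into max_chars pieces, tagging each
    continuation chunk with a [continued] marker."""
    if len(text) <= max_chars:
        return [text]
    chunks, i = [], 0
    while i < len(text):
        piece = text[i:i + max_chars]
        chunks.append(piece if i == 0 else "[continued] " + piece)
        i += max_chars
    return chunks
```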
If you see warnings about context budget exceeded, lower contextTokens or reduce dialecticDepth. The session summary is trimmed first when the budget is tight.
Session summary requires at least one prior turn in the current Honcho session. On cold start (new session, no history), the summary is omitted and Honcho uses the cold-start prompt strategy instead.
| Command | Description |
|---|---|
| hermes honcho setup | Interactive setup wizard (cloud/local, identity, observation, recall, sessions) |
| hermes honcho status | Show resolved config, connection test, peer info for active profile |
| hermes honcho enable | Enable Honcho for the active profile (creates host block if needed) |
| hermes honcho disable | Disable Honcho for the active profile |
| hermes honcho peer | Show or update peer names (--user <name>, --ai <name>, --reasoning <level>) |
| hermes honcho peers | Show peer identities across all profiles |
| hermes honcho mode | Show or set recall mode (hybrid, context, tools) |
| hermes honcho tokens | Show or set token budgets (--context <N>, --dialectic <N>) |
| hermes honcho sessions | List known directory-to-session-name mappings |
| hermes honcho map <name> | Map current working directory to a Honcho session name |
| hermes honcho identity | Seed AI peer identity or show both peer representations |
| hermes honcho sync | Create host blocks for all Hermes profiles that don't have one yet |
| hermes honcho migrate | Step-by-step migration guide from OpenClaw native memory to Hermes + Honcho |
| hermes memory setup | Generic memory provider picker (selecting "honcho" runs the same wizard) |
| hermes memory status | Show active memory provider and config |
| hermes memory off | Disable external memory provider |