Guided installation and health check for Anamnesis. First run walks through full setup; subsequent runs verify system health.
Detects the current installation state and adapts: guides new users through setup step-by-step, or runs a health check on existing installations.
Run ALL of these checks in parallel to determine what's already set up:
- node --version — need 18+
- pg_isready -h localhost -p 5432 (or configured host/port)
- psql -d anamnesis -c "SELECT extversion FROM pg_extension WHERE extname = 'vector';" 2>/dev/null
- curl -s http://localhost:11434/api/tags — check for bge-m3 in response
- psql -d anamnesis -c "\dt anamnesis_*" 2>/dev/null
- anamnesis.config.json exists in the project root
- dist/mcp/index.js exists
- Read ~/.claude.json and check for anamnesis in mcpServers
- Read ~/.claude/settings.json and check for Anamnesis hook entries
- node dist/index.js stats 2>/dev/null (only if build exists)
- psql -d anamnesis -c "SELECT indexname FROM pg_indexes WHERE indexname LIKE 'idx_%_embedding';" 2>/dev/null
- python -c "import psycopg2" 2>/dev/null

Classify the state:
Print a status summary before proceeding:
Anamnesis Installation Status
─────────────────────────────
Node.js 22.x ✓
PostgreSQL 16 ✓
pgvector 0.8.0 ✓
Ollama (bge-m3) ✓
Database tables ✓
Config file ✓
Build ✓
MCP registered ✗ ← next step
Hooks ✗
Data (sessions) 0
HNSW indexes ✗
Then either proceed to the next incomplete step or run the health check.
Complete each step, verify it worked, then move to the next. Do NOT skip ahead — each step depends on the previous.
Check Node.js, PostgreSQL, and Ollama. For any that are missing, provide platform-specific install instructions:
Detect platform:
- Read the platform from the environment info (win32, darwin, linux)

Node.js (if missing or <18):
- macOS: brew install node
- Linux: sudo apt install nodejs npm
- Windows: winget install OpenJS.NodeJS.LTS

PostgreSQL (if not running):
- macOS: brew install postgresql@16 && brew services start postgresql@16
- Linux: sudo apt install postgresql postgresql-contrib

pgvector (if missing):
- macOS: brew install pgvector
- Linux: sudo apt install postgresql-16-pgvector (match PG version)

Ollama (if not running):
- curl -fsSL https://ollama.com/install.sh | sh

After installing prerequisites, pull the embedding model:
ollama pull bge-m3
Ask the user before pulling optional models:
bge-m3 (required, ~1.5 GB) is installed. Do you also want gemma3:12b (~7 GB) for automatic topic extraction? This is optional — Anamnesis works without it.
Verify all prerequisites pass before proceeding.
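As an illustrative sketch, the platform dispatch above can be expressed as a lookup table. The table simply restates the commands from the instructions; it is not an Anamnesis API:

```javascript
// Sketch: map process.platform to the install commands listed above.
const INSTALL = {
  darwin: {
    node: "brew install node",
    postgresql: "brew install postgresql@16 && brew services start postgresql@16",
    pgvector: "brew install pgvector",
  },
  linux: {
    node: "sudo apt install nodejs npm",
    postgresql: "sudo apt install postgresql postgresql-contrib",
    pgvector: "sudo apt install postgresql-16-pgvector",
  },
  win32: {
    node: "winget install OpenJS.NodeJS.LTS",
  },
};

// Returns the command for this platform, or null if manual install is needed.
function installCommand(tool, platform = process.platform) {
  return (INSTALL[platform] || {})[tool] || null;
}

console.log(installCommand("node", "darwin")); // brew install node
```

A null result (e.g. pgvector on Windows) means the user should be pointed at the tool's own install docs rather than given a guessed command.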
If the anamnesis database doesn't exist:
createdb anamnesis
If tables don't exist:
psql -d anamnesis -f src/db/schema.sql
Verify: psql -d anamnesis -c "\dt anamnesis_*" should show 4 tables.
If createdb fails with "role does not exist":
- The role is typically postgres on Windows, or your OS username on macOS/Linux
- Create it: psql -U postgres -c "CREATE ROLE <username> WITH LOGIN CREATEDB;"

If psql can't connect:
- Check that the server is running: pg_isready
- Check that psql is in PATH

If anamnesis.config.json doesn't exist:
- cp anamnesis.config.example.json anamnesis.config.json
- database.user — find your actual user with psql -c "SELECT current_user;" and set database.user in the config to match
- transcripts_root — check that the path exists: ls ~/.claude/projects/
- ollama.url — confirm it points at your Ollama instance

If the config already exists, verify it connects:
node dist/index.js stats 2>/dev/null || echo "Config issue — check database settings"
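For reference, a minimal config touching the fields named above might look like this. The values are illustrative placeholders; the authoritative schema is anamnesis.config.example.json:

```json
{
  "database": {
    "user": "youruser"
  },
  "transcripts_root": "~/.claude/projects",
  "ollama": {
    "url": "http://localhost:11434"
  }
}
```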
Ask about preserve_words for topic extraction. The topic extractor compresses session text by stripping common English words (pronouns, prepositions, verbs, etc.) before sending it to the LLM. If the user has project names or key terms that collide with common English words, they should be added to topic_model.preserve_words so they survive compression.
Ask the user:
Topic extraction strips common English words to compress text before analysis. If any of your project names or key terms are common English words (e.g., "Dash", "Key", "Home", "State", "General"), they should be preserved. What are your project names?
Then scan the project names for collisions with common words and add any matches to preserve_words:
"topic_model": {
"preserve_words": ["dash", "home", "state"]
}
If no project names collide, skip this step. Short names (1-2 characters like "HG" or "AI") should always be added since the compressor strips short words by default.
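The collision scan can be sketched as follows. The common-word set here is a tiny illustrative subset, not the extractor's real stop list:

```javascript
// Sketch: flag project names that would be stripped by the compressor,
// either because they collide with common English words or are too short.
const COMMON_WORDS = new Set(["dash", "key", "home", "state", "general"]);

function preserveWords(projectNames) {
  return projectNames
    .map((name) => name.toLowerCase())
    .filter((name) => COMMON_WORDS.has(name) || name.length <= 2);
}

console.log(preserveWords(["Dash", "Anamnesis", "HG"])); // [ 'dash', 'hg' ]
```

Anything this returns goes into topic_model.preserve_words; names like "Anamnesis" that neither collide nor fall under the length cutoff need no entry.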
Keep it minimal for first-time setup. Only configure required fields. Advanced options (reporting, tasks) can be added later.
npm install
npm run build
Verify: ls dist/mcp/index.js should exist.
If the build fails, check for TypeScript errors in the output. Common causes:
- Missing dependencies — re-run npm install

Read ~/.claude.json to check current state. If anamnesis is not in mcpServers:
Determine the absolute path to dist/mcp/index.js:
node -e "const p=require('path'); console.log(p.resolve('dist/mcp/index.js'))"
On Windows, use an atomic read-modify-write since Claude Code constantly writes to this file:
node -e "const fs=require('fs'); const p=process.env.HOME||process.env.USERPROFILE; const f=p+'/.claude.json'; const d=JSON.parse(fs.readFileSync(f,'utf8')); d.mcpServers=d.mcpServers||{}; d.mcpServers.anamnesis={command:'node',args:['THE_ABSOLUTE_PATH'],env:{}}; fs.writeFileSync(f,JSON.stringify(d,null,2));"
On macOS/Linux, the Edit tool usually works for ~/.claude.json.
Tell the user:
MCP server registered. Restart Claude Code for the new MCP server to load. After restarting, run
/anamnesis_install again to continue setup.
If the session is already in Claude Code and the MCP tools are already showing (because this is a re-run), skip the restart notice.
node dist/index.js backfill
Before running, tell the user what to expect:
This scans your Claude Code transcripts, generates embeddings, and stores them in the database. It's resumable — safe to interrupt and re-run. Time depends on how many sessions you have.
After backfill completes, show stats:
node dist/index.js stats
Report the results:
Ingested {N} sessions with {M} turns. Your conversation history is now searchable.
If the HNSW indexes don't exist and there's data in the database:
psql -d anamnesis -c "CREATE INDEX idx_turns_embedding ON anamnesis_turns USING hnsw (embedding vector_cosine_ops);"
psql -d anamnesis -c "CREATE INDEX idx_sessions_embedding ON anamnesis_sessions USING hnsw (session_embedding vector_cosine_ops);"
These speed up vector search queries. They're created after backfill because building an HNSW index over the full dataset at once is faster than growing it insert by insert.
Ask the user which hooks they want:
Anamnesis has four optional hooks that automate ingestion and add proactive recall:
- SessionEnd auto-ingest — Ingests transcripts when you end a session (recommended)
- SessionStart recall — Injects recent project context when you start a session (recommended, needs Python + psycopg2)
- Plan-mode recall — Searches history when you enter plan mode (recommended, needs Python + psycopg2)
- PreCompact state capture — Captures state + ingests transcript before context compaction (recommended, needs Python)
Which hooks would you like to install? (1, 2, 3, 4, all, or none)
For each selected hook:
Check if psycopg2 is available (for Python hooks): python -c "import psycopg2"
If it's missing: pip install psycopg2-binary

Read existing ~/.claude/settings.json to preserve other hooks.
SessionEnd: Add to hooks.SessionEnd array:
{
"type": "command",
"command": "node /absolute/path/to/Anamnesis/dist/index.js ingest-session $SESSION_ID",
"timeout": 30000
}
SessionStart: Copy hooks/session-start-recall.py to ~/.claude/hooks/anamnesis-recall.py, then add to hooks.SessionStart array:
{
"type": "command",
"command": "python ~/.claude/hooks/anamnesis-recall.py",
"timeout": 10000
}
Plan-mode recall: Copy hooks/plan-recall.py to ~/.claude/hooks/plan-recall.py, then add to hooks.PreToolUse array:
{
"type": "command",
"command": "python ~/.claude/hooks/plan-recall.py",
"timeout": 10000,
"matcher": { "tool_name": "EnterPlanMode" }
}
PreCompact state capture: Copy hooks/pre-compact-ingest.py to ~/.claude/hooks/pre-compact-ingest.py. Then determine the absolute path to the Anamnesis installation:
node -e "console.log(require('path').resolve('.'))"
Edit ANAMNESIS_DIR at the top of the copied file to use this path (forward slashes, even on Windows). Then add to hooks.PreCompact array:
{
"type": "command",
"command": "python ~/.claude/hooks/pre-compact-ingest.py",
"timeout": 10000
}
IMPORTANT: Merge into existing arrays — do NOT replace existing hook entries from other tools.
Run a search to confirm everything works end-to-end:
node dist/index.js search "test"
If in a Claude Code session with MCP tools loaded, also test:
Use anamnesis_search to search for "test" — this verifies the MCP server is working.
Print a final summary:
Anamnesis Installation Complete
────────────────────────────────
Database: {N} sessions, {M} turns
Search: hybrid (vector + full-text)
MCP server: registered
Hooks: {list installed hooks}
HNSW: active
Next steps:
- Topic extraction: node dist/index.js backfill-topics
(Optional — uses gemma3:12b to tag/summarize sessions)
- Daily reporting: Add "reporting" section to config
(See anamnesis.config.example.json)
- Full docs: INSTALL.md
When all components are already installed, run a health check instead of installation:
- psql -d anamnesis -c "SELECT 1;"
- curl -s http://localhost:11434/api/tags and verify bge-m3 is present
- node dist/index.js stats — report session/turn counts
- Read ~/.claude.json, verify the path exists and the build is current
- Read ~/.claude/settings.json, list which hooks are installed
- node -e "require('./dist/util/config.js').getConfig()" to catch config errors
- node dist/index.js ingest-all --dry-run 2>/dev/null or check for uningested transcripts
- node dist/index.js search "test" — verify results come back

Anamnesis Health Check
──────────────────────
PostgreSQL ✓ connected (localhost:5432)
Ollama ✓ bge-m3 loaded
Database ✓ {N} sessions, {M} turns, {L} links
MCP server ✓ registered, build current
Hooks ✓ SessionEnd, SessionStart, PreToolUse, PreCompact
HNSW indexes ✓ 2 active
Config ✓ valid
Search ✓ returning results
Last ingestion: {date from most recent session}
Topics covered: {count} of {total} sessions
If any check fails, report the issue and offer to fix it.
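The Ollama model check above can be sketched as follows. The URL and model name follow the defaults used throughout; Ollama's /api/tags endpoint returns a models array with name fields:

```javascript
// Sketch: confirm the embedding model appears in Ollama's /api/tags listing.
function modelListed(tags, name) {
  return (tags.models || []).some((m) => m.name.startsWith(name));
}

async function checkOllama(baseUrl, model) {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok && modelListed(await res.json(), model);
  } catch {
    return false; // server not reachable counts as a failed check
  }
}

checkOllama("http://localhost:11434", "bge-m3").then((ok) =>
  console.log(ok ? "Ollama ✓ bge-m3 loaded" : "Ollama ✗ bge-m3 missing")
);
```

The startsWith match covers tagged model names like bge-m3:latest. (fetch is built into Node 18+, which the prerequisites already require.)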
- When editing ~/.claude.json or ~/.claude/settings.json, always read first and merge — never overwrite.
- Never suggest brew commands on Linux or apt commands on macOS.