Extract facts from meetings and update your knowledge base — person profiles, chronological log, and index. Use when the user asks "ingest my meetings", "update my knowledge base", "extract facts from meetings", "sync meetings to wiki", "backfill knowledge", or wants their PARA/Obsidian/wiki profiles updated from conversation data.
Process meetings through the knowledge extraction pipeline to update person profiles, append to the knowledge log, and maintain the index.
The `[knowledge]` section must be configured in `~/.config/minutes/config.toml`:
```toml
[knowledge]
enabled = true
path = "/path/to/knowledge/base"
adapter = "wiki"          # or "para", "obsidian"
engine = "none"           # or "agent" for LLM extraction
min_confidence = "strong"
```
If not configured, explain what's needed and offer to help set it up.
```sh
minutes ingest ~/meetings/2026-04-03-strategy-call.md   # a single meeting
minutes ingest --all                                    # every meeting
minutes ingest --all --dry-run                          # preview without writing
```
Key behaviors:

- Appends to `log.md` with a timestamped entry for each ingested meeting.
- `engine = "none"` (default): only extracts from parsed YAML frontmatter. No LLM involved, zero hallucination risk.
- Facts below `min_confidence` are counted as "skipped" but never written.
- Suggest `--dry-run` first if the user hasn't used ingest before.

Example output:

```
Ingesting 73 meeting(s) into knowledge base at /path/to/kb
2026-04-03-strategy.md — 4 written, 1 skipped — Mat, Dan
2026-04-05-standup.md — 2 written, 0 skipped — Alice
SKIP 2026-03-18-test.md: no frontmatter
Done. 6 fact(s) written, 1 skipped, 1 error(s), 3 people updated.
```
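The written/skipped split driven by `min_confidence` can be sketched like this. The level names other than `"strong"` are assumptions for illustration, not the CLI's actual confidence scale:

```python
# Illustrative sketch of the min_confidence gate; only "strong" is
# confirmed by the config above — the other level names are assumed.
CONFIDENCE_ORDER = ["weak", "moderate", "strong"]

def partition_facts(facts, min_confidence="strong"):
    """Split facts into (written, skipped) by confidence threshold."""
    threshold = CONFIDENCE_ORDER.index(min_confidence)
    written, skipped = [], []
    for fact in facts:
        if CONFIDENCE_ORDER.index(fact["confidence"]) >= threshold:
            written.append(fact)
        else:
            skipped.append(fact)
    return written, skipped

facts = [
    {"text": "Dan owns the Q3 roadmap", "confidence": "strong"},
    {"text": "Mat may switch teams", "confidence": "weak"},
]
written, skipped = partition_facts(facts)
print(len(written), len(skipped))  # → 1 1
```

Skipped facts are counted in the summary line but never reach the knowledge base, which matches the "1 skipped" tallies in the example output.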
Edge cases and notes:

- Some meetings legitimately have no `action_items` or `decisions`. The ingest will correctly extract 0 facts. This is expected, not an error.
- `engine = "agent"` requires an AI CLI: if the user wants richer LLM-based extraction from transcript body text, they need `claude`, `codex`, or `gemini` on PATH.
- PARA writes to `items.json`: if the user's knowledge base uses the PARA format, facts go into `areas/people/{slug}/items.json` with the atomic fact schema (`id`, `status`, `supersededBy`).
- Recommend `minutes ingest --all --dry-run` before the first real run so the user can see what would be extracted.
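The atomic fact schema named above (`id`, `status`, `supersededBy`) can be sketched as follows. The field defaults, the `supersede` helper, and the use of UUID hex ids are illustrative assumptions, not the actual `minutes` PARA adapter:

```python
# Sketch of an atomic fact record and a supersede operation,
# assuming (not confirmed) that "active"/"superseded" are the
# status values and ids are opaque strings.
import json
import uuid

def new_fact(text: str) -> dict:
    return {"id": uuid.uuid4().hex, "status": "active",
            "supersededBy": None, "text": text}

def supersede(items: list[dict], old_id: str, new_text: str) -> dict:
    """Mark an existing fact superseded and append its replacement."""
    replacement = new_fact(new_text)
    for fact in items:
        if fact["id"] == old_id:
            fact["status"] = "superseded"
            fact["supersededBy"] = replacement["id"]
    items.append(replacement)
    return replacement

items = [new_fact("Alice leads the platform team")]
supersede(items, items[0]["id"], "Alice leads the infra team")
print(json.dumps(items, indent=2))
```

Keeping superseded facts in place with a pointer to their replacement, rather than deleting them, preserves the chronological history the knowledge log depends on.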