Document a recently solved problem to compound your team's knowledge
Coordinate multiple subagents working in parallel to document a recently solved problem.
Captures problem solutions while context is fresh, creating structured documentation in docs/solutions/ with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
Why "compound"? Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.
/ce:compound # Document the most recent fix
/ce:compound [brief context] # Provide additional context hint
These files are the durable contract for the workflow. Read them on-demand at the step that needs them — do not bulk-load at skill start.
<preconditions enforcement="advisory">
  <check condition="problem_solved"> Problem has been solved (not in-progress) </check>
  <check condition="solution_verified"> Solution has been verified working </check>
  <check condition="non_trivial"> Non-trivial problem (not simple typo or obvious error) </check>
</preconditions>

- references/schema.yaml — canonical frontmatter fields and enum values (read when validating YAML)
- references/yaml-schema.md — category mapping from problem_type to directory (read when classifying)
- assets/resolution-template.md — section structure for new docs (read when assembling)

When spawning subagents, pass the relevant file contents into the task prompt so they have the contract without needing cross-skill paths.
Present the user with two options before proceeding, using the platform's blocking question tool (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini). If no question tool is available, present the options and wait for the user's reply.
1. Full (recommended) — the complete compound workflow. Researches,
cross-references, and reviews your solution to produce documentation
that compounds your team's knowledge.
2. Lightweight — same documentation, single pass. Faster and uses
fewer tokens, but won't detect duplicates or cross-reference
existing docs. Best for simple fixes or long sessions nearing
context limits.
Do NOT pre-select a mode. Do NOT skip this prompt. Wait for the user's choice before proceeding.
If the user chooses Full, ask one follow-up question before proceeding. Detect which harness is running (Claude Code, Codex, or Cursor) and ask:
Would you also like to search your [harness name] session history
for relevant knowledge to help the Compound process? This adds
time and token usage.
If the user says yes, dispatch the Session Historian in Phase 1. If no, skip it. Do not ask this in lightweight mode.
<critical_requirement> The primary output is ONE file - the final documentation.
Phase 1 subagents return TEXT DATA to the orchestrator. They must NOT use Write, Edit, or create any files. Only the orchestrator writes files: the solution doc in Phase 2, and — if the Discoverability Check finds a gap — a small edit to a project instruction file (AGENTS.md or CLAUDE.md). The instruction-file edit is maintenance, not a second deliverable; it ensures future agents can discover the knowledge store. </critical_requirement>
Before launching Phase 1 subagents, check the auto-memory block injected into your system prompt for notes relevant to the problem being documented.
## Supplementary notes from auto memory
Treat as additional context, not primary evidence. Conversation history
and codebase findings take priority over these notes.
[relevant entries here]
If no relevant entries are found, proceed to Phase 1 without passing memory context.
Launch research subagents. Each returns text data to the orchestrator.
Dispatch order:
- Context Analyzer, Solution Extractor, and Related Docs Finder in parallel (background)
- session-historian in foreground — it reads session files outside the working directory that background agents may not have access to

<parallel_tasks>
Reads:
- references/schema.yaml for enum validation and track classification
- references/yaml-schema.md for category mapping into docs/solutions/[sanitized-problem-slug]-[date].md

Returns: the category: field (mapped from problem_type), the category directory path, the suggested filename, and which track applies.

The Solution Extractor reads references/schema.yaml for track classification (bug vs knowledge).

Bug track output sections:
Knowledge track output sections:
Searches docs/solutions/ for related documentation.

Search strategy (grep-first filtering for efficiency):
- Narrow to the docs/solutions/<category>/ directory
- title:.*<keyword>
- tags:.*(<keyword1>|<keyword2>)
- module:.*<module name>
- component:.*<component>

GitHub issue search:
Prefer the gh CLI for searching related issues: gh issue list --search "<keywords>" --state all --limit 5. If gh is not installed, fall back to the GitHub MCP tools (e.g., unblocked data_retrieval) if available. If neither is available, skip GitHub issue search and note it was skipped in the output.
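The grep-first pass can be sketched as follows. The fixture directory and keywords here are illustrative only — in the real workflow the filters run against the repo's actual docs/solutions/ tree with keywords drawn from the problem:

```shell
# Build a throwaway fixture so the filters have something to match.
mkdir -p /tmp/ce-demo/docs/solutions/performance-issues
cat > /tmp/ce-demo/docs/solutions/performance-issues/n-plus-one.md <<'EOF'
---
title: N+1 queries in brief generation
tags: [performance, activerecord]
module: brief_system
---
EOF

# Narrow to the category directory first, then filter on frontmatter fields.
grep -rl 'title:.*queries' /tmp/ce-demo/docs/solutions/performance-issues/
grep -rlE 'tags:.*(performance|query)' /tmp/ce-demo/docs/solutions/
grep -rl 'module:.*brief_system' /tmp/ce-demo/docs/solutions/
```

Only files surviving these cheap filters need to be read in full, which keeps the Related Docs Finder's token usage low.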
</parallel_tasks>
compound-engineering:research:session-historian runs in the foreground because it reads session files outside the working directory (~/.claude/projects/, ~/.codex/sessions/, ~/.cursor/projects/), which background agents may not have access to.

Pass it:
- A specific description of the problem being documented — not a generic topic, but the concrete issue (error messages, module names, what broke and how it was fixed). This is what the agent filters its findings against.
- The current git branch and working directory
- The instruction: "Only surface findings from prior sessions that are directly relevant to this specific problem. Ignore unrelated work from the same sessions or branches."
- The output format:
Structure your response with these sections (omit any with no findings):
- What was tried before: prior approaches to this specific problem
- What didn't work: failed attempts at this problem from prior sessions
- Key decisions: choices made about this problem and their rationale
- Related context: anything else from prior sessions that directly informs this problem's documentation
- Omit the mode parameter so the user's configured permission settings apply
- Use a smaller model where available (model: "sonnet" in Claude Code) — the synthesis feeds into compound assembly and doesn't need frontier reasoning

<sequential_tasks>
WAIT for all Phase 1 subagents to complete before proceeding.
The orchestrating agent (main conversation) performs these steps:
Collect all text results from Phase 1 subagents
Check the overlap assessment from the Related Docs Finder before deciding what to write:
| Overlap | Action |
|---|---|
| High — existing doc covers the same problem, root cause, and solution | Update the existing doc with fresher context (new code examples, updated references, additional prevention tips) rather than creating a duplicate. The existing doc's path and structure stay the same. |
| Moderate — same problem area but different angle, root cause, or solution | Create the new doc normally. Flag the overlap for Phase 2.5 to recommend consolidation review. |
| Low or none | Create the new doc normally. |
The reason to update rather than create: two docs describing the same problem and solution will inevitably drift apart. The newer context is fresher and more trustworthy, so fold it into the existing doc rather than creating a second one that immediately needs consolidation.
When updating an existing doc, preserve its file path and frontmatter structure. Update the solution, code examples, prevention tips, and any stale references. Add a last_updated: YYYY-MM-DD field to the frontmatter. Do not change the title unless the problem framing has materially shifted.
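A sketch of how the refreshed frontmatter might look, assuming the existing doc carries the standard schema fields (all values illustrative — the canonical field list lives in references/schema.yaml):

```yaml
---
title: N+1 queries in brief generation   # unchanged unless framing shifted
problem_type: performance_issue
module: brief_system
tags: [performance, activerecord]
date: 2026-01-10                         # original creation date, preserved
last_updated: 2026-03-24                 # added on refresh
---
```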
Incorporate session history findings (if available). When the Session Historian returned relevant prior-session context:
Assemble complete markdown file from the collected pieces, reading assets/resolution-template.md for the section structure of new docs
Validate YAML frontmatter against references/schema.yaml
Create directory if needed: mkdir -p docs/solutions/[category]/
Write the file: either the updated existing doc or the new doc
When creating a new doc, preserve the section order from assets/resolution-template.md unless the user explicitly asks for a different structure.
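The directory-creation and write steps above can be sketched as follows. The category, slug, and frontmatter values are illustrative placeholders — in the real workflow they come from the Phase 1 results and references/schema.yaml, and the body follows assets/resolution-template.md:

```shell
# Illustrative sketch of the orchestrator's write step.
category="performance-issues"
slug="n-plus-one-brief-generation"
date=$(date +%Y-%m-%d)

mkdir -p "docs/solutions/${category}"

# Frontmatter values are placeholders; validate against schema.yaml.
cat > "docs/solutions/${category}/${slug}-${date}.md" <<EOF
---
title: N+1 queries in brief generation
problem_type: performance_issue
module: brief_system
tags: [performance, activerecord]
date: ${date}
---
EOF

echo "docs/solutions/${category}/${slug}-${date}.md"
```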
</sequential_tasks>
After writing the new learning, decide whether this new solution is evidence that older docs should be refreshed.
ce:compound-refresh is not a default follow-up. Use it selectively when the new learning suggests an older learning or pattern doc may now be inaccurate.
It makes sense to invoke ce:compound-refresh when one or more of these are true:
It does not make sense to invoke ce:compound-refresh when:
Use these rules:
- Invoke ce:compound-refresh with a narrow scope hint after the new learning is written
- Recommend ce:compound-refresh as the next step with a scope hint

When invoking or recommending ce:compound-refresh, be explicit about the argument to pass. Prefer the narrowest useful scope:
- docs/solutions/patterns/

Examples:
- /ce:compound-refresh plugin-versioning-requirements
- /ce:compound-refresh payments
- /ce:compound-refresh performance-issues
- /ce:compound-refresh critical-patterns

A single scope hint may still expand to multiple related docs when the change is cross-cutting within one domain, category, or pattern area.
Do not invoke ce:compound-refresh without an argument unless the user explicitly wants a broad sweep.
Always capture the new learning first. Refresh is a targeted maintenance follow-up, not a prerequisite for documentation.
After the learning is written and the refresh decision is made, check whether the project's instruction files would lead an agent to discover and search docs/solutions/ before starting work in a documented area. This runs every time — the knowledge store only compounds value when agents can find it.
Identify which root-level instruction files exist (AGENTS.md, CLAUDE.md, or both). Read the file(s) and determine which holds the substantive content — one file may just be a shim that @-includes the other (e.g., CLAUDE.md containing only @AGENTS.md, or vice versa). The substantive file is the assessment and edit target; ignore shims. If neither file exists, skip this check entirely.
Assess whether an agent reading the instruction files would learn three things:
1. That docs/solutions/ exists and holds documented solutions to past problems
2. What it contains (bugs, best practices, workflow patterns, organized by category)
3. That it is searchable via YAML frontmatter (module, tags, problem_type)

This is a semantic assessment, not a string match. The information could be a line in an architecture section, a bullet in a gotchas section, spread across multiple places, or expressed without ever using the exact path docs/solutions/. Use judgment — if an agent would reasonably discover and use the knowledge store after reading the file, the check passes.
If the spirit is already met, no action needed — move on.
If not:

a. Based on the file's existing structure, tone, and density, identify where a mention fits naturally. Before creating a new section, check whether the information could be a single line in the closest related section — an architecture tree, a directory listing, a documentation section, or a conventions block. A line added to an existing section is almost always better than a new headed section. Only add a new section as a last resort, when the file has clear sectioned structure and nothing is even remotely related.

b. Draft the smallest addition that communicates the three things. Match the file's existing style and density. The addition should describe the knowledge store itself, not the plugin — an agent without the plugin should still find value in it.
Keep the tone informational, not imperative. Express timing as description, not instruction — "relevant when implementing or debugging in documented areas" rather than "check before implementing or debugging." Imperative directives like "always search before implementing" cause redundant reads when a workflow already includes a dedicated search step. The goal is awareness: agents learn the folder exists and what's in it, then use their own judgment about when to consult it.
Examples of calibration (not templates — adapt to the file):
When there's an existing directory listing or architecture section — add a line:
WAIT for Phase 2 to complete before proceeding.
<parallel_tasks>
Based on problem type, optionally invoke specialized agents to review the documentation:
- compound-engineering:review:performance-oracle
- compound-engineering:review:security-sentinel
- compound-engineering:review:data-integrity-guardian
- compound-engineering:review:code-simplicity-reviewer

Additionally, run the kieran reviewer that matches the repo's primary stack:
- compound-engineering:review:kieran-rails-reviewer
- compound-engineering:review:kieran-python-reviewer
- compound-engineering:review:kieran-typescript-reviewer

</parallel_tasks>
<critical_requirement> Single-pass alternative — same documentation, fewer tokens.
This mode skips parallel subagents entirely. The orchestrator performs all work in a single pass, producing the same solution document without cross-referencing or duplicate detection. </critical_requirement>
The orchestrator (main conversation) performs ALL of the following in one sequential pass:
- Read references/schema.yaml and references/yaml-schema.md, then determine track (bug vs knowledge), category, and filename
- Write docs/solutions/[category]/[filename].md using the appropriate track template from assets/resolution-template.md, with:
Lightweight output:
✓ Documentation complete (lightweight mode)
File created:
- docs/solutions/[category]/[filename].md
[If discoverability check found instruction files don't surface the knowledge store:]
Tip: Your AGENTS.md/CLAUDE.md doesn't surface docs/solutions/ to agents —
a brief mention helps all agents discover these learnings.
Note: This was created in lightweight mode. For richer documentation
(cross-references, detailed prevention strategies, specialized reviews),
re-run /ce:compound in a fresh session.
No subagents are launched. No parallel tasks. One file written.
In lightweight mode, the overlap check is skipped (no Related Docs Finder subagent). This means lightweight mode may create a doc that overlaps with an existing one. That is acceptable — ce:compound-refresh will catch it later. Only suggest ce:compound-refresh if there is an obvious narrow refresh target. Do not broaden into a large refresh sweep from a lightweight session.
Organized documentation:
docs/solutions/[category]/[filename].md

Categories auto-detected from problem:
Bug track:
Knowledge track:
| ❌ Wrong | ✅ Correct |
|---|---|
| Subagents write files like context-analysis.md, solution-draft.md | Subagents return text data; orchestrator writes one final file |
| Research and assembly run in parallel | Research completes → then assembly runs |
| Multiple files created during workflow | One solution doc written or updated: docs/solutions/[category]/[filename].md (plus an optional small edit to a project instruction file for discoverability) |
| Creating a new doc when an existing doc covers the same problem | Check overlap assessment; update the existing doc when overlap is high |
✓ Documentation complete
Auto memory: 2 relevant entries used as supplementary evidence
Subagent Results:
✓ Context Analyzer: Identified performance_issue in brief_system, category: performance-issues/
✓ Solution Extractor: 3 code fixes, prevention strategies
✓ Related Docs Finder: 2 related issues
✓ Session History: 3 prior sessions on same branch, 2 failed approaches surfaced
Specialized Agent Reviews (Auto-Triggered):
✓ performance-oracle: Validated query optimization approach
✓ kieran-rails-reviewer: Code examples meet Rails conventions
✓ code-simplicity-reviewer: Solution is appropriately minimal
File created:
- docs/solutions/performance-issues/n-plus-one-brief-generation.md
This documentation will be searchable for future reference when similar
issues occur in the Email Processing or Brief System modules.
What's next?
1. Continue workflow (recommended)
2. Link related documentation
3. Update other references
4. View documentation
5. Other
After displaying the success output, present the "What's next?" options using the platform's blocking question tool (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini). If no question tool is available, present the numbered options and wait for the user's reply before proceeding. Do not continue the workflow or end the turn without the user's selection.
Alternate output (when updating an existing doc due to high overlap):
✓ Documentation updated (existing doc refreshed with current context)
Overlap detected: docs/solutions/performance-issues/n-plus-one-queries.md
Matched dimensions: problem statement, root cause, solution, referenced files
Action: Updated existing doc with fresher code examples and prevention tips
File updated:
- docs/solutions/performance-issues/n-plus-one-queries.md (added last_updated: 2026-03-24)
This creates a compounding knowledge system:
The feedback loop:
Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
↑ ↓
└──────────────────────────────────────────────────────────────────────┘
Each unit of engineering work should make subsequent units of work easier—not harder.
<auto_invoke> <trigger_phrases> - "that worked" - "it's fixed" - "working now" - "problem solved" </trigger_phrases>
<manual_override> Use /ce:compound [context] to document immediately without waiting for auto-detection. </manual_override> </auto_invoke>
Writes the final learning directly into docs/solutions/.
Based on problem type, these agents can enhance documentation:
- /research [topic] - Deep investigation (searches docs/solutions/ for patterns)
- /ce:plan - Planning workflow (references documented solutions)
- docs/solutions/[category]/[filename].md

docs/solutions/    # documented solutions to past problems (bugs, best practices, workflow patterns), organized by category with YAML frontmatter (module, tags, problem_type)
When nothing in the file is a natural fit — a small headed section is appropriate:
## Documented Solutions
`docs/solutions/` — documented solutions to past problems (bugs, best practices, workflow patterns), organized by category with YAML frontmatter (`module`, `tags`, `problem_type`). Relevant when implementing or debugging in documented areas.
c. In full mode, explain to the user why this matters — agents working in this repo (including fresh sessions, other tools, or collaborators without the plugin) won't know to check docs/solutions/ unless the instruction file surfaces it. Show the proposed change and where it would go, then use the platform's blocking question tool (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini) to get consent before making the edit. If no question tool is available, present the proposal and wait for the user's reply. In lightweight mode, output a one-liner note and move on