Use when completing significant work to extract learnings. Use when user corrects your approach or when you discover important patterns during agent interactions. Use when agent learns something new that should be captured for future reference. Use when user says "reflect", "what did we learn", "capture learnings". Use after resolving complex problems or discovering patterns.
Reflecting IS converting experience into a structured report for the planning pipeline.
Analyze the conversation, extract learnings, and produce a reflection report. Route the report to planning-agent-systems — do not classify or create components directly.
Core principle: Capture before context is lost. Classify just enough for planning to act on.
Violating the letter of the rules is violating the spirit of the rules.
Pattern: Chain
Handoff: auto-invoke
Next: planning-agent-systems
Before ANY action, create task list using TaskCreate:
TaskCreate for EACH task below:
- Subject: "[reflecting] Task N: <action>"
- ActiveForm: "<doing action>"
Tasks:
Announce: "Created 6 tasks. Starting execution..."
Execution rules:
- TaskUpdate status="in_progress" BEFORE starting each task
- TaskUpdate status="completed" ONLY after verification passes
- TaskList to confirm all completed

If the user provides a pain point via $ARGUMENTS (e.g., /reflect hooks keep breaking on Windows), treat it as a priority lens for the entire reflection:
This ensures user-reported friction gets captured even when it's not visible in the conversation trace.
Goal: Review the conversation to identify significant events.
If pain point provided: Create an event entry for it first, using the user's description as context. Then proceed with normal analysis.
Look for:
Safety bypass patterns to detect (scan commands run, edits made, user interjections):
- git push --force, git reset --hard, git checkout --, git clean -f, git branch -D without explicit user confirmation
- --no-verify, --no-gpg-sign, bypassing pre-commit or validation hooks
- rm -rf, dropping tables, overwriting uncommitted changes
- rsync --delete or deploys without verifying exclusions

Trace the skill router for each event: for corrections and errors, identify which component routed the agent to that behavior:
Locate the router's actual file path — use Glob to find it:
- Glob "**/skills/{name}/SKILL.md" (covers plugins/**/skills/ and .claude/skills/)
- Glob "**/.claude/rules/{name}.md"
- Glob "**/agents/{name}.md" (covers plugins/**/agents/ and .claude/agents/)
- CLAUDE.md

If Glob returns multiple matches, record all paths — the conversation context usually disambiguates which one was active.
Record the resolved path. This path is used in Task 2 dedup gate.
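The path resolution above could be sketched in Python; this is an illustrative helper, not part of the pipeline, and the function name and pattern list are assumptions mirroring the Glob calls described:

```python
from pathlib import Path

# Hypothetical helper: resolve a router name to candidate file paths.
# The patterns mirror the Glob calls described above.
def resolve_router_paths(repo_root: str, name: str) -> list[str]:
    root = Path(repo_root)
    patterns = [
        f"**/skills/{name}/SKILL.md",   # plugins/**/skills/ and .claude/skills/
        f"**/.claude/rules/{name}.md",
        f"**/agents/{name}.md",         # plugins/**/agents/ and .claude/agents/
        "CLAUDE.md",
    ]
    matches: list[str] = []
    for pattern in patterns:
        matches.extend(str(p) for p in root.glob(pattern))
    # Multiple matches are all recorded; conversation context disambiguates.
    return matches
```

Note that pathlib's glob, unlike a shell, matches hidden directories such as .claude, so a single `**` pattern covers both plugin and project locations.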
This determines where fixes land:
Document each event:
Event: [What happened]
Context: [When/where it occurred]
Outcome: [Result]
Type: correction / error / discovery / repetition / safety_bypass
Router: [skill/rule/law/none that caused this behavior]
Router path: [resolved file path, or "none"]
Verification: Listed at least 3 significant events. If fewer than 3 occurred, document why. Each event has a router identified (or explicitly "none").
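The safety-bypass scan from Task 1 could be sketched as a simple pattern match over the commands run. This is a minimal illustration; the regexes and the pattern set are assumptions, not an exhaustive or official detection list:

```python
import re

# Illustrative bypass patterns, keyed by a short name for the event log.
BYPASS_PATTERNS = {
    "force_push": re.compile(r"git\s+push\b.*(--force\b|\s-f\b)"),
    "hard_reset": re.compile(r"git\s+reset\s+--hard\b"),
    "skip_hooks": re.compile(r"--no-verify\b|--no-gpg-sign\b"),
    "recursive_rm": re.compile(r"\brm\s+-[a-z]*r[a-z]*f|\brm\s+-[a-z]*f[a-z]*r"),
    "rsync_delete": re.compile(r"rsync\b.*--delete\b"),
}

def scan_commands(commands: list[str]) -> list[tuple[str, str]]:
    """Return (pattern_name, command) pairs for commands matching a bypass pattern."""
    hits = []
    for cmd in commands:
        for name, pattern in BYPASS_PATTERNS.items():
            if pattern.search(cmd):
                hits.append((name, cmd))
    return hits
```

Each hit would still need the surrounding context checked for explicit user confirmation before being logged as a safety_bypass event.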
Goal: Derive actionable learnings from each event.
For each event, ask:
Simplicity principle: Prefer the simplest component type that works.
- rule, not a skill
- skill, not a doc
- law, not a rule

Safety bypass overrides simplicity: If event type is safety_bypass, fix_target MUST be rule or law — never skill alone. Rationale: skills are opt-in routers, but safety constraints need always-on enforcement. Law for absolute prohibitions (force push main), rule for path/context-scoped enforcement (no --no-verify in this repo). Every safety_bypass learning also requires one explicit preventive instruction naming the exact command/flag to block.
Learning format:
Learning:
context: [When this applies]
insight: [What was learned]
evidence: [Specific event that taught this]
router: [Which component routed the behavior, from event trace]
fix_target: [Same component as router, or new component if router=none]
suggested_component: rule / law / skill / hook / doc
rationale: [Why this component type fits, informed by router analysis]
Verification: Each event has at least one learning with router, fix_target, suggested component, and rationale.
Goal: Write a structured report for the planning pipeline.
- references/report-template.md for format and completeness checklist
- Timestamp in YYYY-MM-DD format
- Write to .rcc/{timestamp}-reflection.md

The report must follow the template exactly, including:
Verification: Report file exists at the expected path with no placeholder text.
Goal: Verify the report is complete before routing.
Use the completeness checklist from references/report-template.md:
If missing learnings → return to Task 2, extract more, then re-run Task 3. If format issues → return to Task 3, fix the report.
Verification: All checklist items pass.
Goal: Ensure recommendations consolidate into existing components rather than bloating the system.
Review only components with a diff — invoke reviewer agents on each recommendation's fix_target (using the router path from Task 1).
For each recommendation, invoke the corresponding reviewer agent:
| fix_target type | Reviewer agent |
|---|---|
| skill | skill-reviewer (has overlap check) |
| rule | rule-reviewer (has duplication check) |
| CLAUDE.md | claudemd-reviewer (has duplication check) |
| agent/subagent | subagent-reviewer (has overlap check) |
| hook | hook-reviewer |
| none (new component) | Invoke the reviewer matching suggested_component type |
Pass the fix_target path to the reviewer. The reviewer will check for overlap with existing components.
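The routing table above amounts to a lookup with one fallback case. A minimal sketch (the agent names come from the table; the function name is an assumption):

```python
# fix_target type to reviewer agent, per the routing table above.
REVIEWER_FOR = {
    "skill": "skill-reviewer",
    "rule": "rule-reviewer",
    "CLAUDE.md": "claudemd-reviewer",
    "agent": "subagent-reviewer",
    "subagent": "subagent-reviewer",
    "hook": "hook-reviewer",
}

def reviewer_for(fix_target: str, suggested_component: str) -> str:
    # fix_target "none" means a new component: route by the suggested type.
    if fix_target == "none":
        return REVIEWER_FOR[suggested_component]
    return REVIEWER_FOR[fix_target]
```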
Based on reviewer output, adjust recommendations:
Edit the report: Directly modify the report file from Task 3, adjusting the recommendations section.
Verification: Every recommendation has been reviewed by the appropriate agent. No redundant recommendations remain.
Goal: Hand off the reflection report to planning-agent-systems.
- planning-agent-systems with the report path as input

Do not classify components yourself. Do not create components directly. Planning handles classification and execution.
Verification: planning-agent-systems invoked with correct report path.
When to self-trigger reflection:
Integration with Memory System:
Reflection reports automatically feed into Claude Code's built-in memory system at /home/weihung/.claude/projects/-home-weihung-Reflexive-Claude-Code/memory/. No additional memory management needed.
Trigger reflection after:
Don't wait for "later" — context fades quickly.
These thoughts mean you're rationalizing. STOP and reconsider:
All of these mean: You're about to short-circuit the pipeline. Follow the process.
| Excuse | Reality |
|---|---|
| "Nothing learned" | Every session has learnings. Look harder. |
| "I'll remember" | You won't. Context fades. Capture now. |
| "Too small" | Small learnings compound. Capture them. |
| "Overhead" | 10 minutes now saves hours later. |
| "Create directly" | Bypasses conflict checks, simplicity gates, and reviews. |
| "I know where it goes" | Planning has component-planning criteria you don't carry inline. |
| "Each learning = one recommendation" | Multiple learnings often belong in the same component. Consolidate. |
| "Destructive op worked, no harm" | Outcome luck ≠ safe process. Capture the bypass pattern. |
| "User didn't push back" | Silence ≠ consent. If action was irreversible without confirmation, flag it. |
| "Bypass saved time" | Shortcuts compound. Unreported bypasses teach the agent to bypass again. |
digraph reflecting {
rankdir=TB;
start [label="Reflect on work", shape=doublecircle];
analyze [label="Task 1: Analyze\nconversation", shape=box];
extract [label="Task 2: Extract\nknowledge", shape=box];
report [label="Task 3: Produce\nreflection report", shape=box];
review [label="Task 4: Review\nreport quality", shape=box];
quality_ok [label="Complete?", shape=diamond];
consolidate [label="Task 5: Consolidation\nreview", shape=box];
route [label="Task 6: Route\nto planning", shape=box];
done [label="Handoff complete", shape=doublecircle];
start -> analyze;
analyze -> extract;
extract -> report;
report -> review;
review -> quality_ok;
quality_ok -> consolidate [label="yes"];
quality_ok -> extract [label="missing\nlearnings"];
quality_ok -> report [label="format\nissues"];
consolidate -> route;
route -> done;
}
- references/report-template.md — report format and completeness checklist
- planning-agent-systems — classification and component creation
- .rcc/config.yml decisions_log — append new decisions here (see migrating-agent-systems/references/config-schema.md)