End-of-session retrospective that analyzes conversation feedback and proposes changes to .claude/rules/, skills, or CLAUDE.md. Use when the user says /retro, asks to capture session learnings, or wants to turn repeated corrections into durable project configuration.
Analyze the current conversation to extract feedback that should become durable project rules or CLAUDE.md changes. The goal is to maintain concise, effective rules — making no changes is better than making low-value suggestions. If nothing in the conversation meets the bar, say so and move on.
The bar is high. Only surface feedback that meets both criteria:
Project-specific — Something Claude would get wrong again in a future session without explicit guidance. Architecture decisions, layering boundaries, naming conventions, domain concepts, API quirks, workflow requirements specific to this codebase.
Broadly applicable — Applies across large parts of the codebase or many future tasks, not a one-off fix for a single function. A rule about how all services should handle caching qualifies. A correction about one function's return type does not.
Read through the full conversation and identify every instance where the user corrected Claude's approach, repeated guidance given earlier, or expressed a preference about how work in this project should be done.
For each, note the user's words (not your interpretation) and the underlying principle.
Group related corrections into themes. A user who said "don't reach through the service to the DB," "the CLI shouldn't manage sync timestamps," and "service methods should build real API requests" is expressing one principle: the service layer is the abstraction boundary.
Generalize from specific instances to reusable rules. The rule should make sense to a future Claude session that has never seen this conversation.
Before proposing anything, read the existing configuration:
- `.claude/CLAUDE.md`
- `.claude/rules/`

If an existing rule already covers the feedback, skip it — or propose a refinement if the existing rule is incomplete.
Write the draft proposals to .nocommit/retro-proposals.md so Codex can review them. Use this format:
# Retro Proposals (Draft)
## Proposal N: <short title>
**Target:** `.claude/rules/<topic>.md` (new) | `.claude/rules/<existing>.md` (edit) | `.claude/CLAUDE.md` (edit)
**Source:** "<quote from user's feedback>"
**Why it's durable:** <why a future session would get this wrong without the rule>
**Change:**
<full file content for new rules, or the specific edit for updates>
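A filled-in draft might look like this (hypothetical example — the rule topic, quote, and file name are illustrative, drawn from the service-layer theme described earlier):

```markdown
# Retro Proposals (Draft)

## Proposal 1: Service layer is the abstraction boundary

**Target:** `.claude/rules/service-layer.md` (new)

**Source:** "don't reach through the service to the DB"

**Why it's durable:** A future session has no way to know the service layer
is the only sanctioned path to the database and would plausibly query it
directly again.

**Change:**

(full content of the new rule file)
```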
Writing the draft to disk gives Codex a concrete artifact to review, rather than having to reconstruct the proposals from conversation context.
If there are no proposals worth making, write that conclusion to the file ("No proposals — nothing in this session met the bar") and skip Steps 5–6. Tell the user and stop.
Invoke the Codex adversarial review skill to get a cross-model challenge of the draft proposals from GPT-5.4:
Skill(skill: "codex:adversarial-review", args: "--wait --scope auto focus on whether these retro proposals are truly durable, project-specific, and broadly applicable — challenge overly narrow rules, rules that restate generic best practices, rules that conflict with existing configuration, and rules whose feedback source doesn't support the generalization in .nocommit/retro-proposals.md")
The focus text steers Codex toward retro-quality concerns: are these rules worth adding? The --wait flag ensures results return before the next step.
The output is structured JSON with:
- `verdict` — overall assessment
- `summary` — brief narrative
- `findings` array — each with `severity`, `title`, `body`, `file`, `line_start`, `line_end`, `confidence`, `recommendation`
- `next_steps` array — suggested follow-up actions

Hold the full response for the incorporation step.
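A response might look like the following (the field values and exact verdict vocabulary are illustrative — only the field names above are given by the skill):

```json
{
  "verdict": "needs_changes",
  "summary": "Two proposals generalize beyond what their feedback source supports.",
  "findings": [
    {
      "severity": "high",
      "title": "Proposal 2 restates a generic best practice",
      "body": "The proposed rule is not project-specific and would apply to any codebase.",
      "file": ".nocommit/retro-proposals.md",
      "line_start": 14,
      "line_end": 22,
      "confidence": 0.8,
      "recommendation": "Drop the proposal or tie it to a convention unique to this project."
    }
  ],
  "next_steps": [
    "Check .claude/rules/ for an existing rule that overlaps with Proposal 1."
  ]
}
```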
If the Codex skill fails (authentication error, CLI unavailable, timeout), note the failure and continue to Step 7 (present proposals), skipping Step 6. Tell the user the Codex review was unavailable.
Process the Codex adversarial review findings and refine or cull the draft proposals. This step runs in the main conversation — no sub-agents.
Work through the findings in order of severity, deciding for each whether to revise, drop, or keep the affected proposal.
For next_steps from Codex: Scan for actionable items. Those that suggest checking existing rules for conflicts — do the check. Those that suggest the proposal is fine — move on.
Provenance: When modifying a proposal based on Codex feedback, add a brief [Codex] note explaining what changed and why so the user can evaluate the reasoning.
After incorporating all findings, finalize the proposals in memory for presentation.
Present each final proposal using this format:
### Proposal N: <short title>
**Target:** `.claude/rules/<topic>.md` (new) | `.claude/rules/<existing>.md` (edit) | `.claude/CLAUDE.md` (edit)
**Source:** "<quote from user's feedback>"
**Why it's durable:** <why a future session would get this wrong without the rule>
**Change:**
<full file content for new rules, or the specific edit for updates>
If any proposals were dropped or modified due to Codex feedback, briefly note this after the proposals.
Present all proposals at once so the user can review them together and approve or reject individually. Wait for the user to approve, reject, or modify each proposal, and apply only what's approved:

- New rules go in `.claude/rules/<topic>.md`. Use kebab-case filenames that describe the topic (e.g., `api-services.md`, `database-access.md`). Match the format of existing rule files — a heading, context, then specific guidelines.
- Edits to existing rules are applied in place under `.claude/rules/`.
- CLAUDE.md edits are applied to `.claude/CLAUDE.md`.
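A new rule file following that shape might look like this (filename and content are illustrative, built from the service-layer example earlier in this document):

```markdown
# Service Layer

All persistence and external API access in this project goes through the
service layer. The service layer is the abstraction boundary.

- Never reach through a service to the database; call a service method
  instead.
- Service methods build real API requests rather than delegating that to
  callers.
- Sync bookkeeping (e.g., sync timestamps) is owned by services, not the CLI.
```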
After applying changes, clean up the draft file (delete .nocommit/retro-proposals.md).