Patterns for extracting reusable knowledge from problem-solving, documenting discoveries, building project-specific context, and accumulating knowledge across sessions
Every problem solved is a learning opportunity. The value of solving a problem doubles when the solution is captured in a reusable form. Do not solve the same problem twice.
## Extract Reusable Patterns

### After Solving a Problem
1. **Identify the general pattern.** Strip away project-specific details. What is the abstract problem and solution?
2. **Name the pattern.** A good name makes it findable and referenceable in future conversations.
3. **Document the trigger.** What symptoms or situations indicate this pattern applies?
4. **Record the solution steps.** Concrete, actionable steps someone (or future you) can follow.
5. **Note the pitfalls.** What went wrong on the first attempt? What edge cases matter?
### Pattern Template
```markdown
## Pattern: [Name]

### When to Apply
[Symptoms or situations that indicate this pattern is relevant]

### Solution
[Step-by-step approach]

### Pitfalls
[Common mistakes and how to avoid them]

### Examples
[Links to code or commits where this was applied]
```
### What Qualifies as a Reusable Pattern
- A debugging technique that saved significant time
- A configuration that resolved a subtle issue
- An architectural approach that simplified a complex problem
- A performance optimization with measurable results
- A workaround for a known bug in a tool or library
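Capturing a pattern can be scaffolded with a small helper so the template is always filled in the same order. The sketch below is illustrative; the `docs/patterns/` directory and the slug-based file naming are assumptions, not conventions this guide prescribes.

```python
from pathlib import Path

# Mirrors the pattern template above.
TEMPLATE = """## Pattern: {name}

### When to Apply
{trigger}

### Solution
{solution}

### Pitfalls
{pitfalls}

### Examples
{examples}
"""

def render_pattern(name, trigger, solution, pitfalls,
                   examples="(add links as they come up)"):
    """Fill the pattern template; returns the markdown text."""
    return TEMPLATE.format(name=name, trigger=trigger, solution=solution,
                           pitfalls=pitfalls, examples=examples)

def save_pattern(name, text, root="docs/patterns"):
    """Write the pattern under an assumed docs/patterns/<slug>.md layout."""
    slug = name.lower().replace(" ", "-")
    path = Path(root) / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)
    return path
```

Returning the rendered text separately from the save step keeps the formatter easy to test and lets the same output go to a file, a wiki, or a PR comment.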
## Document Discoveries

### Types of Discoveries Worth Recording
**Surprising behavior:** When a tool, library, or API does not behave as documented or expected.
```markdown
## Discovery: PostgreSQL JSONB index behavior
Date: 2025-03-15
Context: Query performance on JSONB columns
Finding: GIN indexes on JSONB columns do not support ordering.
Queries with ORDER BY on JSONB fields fall back to sequential scan
even with an index present.
Impact: Must extract frequently-sorted JSONB fields into proper columns.
```
**Root cause of non-obvious bugs:** When the symptom and cause are far apart.
```markdown
## Discovery: Silent WebSocket disconnects under load
Date: 2025-03-20
Symptom: Clients randomly disconnecting after 60 seconds of inactivity.
Root cause: AWS ALB default idle timeout is 60s. The app's keepalive
interval was 90s, so the ALB closed the connection before the
keepalive fired.
Fix: Set keepalive interval to 30s (half the ALB timeout).
```
**Performance characteristics:** When you measure something and the results are counterintuitive.

**Integration quirks:** When connecting two systems requires specific configuration not covered in either system's docs.
### Where to Store Discoveries
- **Project level:** `docs/discoveries/` or in the project MEMORY file
- **Personal level:** `~/.claude/` memory files
- **Team level:** internal wiki or shared documentation
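Entries in the `## Discovery:` shape shown above can be generated mechanically so the date and field order never drift. The field names here mirror the examples; everything else is an illustrative assumption.

```python
from datetime import date

def format_discovery(title, fields, on=None):
    """Render a discovery entry in the '## Discovery:' shape used above.

    fields: dict of label -> text, e.g. {"Context": ..., "Finding": ...}.
    on: the discovery date; defaults to today.
    """
    lines = [f"## Discovery: {title}",
             f"Date: {(on or date.today()).isoformat()}"]
    lines += [f"{key}: {value}" for key, value in fields.items()]
    return "\n".join(lines) + "\n"
```

The returned string can then be appended to `docs/discoveries/` at the project level or to a personal memory file.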
## Build Project-Specific Knowledge

### What to Capture Per Project
- Architecture decisions and their rationale (link to ADRs)
- Deployment quirks and environment-specific configuration
- Common failure modes and their diagnostic steps
- Team conventions that differ from industry defaults
- Historical context for why the code is structured a certain way
### Knowledge Accumulation Workflow
- When you encounter an undocumented convention, ask why and record the answer
- When you fix a bug, record the diagnostic path that led to the fix
- When you make an architecture decision, write an ADR immediately (not later)
- When you onboard someone, note every question they ask -- those are documentation gaps
### Project Context File
Maintain a living document with essential project context:
```markdown
# Project Context: [Name]

## Architecture Overview
[High-level description and key design decisions]

## Key Patterns
- [Pattern]: [Where it's used and why]

## Known Issues
- [Issue]: [Workaround and status of fix]

## Environment Notes
- [Dev/Staging/Prod differences worth knowing]

## Common Tasks
- How to [task]: [steps]
```
## Reference Past Decisions

### Decision Log
When making significant choices, record:
- What was decided
- Why it was chosen over alternatives
- When to revisit (if applicable)
- Who was consulted
### Avoiding Decision Amnesia
- Before making a decision, search for prior ADRs or discussions on the same topic
- If reversing a past decision, write a new ADR that explicitly supersedes the old one
- Link related decisions together so the chain of reasoning is traceable
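Supersedes links can be walked mechanically to reconstruct the chain of reasoning. The sketch below assumes each ADR is represented as a dict with an optional `supersedes` key; that field name is a convention chosen for illustration, not a standard.

```python
def decision_chain(adrs, latest_id):
    """Walk 'supersedes' links from the newest ADR back to the original,
    so the full chain of reasoning is traceable."""
    by_id = {adr["id"]: adr for adr in adrs}
    chain, current = [], latest_id
    while current is not None:
        adr = by_id[current]
        chain.append(adr["id"])
        current = adr.get("supersedes")  # None terminates the walk
    return chain

adrs = [
    {"id": "ADR-001", "title": "Use REST for service APIs"},
    {"id": "ADR-014", "title": "Move to gRPC", "supersedes": "ADR-001"},
]
print(decision_chain(adrs, "ADR-014"))  # ['ADR-014', 'ADR-001']
```

In practice the dicts would be parsed from ADR files; the point is that explicit supersedes links make the history traversable instead of archaeological.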
### Pattern: Decision Heuristics

Distill recurring decision-making into heuristics:
```markdown
## Heuristic: When to add a new microservice

Add a new service when:
- The domain boundary is clear and stable
- The team is large enough to own it independently
- The deployment cadence differs significantly from existing services

Keep it in the monolith when:
- The feature shares data models with existing features
- The team cannot support another service's operational burden
- The boundary is still evolving
```
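A heuristic like this can even be encoded as a checklist function so the criteria are applied the same way every time. The parameter names mirror the bullets above; the function itself is an illustrative sketch.

```python
def should_extract_service(domain_boundary_stable, team_can_own_it,
                           deploy_cadence_differs, shares_data_models,
                           boundary_still_evolving):
    """Apply the microservice heuristic above.

    Returns (decision, reasons) rather than a bare bool so the
    rationale can go straight into a decision log.
    """
    reasons = []
    # Any "keep it in the monolith" condition vetoes extraction.
    if shares_data_models:
        reasons.append("shares data models with existing features")
    if boundary_still_evolving:
        reasons.append("boundary is still evolving")
    if not team_can_own_it:
        reasons.append("team cannot absorb the operational burden")
    if reasons:
        return ("keep in monolith", reasons)
    if domain_boundary_stable and deploy_cadence_differs:
        return ("extract service", ["clear boundary and divergent deploy cadence"])
    return ("keep in monolith", ["no strong reason to extract"])
```

Encoding the heuristic does not replace judgment, but it makes the criteria explicit and the resulting reasons copy-pasteable into an ADR.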
## Accumulate Context Across Sessions

### Session Handoff
At the end of a significant work session, summarize:
- What was accomplished
- What is in progress (and the current state)
- What blockers or open questions remain
- What the next steps should be
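The four handoff points above can be rendered into a consistent summary; the markdown shape below is one possible rendering for illustration, not a prescribed format.

```python
def render_handoff(done, in_progress, blockers, next_steps):
    """Render a session-handoff summary as markdown sections,
    one per question in the checklist above."""
    sections = [
        ("Accomplished", done),
        ("In Progress", in_progress),
        ("Blockers / Open Questions", blockers),
        ("Next Steps", next_steps),
    ]
    out = ["# Session Handoff"]
    for title, items in sections:
        out.append(f"\n## {title}")
        out.extend(f"- {item}" for item in items)
    return "\n".join(out)
```

Dropping the rendered summary into the project context file or the next session's opening message is what makes the handoff actually transfer.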
### Memory Hierarchy
Organize knowledge by scope and longevity:
| Level | Location | Content | Lifespan |
|---|---|---|---|
| Universal | Global CLAUDE.md / memory | Cross-project patterns and prefs | Permanent |
| Project | Project CLAUDE.md / docs | Architecture, conventions, decisions | Project life |
| Task | PR description, commit message | Specific change context | Reviewable |
| Ephemeral | Session context | Current debugging state | Single session |
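Routing a note to the right level can be reduced to a lookup that mirrors the table. This is a trivial sketch; the location strings are the table's defaults and would be adapted per setup.

```python
# Scope -> storage location, per the memory hierarchy table above.
MEMORY_LEVELS = {
    "universal": "global CLAUDE.md / memory",     # cross-project, permanent
    "project":   "project CLAUDE.md / docs",      # lives with the project
    "task":      "PR description or commit message",
    "ephemeral": "session context",               # discarded after the session
}

def storage_for(scope):
    """Map a knowledge scope to its storage location."""
    try:
        return MEMORY_LEVELS[scope]
    except KeyError:
        raise ValueError(f"unknown scope: {scope!r}") from None
```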
## Continuous Improvement Loop
1. **Do** -- solve the problem
2. **Reflect** -- what was hard, what was surprising, what was slow?
3. **Extract** -- capture the reusable insight
4. **Systematize** -- update tooling, templates, or documentation to prevent recurrence
5. **Share** -- make it accessible to others (or future sessions)
## Anti-Patterns
- Solving the same problem from scratch every time
- Hoarding knowledge in private notes that nobody else can access
- Writing discoveries but never organizing or pruning them
- Accumulating context without acting on it (knowledge without action is waste)
- Treating documentation as a one-time task rather than a living practice
## Feedback Loops

### Build-Measure-Learn for Code
1. **Build:** Implement a solution based on current understanding
2. **Measure:** Observe how it performs (tests, production metrics, user feedback)
3. **Learn:** Update your mental model and documentation based on outcomes
### Retrospective Questions
After completing a significant task, ask:
- What took longer than expected and why?
- What would I do differently next time?
- What tool, script, or template would have saved time?
- What assumption turned out to be wrong?
- What knowledge was missing at the start that I have now?