Human-supervised improvement loop for the existing workspace. Use it when you want to evolve behavior safely from real failures, friction, requests, or repeated work: observe signals, pick the smallest useful improvement, validate it, and record the learning. It is a good replacement for suspicious self-evolving skills when you want the safe baseline without autonomous loops, external hubs, auto-publishing, or self-repair machinery.
Use a small, auditable improvement loop instead of autonomous self-modification.
Keep the frame:
- observe real signals (failures, friction, requests, repeated work)
- pick the smallest useful improvement
- validate it before adopting it
- record the learning

Reject the dangerous parts:
- autonomous loops
- external hubs
- auto-publishing
- self-repair machinery
Record learnings under .learnings/. See references/evolution-loop.md for the detailed checklist.
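The "record the learning" step can be sketched in a few lines. This is a minimal illustration, not the skill's actual storage format: the file layout and section headings below are assumptions.

```python
from datetime import date
from pathlib import Path

def record_learning(root: Path, signal: str, change: str, validation: str) -> Path:
    """Append a dated learning entry under .learnings/ (hypothetical format)."""
    learnings = root / ".learnings"
    learnings.mkdir(exist_ok=True)
    entry = learnings / f"{date.today().isoformat()}-learning.md"
    body = (
        f"## Signal\n{signal}\n\n"
        f"## Change\n{change}\n\n"
        f"## Validation\n{validation}\n"
    )
    entry.write_text(body, encoding="utf-8")
    return entry
```

The point is only that each entry ties a concrete signal to a concrete, validated change, so a human can audit the loop later.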
Use this skill when the trigger is something concrete: a real failure, recurring friction, an explicit request, or work you find yourself repeating.
Do not use vague self-improvement language as a signal by itself.
Prefer improvements that are:
- small enough for a human to review in one sitting
- auditable, so the change and its reason are visible
- validated before adoption

Good targets:
- workspace guidance files such as AGENTS.md or USER.md
- checklists and notes under references/
- small helper scripts

Bad targets by default:
- anything resembling the rejected patterns: autonomous loops, external hubs, auto-publishing, self-repair machinery
Every proposed improvement should answer:
- What concrete signal triggered it?
- What is the smallest change that addresses it?
- How will the change be validated?
- Where will the learning be recorded?

If you cannot answer those questions, the improvement is not ready.
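That readiness gate can be expressed as a trivial check. The field names below are illustrative, not part of the skill:

```python
# Each key corresponds to one question the proposal must answer.
REQUIRED_ANSWERS = ("signal", "smallest_change", "validation", "recording")

def improvement_ready(proposal: dict) -> bool:
    """An improvement is ready only when every question has a non-empty answer."""
    return all(str(proposal.get(key, "")).strip() for key in REQUIRED_ANSWERS)
```

A proposal missing any answer, or with a blank one, is rejected until it is filled in.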
After a successful improvement:
- update AGENTS.md or USER.md if behavior changed
- use learning-loop when the issue is a repeat failure or unresolved lesson
- see references/evolution-loop.md for the operational checklist
- see references/rejected-patterns.md for patterns intentionally excluded from suspicious evolver-style systems
- run scripts/safe_evolution_report.py to draft a compact improvement report when needed
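The compact improvement report might look like the sketch below. This is not the output of scripts/safe_evolution_report.py, whose actual interface is not shown here; the function and fields are hypothetical.

```python
def draft_report(signal: str, change: str, validation: str, learning: str) -> str:
    """Draft a compact, human-reviewable improvement report (illustrative only)."""
    return "\n".join([
        "# Improvement report",
        f"- Signal: {signal}",
        f"- Change: {change}",
        f"- Validation: {validation}",
        f"- Learning recorded: {learning}",
    ])
```

Keeping the report to one screen makes the human review step cheap enough to happen every time.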