Meta-skill for continuous AI self-improvement through structured feedback analysis. Creates domain-specific learning skills that accumulate actionable principles over time. Use this skill when: (1) The user wants the AI to learn and improve on a specific domain (e.g., VSL creation, sales coaching, content curation, code review). (2) The user provides examples of good vs bad outputs and wants the AI to extract reusable principles. (3) The user says things like "learn from this", "improve based on feedback", "remember this for next time", "analyze what works", "create a learning profile", "get better at X". (4) The user wants to create structured knowledge from repeated feedback loops. This skill does NOT replace domain expertise — it provides the methodology for extracting and storing transferable principles from user feedback.
A meta-skill that enables continuous self-improvement by creating domain-specific learning skills. Each domain skill accumulates validated, actionable principles from user feedback.
META-SKILL (this file)            DOMAIN SKILLS (generated)
Contains the METHODOLOGY   --->   Contains ACCUMULATED KNOWLEDGE
- Discovery questions             - Domain profile
- Analysis framework              - Validated principles
- Skill generation                - Learning history
When the user wants to improve on a new domain, run this structured interview. Ask questions in batches of 2-3 MAX to avoid overwhelming the user.
Ask:
Goal: Identify the domain name and its context.
Ask:
Goal: Identify the deliverable type, format, and technical stack. The technical stack is critical — every principle must have an implementation in this stack.
Ask:
Goal: Establish clear success criteria and feedback format.
Based on the domain, PROPOSE a list of analysis categories and ask the user to validate, reorder, add, or remove.
Example categories by domain:
Ask the user to prioritize them (priority #1 = most important for success).
Summarize all answers and ask for validation. Then generate the domain skill.
See references/domain-skill-template.md for the exact structure to generate.
The domain skill must be placed in the current workspace or the user's skills directory.
When the user provides feedback (examples, scores, comments), follow this process:
Gather all feedback: scores, comments, comparisons, specific annotations.
List every raw observation from the feedback. Be exhaustive.
For EACH observation, apply these 3 filters in order:
Observation --> "Winning video uses red"
    |
    +- FILTER 1: CONTEXTUAL?
    |    "Would this change if the product/subject changed?"
    |    YES --> REJECT (color depends on product)
    |    NO  --> Continue
    |
    +- FILTER 2: GENERALIZABLE?
    |    "Can this be expressed as a rule WITHOUT referencing
    |    specific content?"
    |    NO  --> REJECT (too specific)
    |    YES --> Continue
    |
    +- FILTER 3: STRUCTURAL?
         "Is this about HOW (structure, timing, hierarchy)
         rather than WHAT (subject, color, specific image)?"
         NO  --> REJECT
         YES --> ACCEPT as principle
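The filter cascade above can be sketched as a small triage function. The predicate arguments and the `triage` name are illustrative assumptions, since the real filters are judgment calls made during analysis rather than computable checks:

```python
from typing import Callable, Tuple

def triage(
    observation: str,
    is_contextual: Callable[[str], bool],
    is_generalizable: Callable[[str], bool],
    is_structural: Callable[[str], bool],
) -> Tuple[bool, str]:
    """Apply the three filters in order; return (accepted, reason).

    Each predicate stands in for a judgment call made during analysis.
    """
    if is_contextual(observation):
        return False, "REJECT: would change if the product/subject changed"
    if not is_generalizable(observation):
        return False, "REJECT: cannot be stated without specific content"
    if not is_structural(observation):
        return False, "REJECT: about WHAT, not HOW"
    return True, "ACCEPT as principle"

# Example: "winning video uses red" fails filter 1 (color depends on product).
accepted, reason = triage(
    "Winning video uses red",
    is_contextual=lambda o: "red" in o,   # stand-in judgment
    is_generalizable=lambda o: True,
    is_structural=lambda o: False,
)
```

The early-return chain mirrors the tree: the first failing filter rejects the observation with its reason, so only observations that survive all three become principles.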
For each accepted observation, create a principle with TWO LEVELS:
- Conceptual level: the rule in plain language
- Implementation level: the concrete code/config in the domain's tech stack
Example implementation (Remotion): `<Sequence durationInFrames={60}>` (2 sec at 30 fps)

For each principle:
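As a sketch of the two-level structure, a principle can be stored as a record that pairs the plain-language rule with its stack-specific implementation. The field names and numbering scheme below are assumptions for illustration, not mandated by this skill:

```python
# Hypothetical principle record pairing the two levels; the Remotion snippet
# in "implementation" is stored as text to be applied in the domain's stack.
principle = {
    "id": "P-001",                         # illustrative numbering scheme
    "conceptual": "Key beats land on a fixed rhythm, not ad hoc timing.",
    "implementation": "<Sequence durationInFrames={60}> {/* 2 sec at 30fps */}",
    "status": "validated",                 # vs. "candidate"
    "evidence_count": 3,                   # feedback sessions confirming it
}
```

Keeping both levels in one record means the conceptual rule survives a stack change, while the implementation level stays copy-pasteable for the current stack.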
Update references/principles.md in the domain skill:
Also update references/learning-log.md with the feedback session.

When producing output in a domain where a learning skill exists:
Load references/principles.md from the domain skill.
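Loading the accumulated principles at output time can be as simple as reading the file. The helper below is a minimal sketch; the function name and the empty-string fallback for a not-yet-created skill are assumptions:

```python
from pathlib import Path

def load_principles(skill_dir: str) -> str:
    """Return the domain skill's accumulated principles, or "" if none exist yet."""
    path = Path(skill_dir) / "references" / "principles.md"
    return path.read_text(encoding="utf-8") if path.exists() else ""
```

Returning an empty string rather than raising lets the same output flow run before any feedback session has populated the skill.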