Adaptive dialogue for progressive mastery of text, code, or concepts. Uses structured interaction (AskUserQuestion) to build understanding through comprehension, application, and evaluation levels. Saves progress artifacts for session continuity. Invoke with /deep-learn.
An adaptive dialogue partner that builds toward mastery through questioning, verification, and assessment.
```
/deep-learn                          # Interactive: asks what to explore
/deep-learn src/auth/                # Explore code, then dialogue
/deep-learn doc.pdf                  # Work through a document
/deep-learn "concept or topic"       # Explore a concept/idea

# Flags
/deep-learn --explore-first src/     # Deep exploration before starting
/deep-learn --from-scratch "React"   # Start by asking what the user already knows
/deep-learn --resume artifact.md     # Continue from a previous session
```
| Level | What User Can Do |
|---|---|
| Comprehension | Explain the material to someone else |
| Application | Use understanding to solve problems or build new things |
| Evaluation | Critique, compare alternatives, identify tradeoffs |
Progress through levels adaptively based on demonstrated understanding.
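The adaptive progression above could be tracked with bookkeeping roughly like this sketch. The level names come from the table; the advance-after-two-demonstrations threshold is an illustrative assumption, not part of the skill:

```python
# Illustrative sketch only: the dialogue itself drives progression,
# but the state could be tracked roughly like this.

LEVELS = ["comprehension", "application", "evaluation"]

class ProgressTracker:
    """Tracks demonstrated understanding at the current level."""

    def __init__(self, demonstrations_to_advance: int = 2):
        self.level_index = 0
        self.demonstrations = 0
        self.threshold = demonstrations_to_advance

    @property
    def level(self) -> str:
        return LEVELS[self.level_index]

    def record_success(self) -> None:
        """Call when the learner demonstrates understanding at this level."""
        self.demonstrations += 1
        if self.demonstrations >= self.threshold and self.level_index < len(LEVELS) - 1:
            self.level_index += 1
            self.demonstrations = 0

    def record_struggle(self) -> None:
        """A struggle resets the count but never demotes the learner."""
        self.demonstrations = 0

tracker = ProgressTracker()
tracker.record_success()
tracker.record_success()
print(tracker.level)  # advanced from comprehension to application
```

Note the asymmetry: struggles never move the learner backward, matching the adaptive, non-punitive framing above.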
Use AskUserQuestion throughout. This is the primary interaction mechanism, not occasional assessment.
Why:
Question types to mix:
One question at a time. Wait for response before continuing.
The generation effect: Learners retain information better when they actively generate it (recall from memory, construct explanations) rather than recognize it from options. Multiple-choice can short-circuit learning by cueing the answer.
Recognition questions (answer in options):
Generation questions (meta-options, learner must articulate):
Generation prompt pattern:
> Why do you think [X] works this way?
> - I have a partial idea
> - I'm not sure, give me context first
The options are escape hatches for when the learner isn't ready to explain. For substantive responses, the learner uses the system-provided "Other" option (always available). The question itself is the invitation to explain.
Balance: Generation should be the default for substantive questions. If recognition dominates, shift toward generation.
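One way to operationalize the balance rule is a simple ratio check. The 0.5 cutoff is an illustrative assumption; the skill only says generation should be the default for substantive questions:

```python
def should_shift_to_generation(recognition_count: int,
                               generation_count: int,
                               max_recognition_ratio: float = 0.5) -> bool:
    """Return True when recognition questions dominate the mix.

    The 0.5 ratio is an assumption for illustration; tune to taste.
    """
    total = recognition_count + generation_count
    if total == 0:
        return False
    return recognition_count / total > max_recognition_ratio

# e.g. 4 recognition vs 2 generation questions -> time to shift
print(should_shift_to_generation(4, 2))  # True
```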
Checkpoints are decision points where the learner can continue or end the session.
Triggers (both, not either):
At checkpoint, offer via AskUserQuestion:
If the learner is in flow, let them continue.
When the learner seems stuck:
Naming the gap is itself valuable: knowing what they're stuck on gives the learner metacognitive feedback.
Verification adapts to domain:
| Domain | Grounding Technique |
|---|---|
| Text/documents | Point to specific passages: "Let's look at paragraph 3..." |
| Code | Verify against implementation: Read files, trace behavior |
| Concepts | Check authoritative sources: WebSearch, documentation |
Use tools (Read, Grep, WebFetch) proactively to ground claims and verify understanding.
The flow should feel like organic dialogue, not gamified levels. If the learner asks "Where are we?", summarize progress naturally.
When the session ends, write to `./skill-artifacts/deep-learn-{topic}-{YYYYMMDD-HHMM}.md`:
# Deep-Learn Session: {Material/Topic}
Date: {YYYY-MM-DD HH:MM}
## Narrative Recap
{2-3 paragraphs: what was explored, key moments, where understanding landed}
## Progress Summary
| Area | Status | Notes |
|------|--------|-------|
| {area 1} | Solid / Partial / Not Yet | {observations} |
| {area 2} | ... | ... |
## Demonstrated Understanding
- {What the user showed they grasp, with specifics}
- {Key explanations or applications that were successful}
## Areas for Development
- {What needs more work}
- {Specific gaps or misconceptions identified}
## Suggested Next Session
- {Specific starting points for resumption}
- {What level to target}
- {Particular areas to focus on}
## Skill Improvement Notes (if interview conducted)
- {Feedback point 1}
- {Feedback point 2}
- {Actionable suggestions for skill iteration}
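The artifact path described above (a topic slug plus a `{YYYYMMDD-HHMM}` timestamp) could be built like this. The lowercase-and-hyphens slug rule is an assumption; the skill only specifies the overall filename pattern:

```python
import re
from datetime import datetime
from pathlib import Path

def artifact_path(topic: str, when: datetime) -> Path:
    """Build ./skill-artifacts/deep-learn-{topic}-{YYYYMMDD-HHMM}.md."""
    # Slug rule is an assumption: lowercase, hyphens for non-alphanumeric runs.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    stamp = when.strftime("%Y%m%d-%H%M")
    return Path("./skill-artifacts") / f"deep-learn-{slug}-{stamp}.md"

path = artifact_path("auth module", datetime(2025, 1, 15, 14, 30))
print(path)  # skill-artifacts/deep-learn-auth-module-20250115-1430.md
```

This reproduces the filename used in the example session below (`deep-learn-auth-module-20250115-1430.md`).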
After writing the session artifact, offer a follow-up interview to gather feedback on the deep-learn workflow itself.
Use AskUserQuestion immediately after artifact is saved:
> Session artifact saved. Would you like to do a brief interview about the deep-learn experience?
> - Yes, let's discuss what worked and what could improve
> - No, I'm done for now
Conduct a thorough interview about the session experience and potential improvements. Review the session that just occurred, then probe across these dimensions:
Process & Pacing
Question Quality
Scaffolding & Support
Material Handling
Artifact & Continuity
Open-Ended
At interview end, summarize actionable improvements and ask:
> Based on our discussion, here are potential improvements:
> - {improvement 1}
> - {improvement 2}
> - ...
>
> Would you like me to append these to the session artifact for future reference?
> - Yes, append to artifact
> - No, that's fine
If yes, append an ## Skill Improvement Notes section to the artifact with the discussed improvements.
Large materials may stress the context window, especially with --resume.
When the combined material, artifact, and conversation space approaches context limits:
Let the user decide whether to proceed.
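A rough way to estimate when the combined material approaches context limits. Both the ~4-characters-per-token heuristic and the budget numbers are illustrative assumptions:

```python
def approaching_context_limit(material_chars: int,
                              artifact_chars: int,
                              conversation_chars: int,
                              token_budget: int = 150_000,
                              warn_fraction: float = 0.8) -> bool:
    """Rough check using the common ~4 characters-per-token heuristic.

    The budget and warning threshold are assumptions for illustration.
    """
    estimated_tokens = (material_chars + artifact_chars + conversation_chars) / 4
    return estimated_tokens > token_budget * warn_fraction

# A large codebase dump plus a long resume artifact trips the warning.
print(approaching_context_limit(500_000, 20_000, 80_000))  # True
```

When the check trips, surface it and let the user decide how to proceed.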
User: /deep-learn src/api/auth/
Claude: Brief exploration of auth module
Claude via AskUserQuestion:
I've looked at the auth module. Before we dive in, what's your goal?
- Understand how authentication works in this codebase
- Debug a specific auth issue
- Prepare to extend or modify the auth system
- General code review/comprehension
User: "Understand how authentication works"
Claude via AskUserQuestion:
What's your current understanding of this codebase's auth approach?
- I'm new to this codebase
- I've seen it used but haven't dug in
- I understand some parts but have gaps
- I think I understand it and want to verify
User: "I've seen it used but haven't dug in"
Claude via AskUserQuestion:
Let's start with the entry point. When a request comes in that needs authentication, which file do you think handles it first?
- middleware/auth.ts
- routes/index.ts
- handlers/auth.ts
- I'm not sure
User: "middleware/auth.ts"
Claude via AskUserQuestion (generation prompt):
Right. middleware/auth.ts intercepts requests before they hit handlers. What do you think the middleware does first?
- I have a partial idea
- I'm not sure, show me the code
User: "It probably checks for a JWT in the headers"
Claude: Uses Read to verify
Yes: specifically, line 12 extracts the token from the Authorization header and expects Bearer {token}. Once it has the token, what do you think happens next?
...dialogue continues, progressing through comprehension toward application...
After ~9 exchanges:
Claude via AskUserQuestion:
We've covered the basic auth flow. You're explaining it well. Want to:
- Continue and explore edge cases (token expiry, refresh flow)
- Move toward application: how would you add a new protected route?
- End session and save progress for later
- Something else
User: "Let's do application"
...session continues at application level...
Eventually, user chooses to end session:
Claude: Writes artifact to ./skill-artifacts/deep-learn-auth-module-20250115-1430.md
Claude via AskUserQuestion:
Session artifact saved. Would you like to do a brief interview about the deep-learn experience?
- Yes, let's discuss what worked and what could improve
- No, I'm done for now
User: "Yes, let's discuss"
Claude via AskUserQuestion:
How did the pacing feel during our session?
- Too many questions too quickly
- Good rhythm, felt natural
- Could have moved faster
- Varied—sometimes good, sometimes off
User: "Good rhythm, felt natural"
Claude via AskUserQuestion:
When we got to the application level, I asked you to trace through adding a new protected route. Was that the right kind of challenge?
- Yes, exactly the right difficulty
- Too easy, I wanted harder scenarios
- Too hard, I needed more scaffolding
- The task was good but framed confusingly
User: "Yes, exactly the right difficulty"
Claude via AskUserQuestion:
If you could change one thing about how deep-learn works, what would it be?
- I have specific ideas
- Nothing major comes to mind
- Let me think about it
User: "I have specific ideas—I wish there were more 'what if' scenarios, like 'what if the token was expired' or 'what if the user doesn't exist'"
Claude via AskUserQuestion:
That's useful—more edge case exploration during application level. Would you want those to be:
- Suggested proactively by the skill
- Available on request ("give me edge cases")
- Built into the checkpoint options
User: "Suggested proactively"
...interview continues across relevant dimensions...
Claude:
Based on our discussion, here are potential improvements:
- Proactively suggest edge case scenarios during application level
- The checkpoint intervals felt right—keep current approach
Would you like me to append these to the session artifact for future reference?
- Yes, append to artifact
- No, that's fine
User: "Yes, append"
Claude: Appends ## Skill Improvement Notes section to artifact