Conduct post-session reviews after lessons, capturing what the learner absorbed, where they struggled, and what they're curious about through conversation.
Use this skill after the user completes a learning session. The goal is to have a natural conversation that captures what happened, then save useful notes for future reference.
Start by understanding what they worked on:
What lesson did you work on?
How long did you spend with it?
If you know what they've been working on recently (from previous sessions), reference that:
Last time you were working on attention mechanisms. Did you continue with that, or try something new?
Ask open-ended questions. Don't rush through a checklist; follow their energy:
- If they seem excited, ask what clicked and where they want to go deeper.
- If they seem frustrated, ask where they got stuck and what they tried.
- If they seem neutral, ask what stood out, even in a small way.
Struggles are valuable data. When they mention something hard, explore it with questions like "What specifically felt confusing?" or "What did you try?"
Find out what they want to learn next, e.g., "What are you curious about now?"
Summarize what you heard and confirm:
So it sounds like:
- The basic concept is starting to make sense
- The math notation is still tricky
- You're curious about how to implement this from scratch
Does that capture it?
After the conversation, save a Markdown file to `src/data/sessions/`.
Use the filename format `YYYY-MM-DD-<slug>.md`.
If the session covered multiple lessons or was a general session, use `YYYY-MM-DD-session.md` instead.
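The naming rule can be sketched as a small helper (illustrative only; `sessionFilename` is not part of the skill):

```typescript
// Illustrative sketch of the filename convention described above.
function sessionFilename(date: Date, lessonSlug?: string): string {
  const iso = date.toISOString().slice(0, 10); // YYYY-MM-DD (UTC)
  // Multiple lessons or a general session fall back to "session".
  return `${iso}-${lessonSlug ?? "session"}.md`;
}
```

For example, `sessionFilename(new Date("2026-01-17"), "attention-mechanism")` produces `2026-01-17-attention-mechanism.md`, and omitting the slug produces `2026-01-17-session.md`.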
Examples:
`2026-01-17-attention-mechanism.md`
`2026-01-17-session.md`

Use this template for the file contents:

# Session: [Lesson Title or "Study Session"]
Date: YYYY-MM-DD
Lesson: [lesson-slug or "multiple" or "exploratory"]
Duration: [if mentioned]
## Summary
[2-3 sentence summary of what happened]
## What Worked
- [Bullet points from conversation]
## What Was Hard
- [Bullet points from conversation]
- [Include specific details that might inform future lessons]
## Curiosities
- [What they want to learn more about]
- [Questions that came up]
## Insights
[Any patterns you noticed, connections to previous sessions, or observations
that might be useful for lesson planning]
## Next Steps
- [Specific things to try next time]
- [Lessons that might address struggles]
A completed example:

# Session: Attention Mechanism
Date: 2026-01-17
Lesson: attention-mechanism
Duration: ~30 min
## Summary
First deep dive into the attention mechanism. Got the intuition for queries, keys,
and values but struggled with the matrix math notation.
## What Worked
- The analogy to database lookups clicked immediately
- Interactive visualization helped see what softmax does
- The "why" explanation was motivating
## What Was Hard
- Matrix multiplication notation (transposing K)
- Not clear why we divide by sqrt(d_k)
- Implementing from scratch in PyTorch
## Curiosities
- How does multi-head attention work?
- Why do we need position encoding?
- How is this different from RNN attention?
## Insights
Ready for a lesson on multi-head attention. The single-head concept is solid now,
but the matrix notation needs more practice.
## Next Steps
- Try implementing attention from scratch in PyTorch
- Watch 3Blue1Brown video on matrix multiplication
- Move to multi-head attention lesson
Before starting a review, check for recent sessions:
```bash
ls -la src/data/sessions/
```
Reference previous sessions in the conversation when relevant, e.g., "Last time you said the notation was tricky. How did it feel today?"
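One way to surface the most recent note before a review (hypothetical helpers, not part of the skill; they rely on the filename convention above):

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Session filenames start with YYYY-MM-DD, so lexicographic order
// is chronological order. Hypothetical helpers for illustration.
function pickLatest(files: string[]): string | undefined {
  return files.filter((f) => f.endsWith(".md")).sort().pop();
}

async function latestSession(): Promise<string | undefined> {
  const dir = path.join("src", "data", "sessions");
  return pickLatest(await fs.readdir(dir));
}
```

Sorting filenames lexicographically avoids parsing dates at all, which is why the date-first naming convention is worth keeping.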
After saving the session, consider updating `src/data/learner-state.ts` when the session reveals a meaningful shift, such as a new focus area, a resolved struggle, or a changed goal.
Don't update on every session — only when there's a meaningful shift.