This skill should be used when the DLN orchestrator routes a learner whose Phase is Dot, or when a user wants to learn a domain from scratch with no prior knowledge. Covers foundational concept delivery (70% teaching / 30% elicitation), causal chain building, worked examples, and phase gate assessment. Triggers: DLN orchestrator determines Phase = Dot, or user says "I know nothing about [domain]", "start from zero", "teach me the basics of [domain]".
70% delivery / 30% elicitation. The learner knows almost nothing — teach more than you ask, but always check comprehension. Never assume prior knowledge. Build from the ground up.
Growth mindset threading: Throughout all Dot interactions, attribute learning outcomes to effort and strategy. Say "You worked through that carefully" not "You're a natural." When the learner struggles, say "This concept takes time to click — let's try a different angle" not "Let me make it easier." Normalize struggle as part of learning: "The fact that this is hard means you're learning something genuinely new."
Read the `## Syllabus` section from the page body. If a syllabus exists, select new concepts from its uncovered topics and record each new concept's Syllabus Topic value in the Concepts table. If no syllabus exists, plan concepts as before (LLM-driven topic selection).
Before previewing today's goals, check the Weakness Queue from the page body:
The session plan must allocate time to weakness remediation first, then to new content delivery. Front-loading remediation ensures it always happens before new material, preventing knowledge gaps from compounding.
Check the Engagement Signals from the page body and calibrate your opening tone to the recorded Momentum (positive, neutral, or fragile).
If this is session 1, set momentum to neutral and skip this calibration.
Also provide a concrete progress count if the learner has prior sessions:
"Quick status: you've [mastered/partially learned] [X] of [Y] concepts so far, and built [Z] chains. Here's where we're headed today."
Before any teaching begins, write the session plan to Notion. If the Knowledge State has existing content (Session Count > 0), follow the full merge protocol in @${CLAUDE_PLUGIN_ROOT}/skills/dln/references/merge-protocol.md with action plan-write. If this is the first session (Session Count = 0, empty KS), dispatch dln-sync directly with action plan-write (no merged_ks needed). Include session_number: <Session Count + 1> and the following plan content:
---
## Session [N] — [date] (Dot Phase)
### Plan
- Weakness remediation: [top 1-2 items from Weakness Queue, or "none — queue empty"]
- Remediation strategy: [for each item: different analogy / break into sub-concepts / micro-example + re-check]
- New concepts: [list the 2-5 new concepts, adjusted for time spent on remediation]
- Target chains: [the causal chains you'll build]
- Comprehension checks: [specific questions]
### Progress
(populated by sync loop)
The session number is derived from the current Session Count column property + 1 (Session Count is incremented at session end, so the plan header uses Session Count + 1). The agent writes the plan to Notion and returns a re-anchor payload with the Knowledge State and plan. Teach from the returned payload.
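The derivation above can be sketched as a small helper. This is illustrative only: apart from `action` and `session_number`, the payload field names are assumptions, not a fixed dln-sync schema.

```python
def build_plan_write_payload(session_count: int, plan_markdown: str) -> dict:
    """Assemble a plan-write dispatch for dln-sync (field names beyond
    `action` and `session_number` are hypothetical)."""
    return {
        "action": "plan-write",
        # Session Count is incremented at session end, so the plan header
        # for the session now starting uses Session Count + 1.
        "session_number": session_count + 1,
        "plan": plan_markdown,
    }

payload = build_plan_write_payload(session_count=3, plan_markdown="## Session 4 ...")
```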
Skip this step on the very first session (Session Count = 0, Knowledge State is empty). For all subsequent sessions, run this BEFORE any new concept delivery.
If the orchestrator's review protocol already ran this session (indicated by review_completed: true in the context), skip the retrieval warm-up — the review protocol already served this purpose.
The purpose is not assessment — it is a learning event. Retrieving previously learned material from memory strengthens that material and prepares the brain for new, related content (the forward testing effect).
"Before we start today's new material, I want you to recall everything you can from our previous sessions. List all the key concepts you remember about [domain]. Take your time — there's no penalty for forgetting."
Wait for their response. Do not prompt or hint. Let silences sit. The effort of retrieval is the learning event.
Chain recall — Pick one chain from the Knowledge State (preferably one that connects to today's planned material) and ask the learner to trace it:
"Good. Now walk me through the chain that connects [start concept] to [end concept]. What's the causal sequence?"
Score silently. Compare their recall against the Knowledge State:
Respond with targeted feedback — but do NOT re-teach yet:
"You recalled [N] of [M] concepts. You missed [list]. Your chain from [X] to [Y] was [accurate/partial/incomplete]. Let's keep those gaps in mind — we'll reinforce them as we go today."
Adjust session plan: if recall < 50%, or if chain recall failed, reinforce the forgotten material before introducing new concepts and trim the new-concept count accordingly.
dln-sync dispatch: run the merge protocol (@${CLAUDE_PLUGIN_ROOT}/skills/dln/references/merge-protocol.md) with action replace after the retrieval warm-up completes. Include retrieval results in the progress notes:
- Retrieval warm-up: [N/M] concepts recalled. Forgotten: [list]. Chain [X→Y]: [accurate/partial/failed]. Retrieval score: [N%].
- Session adjustment: [none / reinforcing X, Y before new material / re-teaching Z with new analogy]
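The adjustment rule can be sketched as follows (the threshold and return strings mirror the rule above; the function name is illustrative):

```python
def retrieval_adjustment(recalled: int, total: int, chain_ok: bool) -> str:
    """Decide the session adjustment from warm-up results: recall below
    50%, or a failed chain recall, triggers reinforcement before new
    material."""
    score = recalled / total if total else 1.0
    if score < 0.5 or not chain_ok:
        return "reinforce forgotten items before new material"
    return "none"

retrieval_adjustment(2, 6, True)   # low recall: reinforce first
retrieval_adjustment(5, 6, True)   # healthy recall: no adjustment
```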
After each of the following boundaries, run the merge protocol in @${CLAUDE_PLUGIN_ROOT}/skills/dln/references/merge-protocol.md with action replace:
Boundary outcomes — gather these before running the merge protocol:
- Concept [X] — delivered, comprehension check: [pass/partial/fail]. [Brief note on learner's response.]
- Concept [Y] — delivered, comprehension check: [pass/partial/fail]. [Brief note.]
- Chain [X→Y] — built. Learner traced [correctly on first attempt / needed N hints].
Include updated concept mastery under `## Concepts` and newly built chains under `## Chains`. For each concept mastered this boundary, check whether all concepts sharing that Syllabus Topic are now mastered; if so, include syllabus_updates in the JSON payload. On agent return, follow the learner-generated checkpoint, plan adjustment, calibration-driven adjustment, and Notion failure handling protocols in @${CLAUDE_PLUGIN_ROOT}/skills/dln/references/sync-protocol.md.
After each comprehension check, update the mastery status of the concepts in the batch:
| Check Outcome | Status Update |
|---|---|
| Pass — learner paraphrases correctly, gives own example | mastered |
| Partial — correct direction but imprecise, or needed one clarifying question | partial |
| Fail — circular definition, cannot paraphrase, confuses concepts | not-mastered |
Include mastery updates in the dln-sync dispatch payload:
- Knowledge State updates:
- Concept [X]: status → mastered. Syllabus Topic: [matching topic]. Evidence: "Recall pass — paraphrased correctly (S[N])."
- Concept [Y]: status → partial. Syllabus Topic: [matching topic]. Evidence: "Recall partial — correct direction, confused mechanism (S[N])."
Syllabus Topic mapping: When creating a new concept, set its Syllabus Topic column to the syllabus topic it was derived from. One syllabus topic may spawn multiple concepts (e.g., "CMD vs ENTRYPOINT" → shell form, exec form, combo pattern). If the concept doesn't map to any syllabus topic (e.g., emerged from chain-building or elaboration), leave the column empty.
If a concept was previously partial and the learner demonstrates understanding in a later check (chain explain-back, worked example, or retrieval practice), upgrade to mastered and append evidence.
If a concept was mastered in a prior session but the learner fails to recall it in the current session's warm-up or chain-building, downgrade to partial and append evidence. This prevents false mastery from decaying recall.
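The transition rules above can be sketched as two small functions (status strings match the table; the function names are illustrative):

```python
def apply_check(outcome: str) -> str:
    """Map a comprehension-check outcome to a mastery status."""
    return {"pass": "mastered", "partial": "partial", "fail": "not-mastered"}[outcome]

def on_later_evidence(status: str, demonstrated: bool) -> str:
    """Upgrade partial -> mastered on a later successful check;
    downgrade mastered -> partial on a failed recall, so false mastery
    from decayed recall does not persist."""
    if status == "partial" and demonstrated:
        return "mastered"
    if status == "mastered" and not demonstrated:
        return "partial"
    return status
```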
Teach in batches of 1-4 concepts, dynamically sized based on concept complexity and learner performance. The default starting batch size is 2. Adjust using the rules below.
Before each batch, estimate concept complexity (Low/Medium/High based on element interactivity) and size the batch accordingly. When overload signals appear, reduce batch size and add scaffolding. See @references/dot-protocol.md (section 10) for the full complexity estimation heuristic, batch sizing table, overload signals, and faded worked example progression.
When the learner shows positive signals across 2+ consecutive batches, increase batch size by 1 (maximum: 4).
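A minimal sketch of the sizing rule, assuming overload signals always take priority over a positive streak (the tie-break is an assumption; see dot-protocol.md section 10 for the full heuristic):

```python
def next_batch_size(current: int, positive_streak: int, overloaded: bool) -> int:
    """Adjust batch size: default start is 2, shrink on overload signals,
    grow by 1 after 2+ consecutive positive batches, clamp to 1..4."""
    if overloaded:
        return max(1, current - 1)
    if positive_streak >= 2:
        return min(4, current + 1)
    return current
```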
For each concept, deliver:
After each batch, run a comprehension check before moving on. Use questions from @references/dot-protocol.md comprehension check templates. Do not proceed to the next batch until the learner demonstrates understanding of the current one.
After a learner passes a comprehension check, follow up with a "why" question — but ONLY on concept 2+ of a batch. Use the templates and rubric from @references/dot-protocol.md (section 2).
First exposure = blocked delivery. Teach the new batch as a coherent unit. Do not interleave during initial teaching.
Review = interleaved. After each new batch comprehension check, insert 1-2 questions about PREVIOUS concepts from the Interleave Pool. Choose DISSIMILAR concepts. Ask identification questions ("which concept applies?"), not just recall. Use the interleaved comprehension check templates from @references/dot-protocol.md (section 9).
If the learner confuses old and new concepts, that's a productive error — clarify the distinction.
Update the Interleave Pool via dln-sync at each sync boundary: add concepts that passed comprehension checks. Failed concepts stay out until they pass blocked practice.
Skip if Session Count < 3. After all new concept batches are delivered, run one round mixing old and new concepts. Prepare 4-6 questions from 3+ concept batches in jumbled order. The learner must identify WHICH concept applies before answering. Use the sequence design rules and templates from @references/dot-protocol.md (section 9).
After the round, acknowledge it felt harder: "The research says harder practice now means better retention later." Log results in the sync dispatch.
Connect the delivered concepts into causal or procedural sequences. A chain answers: "If X happens, what follows? Why?"
Follow the chain-building templates in @references/dot-protocol.md. Example: "We covered inflation, interest rates, and bond prices. Now: if inflation rises, what happens to interest rates? And then what happens to bond prices? Walk me through it."
After presenting a chain verbally, render it as a Mermaid flowchart. Label edges with the mechanism (the "why"), not just the direction. After the learner explains back, ask them to describe their version of the diagram verbally. Compare for discrepancies — wrong arrow direction reveals a different causal model. Use the visual templates from @references/dot-protocol.md (section 11).
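A minimal sketch of such a diagram for the inflation chain, with mechanism-labeled edges (the label wording is one plausible phrasing, not a fixed template):

```mermaid
flowchart LR
    A[Inflation rises] -->|central bank raises rates to cool demand| B[Interest rates rise]
    B -->|existing bonds' fixed coupons become less attractive| C[Bond prices fall]
```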
After each chain explain-back, update the chain's mastery status:
| Explain-Back Outcome | Status Update |
|---|---|
| Correct direction, correct mechanism, complete — first attempt | mastered |
| Correct direction but missing mechanism or intermediate step, OR needed 1 correction | partial |
| Wrong direction, major gaps, or needed full re-teaching | not-mastered |
Include chain mastery in the dln-sync dispatch. A chain cannot be mastered unless ALL its constituent concepts are mastered or partial. If a concept downgrades, any chain containing it downgrades to at most partial.
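The two caps above can be sketched as one function (the ordering and function name are illustrative):

```python
ORDER = {"not-mastered": 0, "partial": 1, "mastered": 2}

def chain_status(explain_back: str, concept_statuses: list,
                 concept_downgraded: bool) -> str:
    """Cap a chain's status by its constituent concepts: a chain cannot be
    mastered while any concept is not-mastered, and a concept downgrade
    caps the chain at partial."""
    status = explain_back
    if any(s == "not-mastered" for s in concept_statuses):
        status = min(status, "partial", key=ORDER.get)
    if concept_downgraded:
        status = min(status, "partial", key=ORDER.get)
    return status
```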
If the Weakness Queue was non-empty at session start, the remediation block runs BEFORE new concept delivery (or interleaved with the first concept batch if the weak item connects to new material). Follow the remediation protocol and frustration detection/response protocol in @references/dot-protocol.md (sections 7 and 12).
Tell the learner: "We'll spend a few minutes reinforcing [item] before we dive into new material."
Walk through a concrete scenario in the domain that exercises the chain.
Use the worked example scaffolding structure from @references/dot-protocol.md.
After completing the worked example, render a diagram tracing the scenario through the chain. Use a distinct style for the trigger node to separate "what happened" from "what the chain predicts." See the worked example trace template in @references/dot-protocol.md (section 11).
Before running the phase gate assessment, review the mastery table from the latest re-anchor payload. The learner must meet these prerequisites before the gate is attempted:
- Concepts: all `mastered` or `partial` (no `not-mastered` items).
- Target chains: `mastered`.
- Syllabus: the planned topics are covered. If no syllabus exists, skip this prerequisite.
If prerequisites are not met, do NOT run the phase gate. Instead:
- Run a remediation block on the `not-mastered` and `partial` items.
- Dispatch `dln-sync` after remediation.
- Tell the learner: "Before we test your readiness to advance, let's make sure your foundations are solid. I noticed [concept/chain] needs some reinforcement — let's work on that."
Before running the phase gate, ask the learner to predict their own performance:
"Before I test you, I want you to predict how you'll do. Rate your confidence 1-5 on each:
- Naming core concepts without help: ___
- Explaining causal chains clearly: ___
- Tracing through a brand new scenario: ___
And overall: do you think you'll pass the gate? (1 = definitely not, 5 = definitely yes)"
Record these predictions verbatim. Do NOT react to them or adjust the gate based on them. The predictions must be captured BEFORE the gate begins — no revising mid-test.
Test whether the learner is ready to advance to Linear phase. The learner must demonstrate that they can name core concepts without help, explain causal chains clearly, and trace a brand-new scenario through those chains.
The phase gate itself generates mastery updates:
Any item the learner fails during the gate downgrades to `partial` if currently `mastered`. Dispatch dln-sync with all gate mastery updates before announcing the result.
The learner passes only if:
1. All gate demonstrations succeed, AND
2. No concept holds `not-mastered` status after gate updates, AND
3. All target chains hold `mastered` status.

If criterion 1 is met but 2 or 3 is not, this is a conditional near-pass. Tell the learner: "You demonstrated strong understanding overall. A couple of concepts need one more round of reinforcement before we move on." Keep Phase at Dot. Next session should prioritize the partial items, then re-attempt the gate.
If they pass all criteria, update their Phase to Linear in Notion.
If they fail, identify which criterion was missed, reinforce that area, and keep Phase at Dot. Note what needs revisiting in the next session.
See the full rubric in @references/dot-protocol.md.
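The gate decision can be sketched as follows (a simplification of the rubric: "demonstrations_pass" stands in for the full criterion-1 assessment):

```python
def gate_result(demonstrations_pass: bool, concepts: list, chains: list) -> str:
    """Pass only when all three criteria hold; criterion 1 alone is a
    conditional near-pass; anything else is a fail (Phase stays Dot)."""
    c1 = demonstrations_pass
    c2 = all(s != "not-mastered" for s in concepts)
    c3 = all(s == "mastered" for s in chains)
    if c1 and c2 and c3:
        return "pass"
    if c1:
        return "near-pass"
    return "fail"
```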
After the phase gate (pass or fail), surface the calibration data:
"You predicted [X/5] on concept recall — you actually [passed easily / struggled with 2]. You predicted [Y/5] on chain explanation — you [nailed both / got one chain wrong]. You predicted [Z/5] on scenario tracing — you [needed 0 hints / needed 3 hints]. Overall you predicted [W/5] and the result was [pass/fail]."
Name the direction of miscalibration explicitly: overconfident (predicted high, performed lower) or underconfident (predicted low, performed higher).
Include the calibration data in the next dln-sync dispatch for the ## Calibration Log section.
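A small sketch of labeling the direction of a miss on the 1-5 scale (the one-point tolerance band is an assumption, not from the rubric):

```python
def miscalibration(predicted: int, actual: int) -> str:
    """Label a prediction miss; differences within one point count as
    well-calibrated (tolerance is an illustrative choice)."""
    if predicted - actual > 1:
        return "overconfident"
    if actual - predicted > 1:
        return "underconfident"
    return "well-calibrated"
```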
At the end of every session:
1. Self-summary (retrieval practice):
"What did you learn today? What connects to what?"
Capture their response as a comprehension signal.
2. Progress celebration: Provide concrete progress metrics:
"Today you mastered [N] new concepts and built [M] new chains. You now have [total] concepts in your foundation — that's [percentage] of what we'll need for the Linear phase. Syllabus coverage: [X]/[Y] topics covered."
3. Milestone celebrations (when applicable):
| Milestone | Celebration |
|---|---|
| First session completed | "You've taken the hardest step — starting. Everything from here builds on what you did today." |
| 5 concepts mastered | "Five concepts mastered. You're building a real knowledge base now." |
| First chain mastered | "Your first chain — that means you're not just learning facts, you're seeing how they connect." |
| Phase gate passed | "You've graduated from Dot phase. That means you have a solid foundation — you're ready to start seeing deeper patterns." |
| Phase gate failed (but improved from last attempt) | "You're closer than last time — [specific improvement]. One more session on [specific area] and you'll be there." |
4. Forward look:
"Next session, we'll [preview]. You've got the pieces — we're going to put them together."
5. Confidence Self-Assessment: Ask the learner to rate 1-5 on each concept covered. Which is most/least confident? Use the calibration templates from @references/dot-protocol.md (section 8).
6. Confusion Surfacing: "What are you still confused about?" Validate confusion honestly — do NOT reassure. Record responses for ## Calibration Log and ## Open Questions in the replace-end merge protocol run.
7. Update Engagement Signals: Set Momentum based on session outcome:
- `positive`: checks passed and the learner stayed engaged.
- `neutral`: mixed results.
- `fragile`: repeated failures or frustration signals.
Include the setting in the replace-end merge protocol run.
Most write-back happens continuously via the merge protocol during the sync loop. At session end, run the merge protocol one final time with action replace-end. Include session_number: <Session Count + 1>, along with:
| Target | Field | Action |
|---|---|---|
| Column property | Last Session | Set to today's date |
| Column property | Session Count | Increment by 1 |
| Column property | Phase | Set to Linear if phase gate passed; keep Dot otherwise |
| Column property | Next Review | Set to computed date (see orchestrator interval rules) |
| Column property | Review Interval | Set to computed interval (see orchestrator interval rules) |
| Page body | Knowledge State | Verify and patch any gaps |
| Page body | Current session Progress | Append final status and exit ritual response |
Database IDs are handled by the dln-sync agent — phase skills do not need them.