Coach long-term learning roadmaps for computer science, machine learning, and LLM study. Use when Codex needs to diagnose a learner's background, choose between CS, ML, and LLM tracks, insert prerequisite bridge modules, recommend maintainable resources and projects, persist learner state across sessions, or revise an existing study plan for AI or CS learners.
Build staged, practical study roadmaps for learners targeting CS fundamentals, ML foundations, or LLM engineering. This skill is designed for long-term coaching: it stores learner state, keeps progress across sessions, adapts resources when the learner outgrows them, supports Chinese- or English-friendly resource selection, and prefers the smallest useful replan instead of rewriting everything every time.
Load only what you need:
- `state/learners/<learner-id>/profile.json` and `state/learners/<learner-id>/progress.json`
- `references/rules.md`
- `references/tracks/`
- `references/bridges/` only when the learner has clear gaps
- `references/resources.json` and `references/projects.json` only when selecting or refreshing materials
- `scripts/render_plan.py` when a deterministic draft, learner summary, or state update would help

Use learner state as the skill's memory layer:
- `state/learners/<learner-id>/profile.json`: stable planning inputs and preferences
- `state/learners/<learner-id>/progress.json`: current phase, assumed-ready phases, completed work, blockers, notes, weekly reviews, resource fit, coaching state, and last plan snapshot

When the learner returns:
If no learner identity exists yet:
Collect the minimum profile needed to choose a route:
If the prompt is underspecified, ask at most 3 high-leverage questions. If answers are still missing, assume:
- 12 weeks
- 8 hours/week
- beginner for the main track

Pick one primary track:
- `cs`: programming fluency, algorithms, systems, databases, backend, engineering habits
- `ml`: math-backed modeling, deep learning, experiment design, reproducibility
- `llm`: LLM apps, retrieval, evaluation, tool use, deployment, product iteration

For `cs`, prefer one primary specialization lane when the learner has a clear goal:
- `backend`
- `systems`
- `infra`
- `interview`

For `ml`, prefer one primary specialization lane when the learner has a clear goal:
- `classical-ml`
- `deep-learning`
- `ml-systems`
- `research`

For `llm`, prefer one primary specialization lane when the learner has a clear goal:
- `rag`
- `agent`
- `eval`
- `finetuning`

Infer the specialization from the learner's goal and preferred topics when possible. If the learner is explicit, store it directly.
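As a sketch, the routing step above could look like this in Python. The track and lane names come from the lists above; the keyword matching itself is an illustrative assumption, not the canonical heuristic from `references/rules.md`:

```python
import re

# Lane names per track, as listed above.
LANES = {
    "cs": ["backend", "systems", "infra", "interview"],
    "ml": ["classical-ml", "deep-learning", "ml-systems", "research"],
    "llm": ["rag", "agent", "eval", "finetuning"],
}

def infer_specialization(track, goal, preferred_topics=()):
    """Return the first lane mentioned in the goal or topics, else None."""
    text = " ".join([goal, *preferred_topics]).lower()
    tokens = set(re.findall(r"[a-z][a-z-]*", text))
    for lane in LANES.get(track, []):
        if lane in tokens:
            return lane
    return None  # unclear goal: ask the learner or defer the lane choice
```

A `None` result maps to the "ask at most 3 high-leverage questions" path rather than guessing a lane.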
Use support tracks or bridges when needed:
- `python`, `math`, or `systems` bridges if the learner is below the track floor
- `cs` before `llm` when coding, APIs, debugging, or system boundaries are weak
- `ml` support before advanced `llm` when the learner wants deeper model intuition, fine-tuning, or evaluation depth

Follow `references/rules.md` for routing, pacing, level-aware entry, and coaching heuristics.
Do not force every learner to start from phase 1.
The planner should:
- mark phases assumed-ready when the learner is already past them

Always return plans in phases. Each phase must include:
Scale breadth to weekly time:
- <= 5 hours/week: one core lane and one lightweight deliverable
- 6-10 hours/week: one main lane plus moderate reading
- 11-15 hours/week: one main lane, one implementation lane, one review lane
- 16+ hours/week: deeper specialization or a second project

Prefer 3 to 4 phases for 8 to 16 week plans. Use short bridge phases instead of stuffing prerequisites into every phase.
If the canonical track has more phases than the learner can realistically absorb in the current time horizon, show only the near-term phase horizon and keep later phases in saved state for future replans.
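The breadth and horizon rules above can be sketched as follows; the hour bands are as written, while the lane labels and the `max_phases` default are illustrative assumptions, not canonical skill fields:

```python
def lanes_for_hours(hours_per_week):
    """Map weekly time to breadth, per the bands above."""
    if hours_per_week <= 5:
        return ["core", "lightweight-deliverable"]
    if hours_per_week <= 10:
        return ["main", "moderate-reading"]
    if hours_per_week <= 15:
        return ["main", "implementation", "review"]
    return ["deep-specialization-or-second-project"]

def split_horizon(all_phases, max_phases=4):
    """Show near-term phases now; keep the rest in saved state for replans."""
    return all_phases[:max_phases], all_phases[max_phases:]

shown, deferred = split_horizon(["bridge", "p1", "p2", "p3", "p4"], max_phases=3)
```

The `deferred` list is what would live in the learner's saved state until the next replan.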
Default to this structure:
When continuing a saved learner:
Treat resource adaptation as a first-class workflow.
Run a resource refresh check when:
- the learner is intermediate or advanced

When refresh is needed:
- refresh `references/resources.json` before the next serious roadmap when possible

Do not silently keep beginner resources for stronger learners unless they fill a prerequisite gap.
Use weekly reviews as the default lightweight update path.
Each weekly review should capture:
- confidence: 1 to 5
- resource fit: `good`, `too_easy`, `too_hard`, or `stale`

When the learner sends a short progress update:
- record it in `progress.json`

Reject empty reviews. A weekly review should contain at least one substantive signal such as hours, wins, challenges, next focus, or confidence.
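A minimal sketch of that validation, assuming stored field names that mirror the weekly-review flags in the script examples (the stored shape itself is not a guaranteed schema):

```python
# Fields that count as substantive signals in a weekly review.
SUBSTANTIVE_FIELDS = ("study_hours_done", "win", "challenge", "focus_next", "confidence")

def is_substantive(review):
    """True when at least one substantive signal is present (0 hours still counts)."""
    return any(review.get(field) not in (None, "", []) for field in SUBSTANTIVE_FIELDS)
```

A review carrying only a label would be rejected; one with a single win or an hours count passes.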
Use coaching_state in learner progress to keep the skill coach-like instead of static.
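One possible shape for that state, with every key an illustrative assumption based on the fields described above rather than a schema the skill defines:

```python
import json

# Hypothetical progress.json contents with a coaching_state block.
progress = {
    "current_phase": "llm-retrieval-and-eval",
    "assumed_ready_phases": ["llm-foundations"],
    "completed_phases": ["llm-app-basics"],
    "blockers": [],
    "weekly_reviews": [
        {"label": "2026-W16", "confidence": 3, "resource_fit": "too_easy"}
    ],
    "coaching_state": {
        "consecutive_too_easy": 2,  # a repeat signal worth escalating on
        "last_intervention": "raise-difficulty",
    },
    "last_plan_snapshot": {"phases_shown": 3},
}

print(json.dumps(progress["coaching_state"], indent=2))
```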
Escalate gently when you see:
- `too_easy`, `too_hard`, or `stale`

Preferred interventions:
- `too_easy`: raise difficulty, move to the next phase, or swap in stronger materials
- `too_hard`: shrink scope, move to the previous non-assumed phase, or add a bridge

Use a full replan only when:
Treat these files as the source of truth for changing recommendations:
- `references/resources.json`
- `references/projects.json`

When the user asks to add, remove, or refresh recommendations:
- keep `id` values stable
- keep `status` and `last_reviewed` current

If the user explicitly asks for the latest resources, browse and verify before updating the JSON entries.
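A sketch of an entry refresh under those rules; the helper name and any fields beyond `id`, `status`, and `last_reviewed` are assumptions about the library's shape:

```python
from datetime import date

def refresh_entry(entry, new_status, today=None):
    """Copy the entry, update status and last_reviewed, never touch id."""
    updated = dict(entry)  # shallow copy; the stable id is preserved as-is
    updated["status"] = new_status
    updated["last_reviewed"] = (today or date.today()).isoformat()
    return updated

entry = {"id": "llm-eval-course-01", "status": "active", "last_reviewed": "2025-06-01"}
refreshed = refresh_entry(entry, "stale", today=date(2026, 4, 20))
```

Returning a copy keeps the original entry intact until the caller decides to write the library back to disk.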
If the learner is clearly working in Chinese and no language is specified, prefer zh resources when the library has strong matches.
Use scripts/render_plan.py when the user wants a quick draft, learner summary, or state update.
Create or refresh a learner:
```bash
python3 scripts/render_plan.py \
  --learner-id alice-llm \
  --track llm \
  --level intermediate \
  --llm-specialization rag \
  --programming-level intermediate \
  --math-level beginner \
  --systems-level intermediate \
  --goal "Build and ship reliable RAG products" \
  --target-role "AI engineer" \
  --preferred-topic eval \
  --preferred-topic rag \
  --preferred-format course \
  --hours-per-week 8 \
  --weeks 12 \
  --save-state
```
For a backend-oriented CS learner:
```bash
python3 scripts/render_plan.py \
  --learner-id bob-backend \
  --track cs \
  --level intermediate \
  --cs-specialization backend \
  --goal "Become a stronger backend engineer" \
  --preferred-topic api \
  --preferred-topic database \
  --hours-per-week 8 \
  --weeks 12
```
Record progress and regenerate the next step:
```bash
python3 scripts/render_plan.py \
  --learner-id alice-llm \
  --completed-phase llm-retrieval-and-eval \
  --active-project llm-rag-eval-platform \
  --note "Retrieval quality is stable; eval coverage is now the main bottleneck." \
  --save-state
```
Record a weekly review:
```bash
python3 scripts/render_plan.py \
  --learner-id alice-llm \
  --weekly-review \
  --review-label "2026-W16" \
  --study-hours-done 7 \
  --win "Built a first regression set" \
  --challenge "Retriever quality is unstable on long docs" \
  --focus-next "Improve chunking and expand the eval set to 25 examples" \
  --confidence 3 \
  --resource-fit too_easy \
  --resource-focus-tag eval \
  --refresh-resources \
  --save-state
```
Treat script output as a draft. Tighten it using the learner's exact goals and the latest review evidence.
Avoid these failure modes:
- forcing a generic `cs` or `ml` sub-route when the learner's goal is already specific
- ignoring an explicit preference for `rag`, `agent`, `eval`, or `finetuning`
- ignoring the learner's `zh` or `en` option