This skill generates interactive multiple-choice quizzes for each chapter of an intelligent textbook. Questions are aligned to specific concepts from the learning graph and distributed across Bloom's Taxonomy cognitive levels to assess student understanding effectively. The skill uses serial execution (one agent) for token efficiency. Use it after chapter content has been written and the learning graph exists.
Version: 0.4
This skill automates quiz creation for intelligent textbooks by analyzing chapter content to generate contextually relevant multiple-choice questions. Each quiz is aligned to specific concepts from the learning graph, distributed across Bloom's Taxonomy cognitive levels, and formatted using mkdocs-material question admonition format with upper-alpha (A, B, C, D) answer choices. The skill ensures quality distractors, balanced answer distribution, and comprehensive explanations for educational value.
Use this skill after:
Trigger this skill when:
!!! warning "NEVER Use Parallel Agents Unless the User Explicitly Requests It"
    Always use a single serial agent for quiz generation. This is a hard requirement, not a suggestion. Do not offer parallel execution as an option.
Each spawned agent incurs roughly 12,000 tokens of startup overhead (system prompt, tool schemas, project context). Parallel execution multiplies this overhead with zero quality benefit. Measured data from a 17-chapter quiz generation run:
| Approach | System Overhead | Total Tokens | Waste |
|---|---|---|---|
| Serial (1 agent) | ~12,000 | ~310,000 | — |
| Parallel (4 agents) | ~48,000 | ~358,000 | +48,000 (13%) |
Additional problems with parallel execution observed in production:
The skill supports two modes:
This phase runs once before quiz generation, reading shared context.
```shell
date "+%Y-%m-%d %H:%M:%S"
```
Log the start time for the session report.
Notify the user: "Quiz Generator Skill v0.4 running in serial mode."
Read and cache these files for all agents:
Course Description (docs/course-description.md)
Learning Graph (docs/learning-graph/learning-graph.csv or similar)
Glossary (docs/glossary.md)
Chapter List (scan docs/chapters/ directory)
Calculate content readiness score (1-100) for each target chapter:
Quality Checks:
Content Readiness Ranges:
User Dialog Triggers:
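One way to sketch the readiness calculation in Python (the weights and thresholds below are illustrative, not prescribed by this skill; tune them for your textbook):

```python
import re

def content_readiness_score(chapter_text: str) -> int:
    """Score a chapter's readiness for quiz generation on a 1-100 scale.

    Heuristic sketch: rewards substantial prose, section structure,
    and testable detail (lists/examples). Weights are hypothetical.
    """
    score = 0
    # Substantial prose: up to 40 points, maxing out around 2,000 words
    words = len(chapter_text.split())
    score += min(40, words // 50)
    # Section structure: up to 30 points at 6+ section headings
    headings = len(re.findall(r"^#{2,4} ", chapter_text, flags=re.M))
    score += min(30, headings * 5)
    # Bulleted detail suggests testable content: up to 30 points
    bullets = len(re.findall(r"^\s*[-*] ", chapter_text, flags=re.M))
    score += min(30, bullets * 2)
    return max(1, min(100, score))
```

A chapter scoring near the bottom of the range would trigger the user dialog rather than silent generation.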
Launch ONE agent that processes all chapters sequentially. The agent reads each chapter, generates 10 questions, and writes the quiz file before moving to the next chapter. This pays the ~12K system prompt overhead only once.
Agent Prompt Template:
You are generating quizzes for an intelligent textbook. Generate quizzes for
ALL of the following chapters, processing them one at a time.
COURSE CONTEXT:
- Course: [course name]
- Target audience: [audience]
- Reading level: [level]
BLOOM'S TAXONOMY TARGETS:
- Introductory chapters (1-3): 40% Remember, 40% Understand, 15% Apply, 5% Analyze
- Intermediate chapters (4-N): 25% Remember, 30% Understand, 30% Apply, 15% Analyze
- Advanced chapters: 15% Remember, 20% Understand, 25% Apply, 25% Analyze, 10% Evaluate, 5% Create
CHAPTERS TO PROCESS:
[List ALL chapter directories with full paths]
FOR EACH CHAPTER:
1. Read the chapter content from the chapter's index.md file
2. Identify the key concepts covered in that chapter
3. Generate exactly 10 questions following the format below
4. Ensure answer balance: A (2-3), B (2-3), C (2-3), D (2-3)
5. Write the quiz to docs/chapters/[chapter-dir]/quiz.md
QUIZ FORMAT - Each question MUST follow this exact format:
#### [N]. [Question text ending with ?]
<div class="upper-alpha" markdown>
1. [Option A text]
2. [Option B text]
3. [Option C text]
4. [Option D text]
</div>
??? question "Show Answer"
The correct answer is **[LETTER]**. [Explanation 50-100 words]
**Concept Tested:** [Concept Name]
---
QUIZ FILE STRUCTURE:
# Quiz: [Chapter Title]
Test your understanding of [topic] with these review questions.
---
[Questions 1-10 following the format above]
REPORT when done:
- Chapter name
- Number of questions
- Bloom's distribution (R:#, U:#, Ap:#, An:#)
- Answer distribution (A:#, B:#, C:#, D:#)
Based on chapter type (introductory, intermediate, advanced), set target Bloom's Taxonomy distribution:
Introductory Chapters (typically chapters 1-3):
Intermediate Chapters:
Advanced Chapters:
Determine chapter type by:
Target question count: 8-12 per chapter (default: 10)
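The percentage targets above can be turned into whole question counts per quiz. A minimal sketch (percentages are taken from the prompt template above; the rounding-drift correction is an implementation choice):

```python
# Target Bloom's distributions per chapter type, as percentages.
BLOOM_TARGETS = {
    "introductory": {"Remember": 40, "Understand": 40, "Apply": 15, "Analyze": 5},
    "intermediate": {"Remember": 25, "Understand": 30, "Apply": 30, "Analyze": 15},
    "advanced": {"Remember": 15, "Understand": 20, "Apply": 25,
                 "Analyze": 25, "Evaluate": 10, "Create": 5},
}

def question_counts(chapter_type: str, total: int = 10) -> dict:
    """Convert percentage targets into whole question counts summing to `total`."""
    targets = BLOOM_TARGETS[chapter_type]
    counts = {level: round(total * pct / 100) for level, pct in targets.items()}
    # Rounding can drift off `total`; absorb the drift in the largest bucket.
    drift = total - sum(counts.values())
    biggest = max(counts, key=counts.get)
    counts[biggest] += drift
    return counts
```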
Analyze chapter content and learning graph to prioritize concepts:
Priority 1 (Must Test):
Priority 2 (Should Test):
Priority 3 (May Test):
Aim for 80%+ coverage of Priority 1 concepts.
For each concept selected for testing, generate question at appropriate Bloom's level following target distribution.
IMPORTANT FORMATTING REQUIREMENT:
All questions MUST use the mkdocs-material question admonition format with upper-alpha list styling:
#### 1. What is the primary purpose of a learning graph?
<div class="upper-alpha" markdown>
1. To create visual decorations for textbooks
2. To map prerequisite relationships between concepts
3. To generate random quiz questions
4. To organize files in a directory structure
</div>
??? question "Show Answer"
The correct answer is **B**. A learning graph is a directed graph that maps prerequisite relationships between concepts, showing which concepts must be learned before others. This ensures proper scaffolding in educational content.
**Concept Tested:** Learning Graph
**See:** [Learning Graph Concept](../concepts/learning-graph.md)
Formatting Rules:

- Wrap the four answer options in the `<div class="upper-alpha" markdown>` wrapper
- Place the answer and explanation inside a `??? question "Show Answer"` admonition

Question Writing Guidelines:
Remember Level:
Understand Level:
Apply Level:
Analyze Level:
Evaluate Level:
Create Level:
For each incorrect answer option (distractors), ensure:
Plausibility:
Educational Value:
Common Distractor Patterns:
Avoid:
For each question, write an explanation that:
Confirms Correct Answer:
Teaches (Optional but Recommended):
Example Explanation:
The correct answer is **B**. A learning graph is a directed graph that maps
prerequisite relationships between concepts. Option A is incorrect because
learning graphs serve a structural purpose, not decorative. Option C is
incorrect because quiz generation is not the primary purpose. Option D
confuses learning graphs with file systems.
**Concept Tested:** Learning Graph
**See:** [Learning Graph Concept](../concepts/learning-graph.md#definition)
Check that correct answers are distributed evenly across A, B, C, D:
Target Distribution:
Avoid Patterns:
Randomization Strategy:
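One randomization strategy can be sketched as follows (the 2-3 occurrences per letter target comes from the agent prompt above; the generation approach itself is an assumption):

```python
import random
from collections import Counter

def balance_answers(correct_letters: list[str]) -> bool:
    """Check the 2-3 per letter target for a 10-question quiz."""
    counts = Counter(correct_letters)
    return all(2 <= counts.get(letter, 0) <= 3 for letter in "ABCD")

def random_answer_key(n: int = 10) -> list[str]:
    """Draw a balanced answer key: every letter appears 2 or 3 times."""
    # Two of each letter, plus two extras drawn without repeats, shuffled.
    key = list("ABCD") * 2 + random.sample("ABCD", 2)
    random.shuffle(key)
    return key[:n]
```

Generating the key up front and assigning each question's correct option to the next letter avoids detectable patterns such as long runs of the same letter.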
Generate quiz file with proper structure:
Separate Quiz File (docs/chapters/[chapter-name]/quiz.md):
# Quiz: [Chapter Name]
Test your understanding of [chapter topic] with these questions.
---
#### 1. [Question text]?
<div class="upper-alpha" markdown>
1. [Option 1]
2. [Option 2]
3. [Option 3]
4. [Option 4]
</div>
??? question "Show Answer"
The correct answer is **[LETTER]**. [Explanation]
**Concept Tested:** [Concept Name]
---
#### 2. [Question text]?
[Continue for all questions...]
Formatting Requirements:
After the serial agent completes, collect its results:
Create docs/learning-graph/quizzes/[chapter-name]-quiz-metadata.json for each chapter:
{
"chapter": "Chapter Name",
"chapter_file": "docs/chapters/chapter-name/index.md",
"quiz_file": "docs/chapters/chapter-name/quiz.md",
"generated_date": "YYYY-MM-DD",
"total_questions": 10,
"content_readiness_score": 85,
"overall_quality_score": 78,
"questions": [
{
"id": "ch1-q001",
"number": 1,
"question_text": "What is the primary purpose of a learning graph?",
"correct_answer": "B",
"bloom_level": "Understand",
"difficulty": "medium",
"concept_tested": "Learning Graph",
"source_link": "../concepts/learning-graph.md",
"distractor_quality": 0.85,
"explanation_word_count": 67
}
],
"answer_distribution": {
"A": 2,
"B": 3,
"C": 3,
"D": 2
},
"bloom_distribution": {
"Remember": 2,
"Understand": 4,
"Apply": 3,
"Analyze": 1,
"Evaluate": 0,
"Create": 0
},
"concept_coverage": {
"total_concepts": 12,
"tested_concepts": 10,
"coverage_percentage": 83
}
}
Create or update docs/learning-graph/quiz-bank.json with all questions:
{
"textbook_title": "Building Intelligent Textbooks",
"generated_date": "YYYY-MM-DD",
"total_chapters": 20,
"total_questions": 187,
"questions": [
{
"id": "ch1-q001",
"chapter": "Introduction to Learning Graphs",
"question_text": "What is the primary purpose of a learning graph?",
"options": {
"A": "To create visual decorations for textbooks",
"B": "To map prerequisite relationships between concepts",
"C": "To generate random quiz questions",
"D": "To organize files in a directory structure"
},
"correct_answer": "B",
"explanation": "A learning graph is a directed graph...",
"bloom_level": "Understand",
"difficulty": "medium",
"concept": "Learning Graph",
"chapter_file": "docs/concepts/learning-graph.md",
"source_section": "#definition",
"tags": ["graph", "prerequisites", "scaffolding"]
}
]
}
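The per-chapter metadata files can be aggregated into the quiz bank mechanically. A sketch, assuming each file follows the metadata structure shown above and the `*-quiz-metadata.json` naming convention:

```python
import json
from pathlib import Path

def build_quiz_bank(metadata_dir: str, out_path: str, title: str) -> dict:
    """Merge per-chapter quiz metadata files into a single quiz-bank.json."""
    questions = []
    chapters = 0
    for path in sorted(Path(metadata_dir).glob("*-quiz-metadata.json")):
        meta = json.loads(path.read_text())
        chapters += 1
        for q in meta.get("questions", []):
            # Carry the chapter name onto each question record.
            q.setdefault("chapter", meta.get("chapter", ""))
            questions.append(q)
    bank = {
        "textbook_title": title,
        "total_chapters": chapters,
        "total_questions": len(questions),
        "questions": questions,
    }
    Path(out_path).write_text(json.dumps(bank, indent=2))
    return bank
```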
Use Cases for Quiz Bank:
Create docs/learning-graph/quiz-generation-report.md:
# Quiz Generation Quality Report
Generated: YYYY-MM-DD
Execution Mode: Serial (1 agent)
Wall-clock Time: X minutes Y seconds
## Overall Statistics
- **Total Chapters:** 20
- **Total Questions:** 187
- **Avg Questions per Chapter:** 9.4
- **Overall Quality Score:** 76/100
## Per-Chapter Summary
| Chapter | Questions | Quality Score | Bloom's Score | Coverage |
|---------|-----------|---------------|---------------|----------|
| Ch 1: Introduction | 10 | 82/100 | 24/25 | 83% |
| Ch 2: Learning Graphs | 12 | 78/100 | 22/25 | 90% |
| ... | ... | ... | ... | ... |
## Bloom's Taxonomy Distribution (Overall)
| Level | Actual | Target | Deviation |
|-------|--------|--------|-----------|
| Remember | 22% | 25% | -3% ✓ |
| Understand | 28% | 30% | -2% ✓ |
| Apply | 27% | 25% | +2% ✓ |
| Analyze | 18% | 15% | +3% ✓ |
| Evaluate | 4% | 4% | 0% ✓ |
| Create | 1% | 1% | 0% ✓ |
**Bloom's Distribution Score:** 24/25 (excellent)
## Answer Balance (Overall)
- A: 24% (45/187)
- B: 26% (49/187)
- C: 25% (47/187)
- D: 25% (46/187)
**Answer Balance Score:** 15/15 (perfect distribution)
## Recommendations
[Include recommendations based on aggregated data]
Perform comprehensive validation across all generated quizzes:
1. No Ambiguity:
2. Distractor Quality:
3. Grammar & Clarity:
4. Answer Balance:
5. Bloom's Distribution:
6. Concept Coverage:
7. No Duplicates:
8. Explanation Quality:
9. Link Validation:
10. Bias Check:
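A few of the structural checks above can be automated. A minimal sketch that validates one quiz.md file (it covers question count, answer-block presence, and answer balance only, not the judgment-based criteria):

```python
import re

def validate_quiz(quiz_text: str) -> list[str]:
    """Return a list of structural problems found in a quiz.md file."""
    problems = []
    # Questions use the '#### N. ...' heading convention.
    questions = re.findall(r"^#### \d+\.", quiz_text, flags=re.M)
    if not 8 <= len(questions) <= 12:
        problems.append(f"expected 8-12 questions, found {len(questions)}")
    # Every question needs a 'The correct answer is **X**' line.
    answers = re.findall(r"The correct answer is \*\*([A-D])\*\*", quiz_text)
    if len(answers) != len(questions):
        problems.append("every question needs a 'Show Answer' block")
    # Answer balance: no letter should dominate.
    for letter in "ABCD":
        if answers.count(letter) > 4:
            problems.append(f"answer {letter} appears {answers.count(letter)} times")
    return problems
```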
Success Criteria:
Update mkdocs.yml to include quizzes in each chapter directory:
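A nav fragment along these lines would surface each quiz beside its chapter (chapter names and paths below are illustrative; mkdocs nav paths are relative to the docs directory):

```yaml
nav:
  - Chapters:
      - Introduction:
          - Chapter: chapters/introduction/index.md
          - Quiz: chapters/introduction/quiz.md
```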