Retrieve relevant chunks and generate grounded quizzes with answers and evidence.
When invoked, write all outputs under outputs/ using a slug derived from the topic
(<sanitized_prefix>_<8hex>). Use ls outputs/quizzes/ to find the
exact filenames, e.g.:
- outputs/quizzes/gpu_monitoring_3f8a1c2e.{json,md,csv}
- outputs/answer_keys/gpu_monitoring_3f8a1c2e_key.md
- outputs/rationales/gpu_monitoring_3f8a1c2e_rationales.md

Run via:
bash nanoclaw/tasks/generate_quiz.sh "GPU monitoring" 10 medium
Or step by step:
uv run python3 -m src.quiz.generate --topic "GPU monitoring" --num 10 --difficulty medium
uv run python3 -m src.quiz.validate --topic "GPU monitoring"
uv run python3 -m src.quiz.export --topic "GPU monitoring" --formats md json csv
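The slug scheme and the three-step pipeline above can be sketched in Python. Note that make_slug is a hypothetical reconstruction of the <sanitized_prefix>_<8hex> naming (the real hash input and digest may differ), while the argv lists simply mirror the documented commands:

```python
import hashlib
import re

def make_slug(topic: str) -> str:
    # Hypothetical reconstruction of the <sanitized_prefix>_<8hex> scheme:
    # lowercase the topic, collapse non-alphanumerics to underscores, and
    # append 8 hex digits of a hash of the topic. The actual implementation
    # may hash different input or use a different digest.
    prefix = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    digest = hashlib.sha1(topic.encode("utf-8")).hexdigest()[:8]
    return f"{prefix}_{digest}"

def quiz_pipeline_commands(topic: str, num: int, difficulty: str) -> list:
    # The three documented steps (generate, validate, export) as argv lists,
    # suitable for subprocess.run(cmd, check=True).
    base = ["uv", "run", "python3", "-m"]
    return [
        base + ["src.quiz.generate", "--topic", topic,
                "--num", str(num), "--difficulty", difficulty],
        base + ["src.quiz.validate", "--topic", topic],
        base + ["src.quiz.export", "--topic", topic,
                "--formats", "md", "json", "csv"],
    ]

slug = make_slug("GPU monitoring")
print(slug.rsplit("_", 1)[0])  # → gpu_monitoring
```

For "GPU monitoring" this yields a gpu_monitoring_<8hex> slug, matching the example filenames above.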
LLM verbosity controls (see nanoclaw/config/settings.yaml):
- think: false (disables Qwen3 chain-of-thought)
- temperature: 0.2 (low randomness for factual output)
- num_predict: 4096 (allows complete JSON quiz output)
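As one possible layout, the three knobs above might appear in nanoclaw/config/settings.yaml like this (the llm: nesting is an assumption; the values come from the list above, so check the actual file):

```yaml
# Hypothetical layout of the LLM settings in nanoclaw/config/settings.yaml;
# key nesting is a guess, values are from the documented controls.
llm:
  think: false        # disables Qwen3 chain-of-thought
  temperature: 0.2    # low randomness for factual output
  num_predict: 4096   # room for complete JSON quiz output
```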