Generate a formalized rubric for scoring, grading, or evaluation in the current domain. Use when a judging task needs locked dimensions, pass-partial-fail boundaries, evidence requirements, tie-breakers, or confidence guidance before candidate comparison.
Use this skill to build a formal judging rubric before scoring responses, candidates, trajectories, benchmark outputs, or other evaluation artifacts.
Read references/rubric-techniques.md when you need current judging guidance on locked rubrics, task-adaptive rubric design, verifier-backed evidence, robustness checks, calibration, and chain-of-thought skepticism.
When the judging inputs are already available as a structured contract, use scripts/render_rubric.py to render a deterministic first draft of the rubric package instead of drafting it manually.
Use scripts/render_rubric.py --validate-only when you need to verify that a structured contract is complete and consistent before rendering or handing it to another judging workflow.
Accepted contract fields:
- domain
- task
- evidence_mode
- decision
- dimensions: array of objects with name, why_it_matters, pass_boundary, partial_boundary, fail_boundary, and allowed_evidence
- aggregation_rules: object with weighting_or_priority, non_negotiable_failures, and tie_breakers
- robustness_checks: object with order_bias_check, evidence_quality_check, benchmark_overfitting_check, and confidence_guidance
- blockers: object with missing_inputs, weak_evidence_areas, and clarifications_needed

Example:
python scripts/render_rubric.py --input-file contract.json --output-file rubric.md
Validation example:
python scripts/render_rubric.py --input-file contract.json --validate-only
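A minimal contract.json covering every accepted field can be sketched in Python. Only the field names come from the contract description above; the domain, dimension, and rule values are illustrative placeholders, not a canonical example:

```python
import json

# Minimal contract touching every accepted field.
# All string values below are illustrative placeholders.
contract = {
    "domain": "code review",
    "task": "score candidate pull-request descriptions",
    "evidence_mode": "verifier-backed",
    "decision": "pass/partial/fail per dimension",
    "dimensions": [
        {
            "name": "accuracy",
            "why_it_matters": "claims must match the diff",
            "pass_boundary": "all claims verified against the diff",
            "partial_boundary": "minor claims unverified, none contradicted",
            "fail_boundary": "any claim contradicted by the diff",
            "allowed_evidence": ["diff hunks", "CI logs"],
        }
    ],
    "aggregation_rules": {
        "weighting_or_priority": "accuracy outweighs style",
        "non_negotiable_failures": ["fabricated evidence"],
        "tie_breakers": ["prefer the candidate with stronger cited evidence"],
    },
    "robustness_checks": {
        "order_bias_check": "re-score with candidate order swapped",
        "evidence_quality_check": "discount claims without cited evidence",
        "benchmark_overfitting_check": "probe with paraphrased inputs",
        "confidence_guidance": "report low confidence when evidence is thin",
    },
    "blockers": {
        "missing_inputs": [],
        "weak_evidence_areas": [],
        "clarifications_needed": [],
    },
}

# Write the contract where the render script expects it.
with open("contract.json", "w") as f:
    json.dump(contract, f, indent=2)
```

The resulting file can then be checked with --validate-only before rendering, as in the validation example above.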
assets/rubric-template.md provides a reusable template for the final rubric package.