Design, log, compare, and score prompt experiments so users can systematically improve outputs instead of guessing.
Bundled files: `{baseDir}/scripts/prompt_experiment_logger.py` and `{baseDir}/resources/eval_rubric.md`. Use the bundled script when it helps the user produce a structured file, manifest, CSV, or first-pass draft. Use the resource file as the default schema, checklist, or preset when the user does not provide one.
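As a rough illustration of the kind of structured CSV log such a script might emit, here is a minimal Python sketch. The field names (`experiment_id`, `prompt_variant`, `model`, `score`, `notes`) are illustrative assumptions, not the bundled script's actual schema.

```python
# Hypothetical sketch of a structured prompt-experiment log as CSV.
# Field names are assumptions, not the bundled script's real schema.
import csv
import io


def log_experiments(rows):
    """Serialize experiment records to CSV text, one row per experiment."""
    fields = ["experiment_id", "prompt_variant", "model", "score", "notes"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()


log_text = log_experiments([
    {"experiment_id": 1, "prompt_variant": "baseline", "model": "m1",
     "score": 3.5, "notes": "verbose output"},
    {"experiment_id": 2, "prompt_variant": "with-examples", "model": "m1",
     "score": 4.2, "notes": "more concise"},
])
print(log_text)
```

A log in this shape makes experiments easy to diff, sort by score, and load into a spreadsheet or pandas for comparison.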
`metadata.openclaw.requires`: `scripts/prompt_experiment_logger.py`, `resources/eval_rubric.md`.