Programmatic validation of figure data and generation code. Since LLMs cannot interpret rendered images, this skill ensures figure correctness by analysing the underlying data and plotting code — catching identical-value plots, missing labels, broken references, and other common defects.
Use this skill whenever figures are generated or modified — typically after the Revision Agent produces new plots, and before the EIC finalises the paper.
LLMs cannot see rendered figures. A plot where every bar is the same height, an
axis that shows NaN, or an \includegraphics pointing at a missing file will all
pass unnoticed in text-only review. This audit catches those problems
programmatically so the agents never need to interpret pixels.
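One such check, flagging a data column whose values are all identical, can be sketched as follows (a hypothetical helper for illustration, not the actual audit script):

```python
import csv

def flat_columns(csv_path):
    """Return names of numeric columns whose values are all identical,
    a strong sign the plotted data is degenerate."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    flat = []
    for col in (rows[0].keys() if rows else []):
        try:
            values = {float(r[col]) for r in rows}
        except (ValueError, TypeError):
            continue  # non-numeric column, skip
        if len(rows) >= 2 and len(values) == 1:
            flat.append(col)
    return flat
```

Running this over a results file whose accuracy column is 0.5 in every row would surface exactly the "all values identical" defect described above.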
| Phase | Trigger |
|---|---|
| Post-revision | Revision agent finished generating/updating figures |
| Pre-finalize | Before EIC declares Accept — final quality gate |
| Re-audit | After any code change that touches figures/, experiments/, or plotting scripts |
python scripts/audit-figures.py <submission_dir> --out <submission_dir>/.review/artifacts/figure-audit.yaml
Exit codes:
0 — all clear
1 — warnings only (non-blocking, note in review)
2 — critical issues found (blocks acceptance)

Checks:

- savefig with DPI < 150 → warning (below publication quality)
- \includegraphics{...} pointing to missing file → critical
- \ref{fig:X} without matching \label{fig:X} → warning
- \label{fig:X} never referenced (orphan) → warning

Output (written to the --out path):

figure_audit:
  submission: "my-paper"
  status: "fail"          # one of "pass" | "warning" | "fail"
  critical_count: 2
  warning_count: 3
  critical_issues:
    - "results.csv:accuracy: all 5 values are identical (0.5)"
    - "paper.tex: \includegraphics{figures/ablation.pdf} — file not found"
  warnings:
    - "plot_results.py: saved figure but no ylabel found"
    - "plot_results.py: uses random data without setting a seed"
    - "paper.tex: \label{fig:overview} is never referenced (orphan figure)"
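The \ref/\label cross-check behind the last two warning types can be sketched with a regex pass over the LaTeX source (regex-based, so refs built via macros may be missed; a sketch, not the script's actual implementation):

```python
import re

def check_fig_refs(tex_source):
    """Cross-check figure \\ref and \\label commands in a LaTeX source string.
    Returns (dangling_refs, orphan_labels)."""
    labels = set(re.findall(r"\\label\{(fig:[^}]+)\}", tex_source))
    refs = set(re.findall(r"\\ref\{(fig:[^}]+)\}", tex_source))
    # refs with no label are dangling; labels with no ref are orphan figures
    return sorted(refs - labels), sorted(labels - refs)
```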
After generating or updating any figure, run the audit and fix all critical issues before committing. Add to the completion checklist:
16. ✅ Figure audit passed (python scripts/audit-figures.py <submission>)
Before declaring Accept, run the audit as a final gate. If critical issues remain, dispatch the Revision Agent to fix them before finalising.
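The gate can key off the exit codes listed earlier. A minimal shell sketch (audit_gate is a hypothetical wrapper, not part of the skill):

```shell
# Map the audit script's exit code to a review action.
audit_gate() {
  case "$1" in
    0) echo "pass" ;;
    1) echo "warnings" ;;            # non-blocking, note in review
    2) echo "critical"; return 1 ;;  # blocks acceptance
    *) echo "unknown"; return 1 ;;
  esac
}

# Typical usage before declaring Accept:
#   python scripts/audit-figures.py <submission_dir> --out <submission_dir>/.review/artifacts/figure-audit.yaml
#   audit_gate $? || echo "dispatch Revision Agent before finalising"
```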
When reviewing figures, reference the audit report in
.review/artifacts/figure-audit.yaml for objective data quality signals
alongside your subjective assessment of presentation quality.
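To consume the report without assuming a YAML library is installed, the two count fields can be pulled out with a regex (a sketch; field names as in the report schema shown earlier):

```python
import re

def audit_counts(report_text):
    """Extract critical_count and warning_count from the audit report text.
    Assumes the flat key layout of figure-audit.yaml."""
    counts = {}
    for key in ("critical_count", "warning_count"):
        m = re.search(rf"^\s*{key}:\s*(\d+)\s*$", report_text, re.M)
        counts[key] = int(m.group(1)) if m else 0
    return counts
```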