# /referee-report: Synthesize investigated katz issues into a narrative referee report

Synthesizes the structured issue data from a katz review into a narrative referee report suitable for submission to a journal editor or sharing with authors.
Before starting, confirm the review is in a valid state: `katz paper status` should return `"valid": true`. Then run the helper script to produce a structured summary of the review:

```shell
python <katz-skills-path>/referee-report/scripts/gather_review_data.py
```
This writes `.katz/review_data.json` containing the structured issue data from the review.
Read the output file to understand the full scope of the review.
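A short script can help inspect the gathered data. The field names below (`issues`, `status`, `id`) are assumptions about the JSON shape, not a documented schema:

```python
import json
from collections import Counter
from pathlib import Path

def summarize_issues(issues):
    """Tally issues by investigation status (e.g. confirmed / rejected / open)."""
    return Counter(issue.get("status", "unknown") for issue in issues)

# Hypothetical sample matching the assumed shape; the real file is
# .katz/review_data.json and its actual schema may differ.
sample = [
    {"id": "00b5b04c", "status": "confirmed"},
    {"id": "0e02ab37", "status": "confirmed"},
    {"id": "deadbeef", "status": "rejected"},  # hypothetical issue ID
]
print(summarize_issues(sample))  # Counter({'confirmed': 2, 'rejected': 1})

# Inspect the real output, if present (keys are assumed, not documented):
path = Path(".katz/review_data.json")
if path.exists():
    data = json.loads(path.read_text())
    print(summarize_issues(data.get("issues", [])))
```

A status tally like this is a quick sanity check that the review data covers what you expect before drafting the report.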
Read the abstract and introduction from the canonical manuscript to understand the paper's contribution and framing. Use `katz paper section <id>` to get line ranges, then read the relevant portions.
Write the report as markdown to `.katz/referee_report.md`. Follow this structure:
# Referee Report: [Paper Title]
**Date:** [today]
**Paper:** [source file]
**Review method:** Automated multi-model review via katz, with manual investigation
## Summary

One paragraph summarizing what the paper does, its main contribution, and its overall quality. Be specific about the methodology and findings; do not be generic.
## Overall Assessment

Your high-level evaluation. Address the soundness of the methodology, how well the evidence supports the central claims, and the clarity of the presentation.
Ground every judgment in specific evidence from the review. Do not make claims you cannot trace to a confirmed issue or a reading of the manuscript.
## Major Concerns

List the confirmed issues that represent substantive problems: things that should be addressed before publication. Group them thematically, not by section. Each concern should explain what the problem is, where it appears in the manuscript, and why it matters.
Group related issues under typical themes such as methodological soundness, statistical reporting, overclaiming, and internal consistency.
Do NOT list every confirmed issue individually if several point to the same underlying concern. Synthesize them into a coherent narrative. Reference the underlying issue IDs in parentheses for traceability, e.g., (issues 00b5b04c, 0e02ab37).
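A grouped concern might read like the following sketch. The theme, locations, and wording are invented for illustration; only the issue-ID convention comes from the guidance above:

```markdown
### Unsupported causal claims

Several passages state causal conclusions that the observational design
cannot support. The authors should either soften this language or justify
their identification strategy (issues 00b5b04c, 0e02ab37).
```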
## Minor Concerns

Confirmed issues that are real but less consequential: unclear writing, small inconsistencies, presentation issues. These can be listed more briefly. Again, group by theme rather than listing individually.
## Questions for the Authors

Any uncertain or open issues that the authors could clarify. Frame these as questions, not accusations.
## Strengths

If the review revealed strengths, note these. For example, many overclaiming flags may have been rejected because the paper hedges carefully, or the methodology may have held up against multiple challenges. Reviewers who only list problems are less useful than those who also identify what works.
Re-read the report. Check that every judgment traces to a confirmed issue or a reading of the manuscript, that related issues are synthesized rather than itemized, and that the tone is professional and constructive.
Report the path to the finished file.