Pre-publication manuscript audit producing a section-level refactoring report with citation hygiene and submission-readiness checks. Triggers on: "review my paper", "check before submission", "is this ready to submit", "pre-pub checklist", "refactor my paper", "check my references", "does the abstract work".
Execute a comprehensive, multi-pass diagnostic audit of an academic or technical manuscript, producing a structured improvement report that identifies issues across 24 audit dimensions — from macro-coherence and argumentative architecture through claims-evidence calibration, narrative flow, prose microstructure, rendered visual inspection, and cross-element coherence, down to citation hygiene and reproducibility.
The output is a prioritized, actionable improvement plan — not a line edit. The goal is to surface structural, logical, and clarity issues that authors systematically miss because they're too close to the text.
Optimized for arXiv/preprint submissions with flexible compliance standards.
Companion skill: manuscript-provenance audits whether manuscript content
(numbers, tables, figures, ordering, terminology) is computationally derived
from code and scripts. This skill audits the document as prose; that skill
audits computational grounding. Run both for complete pre-publication coverage.
| Concern | This skill (manuscript-review) | manuscript-provenance |
|---|---|---|
| Reproducibility | Does the paper describe enough to reproduce? (§6) | Does the code actually produce what the paper claims? (§1, §7) |
| Figures/Tables | Legible, accessible, well-formatted? (§12) | Generated by scripts, not manual entry? (§2, §3) |
| Rendered visuals | Readable at print scale? Floats near references? (§23) | Figure generation script produces correct format? (§3) |
| Hyperparameters | Listed in the paper with rationale? (§6) | Values trace to config files, not hardcoded? (§1, §8) |
| Code availability | Statement exists in the paper? (§17) | Repo URL valid, README accurate, pipeline works? (§11) |
| Terminology | Abbreviations consistent within document? (§14) | Terms match code identifiers? (§5) |
| Significant figures | Consistent precision within document? (§12) | Precision matches script output? (§2) |
| Figure format | Appropriate format for document quality? (§12) | Format generated by script, not manually exported? (§3) |
| Computational cost | Reported in the paper? (§7) | Values trace to benchmarking scripts? (§1) |
| Macro-prose coherence | Prose framing appropriate for injected value? (§24) | Value traced to code, macro manifest produced? (§4) |
| Cross-element consistency | Prose, captions, figures, tables mutually consistent? (§24) | All elements from same run/pipeline output? (§9) |
Rule: This skill never opens the codebase. manuscript-provenance never judges prose quality. Each reads the other's report when available.
Integration point — Macro Manifest: manuscript-provenance produces a
macro manifest as part of its §4 audit: a structured list of every
macro-injected value, its resolved numeric value, its source (script + output
file), and its location(s) in the manuscript text. This skill's Pass 13
(Cross-Element Coherence) consumes that manifest to check whether the prose
surrounding each injected value is appropriate for the actual value. If no
provenance report exists, this skill extracts macro values directly from
.tex source (less precise — no source tracing, but coherence check still
runs).
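A minimal sketch of how the coherence pass might ingest injected values, assuming a JSON manifest whose entries carry `macro`, `value`, `source`, and `locations` fields (the field names are assumptions; the actual format is whatever manuscript-provenance emits) and a plain `\newcommand` scrape as the .tex fallback:

```python
import json
import re
from pathlib import Path

# Assumed formats: a JSON manifest, and numeric macros defined via \newcommand{\name}{value}.
NUMERIC = re.compile(r"-?\d+(?:\.\d+)?")
MACRO_DEF = re.compile(r"\\newcommand\{\\(\w+)\}\{([^}]*)\}")

def load_injected_values(manifest_path: Path, tex_paths: list[Path]) -> list[dict]:
    """Return one record per macro-injected value for the coherence check.

    Prefers the provenance macro manifest; falls back to scraping macro
    definitions from the .tex source, which loses source tracing but keeps
    the prose-vs-value coherence check running.
    """
    if manifest_path.exists():
        entries = json.loads(manifest_path.read_text())
        return [{"macro": e["macro"], "value": e["value"],
                 "source": e.get("source"), "locations": e.get("locations", [])}
                for e in entries]
    records = []
    for tex in tex_paths:
        for name, body in MACRO_DEF.findall(tex.read_text()):
            number = NUMERIC.search(body)
            if number:
                records.append({"macro": name, "value": float(number.group()),
                                "source": None, "locations": []})
    return records
```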
Read the uploaded manuscript. Accept PDF, DOCX, LaTeX source, or Markdown. If multiple files are uploaded (e.g., main text + supplementary), process all of them.
Identify:
For arXiv submissions, compliance checks are advisory. Focus on technical quality, reproducibility, and clarity rather than strict formatting rules.
Read references/checklist.md — the comprehensive 24-section, ~175-checkpoint
refactoring checklist. Every audit pass is structured against this checklist.
Execute the following passes sequentially. Each pass maps to one or more checklist sections. Work systematically — for each checkpoint:
Pass 1 — Structural Integrity (Checklist §1, §4, §5, §10)
Pass 2 — Abstract & Title Calibration (Checklist §2, §3)
Pass 3 — Technical Rigor (Checklist §6, §7)
Pass 4 — Argumentation Quality (Checklist §8, §9)
Pass 5 — Citation & Reference Hygiene (Checklist §11)
Pass 6 — Visual & Tabular Quality (Checklist §12)
Pass 7 — Prose Mechanics (Checklist §13, §14, §15)
Pass 7b — AI-Pattern Detection (advisory)
Scan prose sections for residual AI-writing patterns using detection rules
from references/detection-patterns.md. Academic manuscripts
drafted or polished with AI assistants often retain detectable tells.
Focus on patterns relevant to academic writing:
Skip patterns that are acceptable in academic prose:
This pass is MEDIUM priority. Flag findings but do not over-correct — academic conventions overlap with some AI patterns. Severity: report individual instances as LOW; flag clusters of 3+ patterns in a single paragraph as MEDIUM.
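A sketch of how the severity rule might be applied, assuming the detection rules can be expressed as regular expressions; the patterns below are placeholders, not the contents of references/detection-patterns.md:

```python
import re

# Placeholder patterns; the real rules live in references/detection-patterns.md.
PATTERNS = {
    "delve into": re.compile(r"\bdelves? into\b", re.IGNORECASE),
    "not only / but also": re.compile(r"\bnot only\b.+?\bbut also\b", re.IGNORECASE),
    "crucial role": re.compile(r"\bplays? a (?:crucial|pivotal|vital) role\b", re.IGNORECASE),
}

def scan_paragraphs(paragraphs: list[str]) -> list[dict]:
    """Report pattern hits per paragraph: LOW for isolated instances,
    MEDIUM when three or more hits cluster in a single paragraph."""
    findings = []
    for index, paragraph in enumerate(paragraphs):
        hits = [name for name, pattern in PATTERNS.items()
                for _ in pattern.findall(paragraph)]
        if hits:
            findings.append({"paragraph": index, "patterns": hits,
                             "severity": "MEDIUM" if len(hits) >= 3 else "LOW"})
    return findings
```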
Pass 8 — Best Practices & Reproducibility (Checklist §16, §17, §18, §19)
Pass 9 — Claims-Evidence Calibration (Checklist §20)
This is a dedicated pass through every assertion in the manuscript.
For each claim:
This pass is HIGH priority. Claims-evidence mismatch is the single most common reason reviewers reject papers. An overclaim in the abstract poisons the entire reading.
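As a rough illustration of the calibration idea only (the pass itself is expert judgment, not keyword matching), claim wording can be bucketed by hedging strength and compared against the kind of evidence cited; the cue lists and evidence taxonomy below are assumptions, not the checklist's criteria:

```python
# Assumed cue lists and evidence taxonomy, purely for illustration.
STRONG_CUES = ("demonstrates", "proves", "establishes", "guarantees")
HEDGED_CUES = ("suggests", "indicates", "may", "is consistent with")
WEAK_EVIDENCE = ("single-run", "anecdote", "qualitative-example")

def claim_strength(sentence: str) -> str:
    lowered = sentence.lower()
    if any(cue in lowered for cue in STRONG_CUES):
        return "strong"
    if any(cue in lowered for cue in HEDGED_CUES):
        return "hedged"
    return "neutral"

def overclaim_flag(sentence: str, evidence_kind: str) -> str | None:
    """Flag a claim whose wording outruns its support (evidence_kind uses the
    assumed taxonomy above, e.g. "proof", "multi-seed-experiment", "single-run")."""
    if claim_strength(sentence) == "strong" and evidence_kind in WEAK_EVIDENCE:
        return "overclaim: strong wording backed by weak evidence"
    return None
```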
Pass 10 — Narrative Flow & Coherence (Checklist §21)
Read the manuscript linearly, tracking the reader's cognitive state. At each sentence and paragraph boundary, check:
Flag any location where a domain-expert reader would need to re-read, scroll back, or pause to reconstruct the logical connection. These are flow breaks.
This pass is HIGH priority. Papers with strong results but poor narrative flow exhaust reviewers. A reader who has to fight the text stops trusting the author.
Pass 11 — Prose Microstructure (Checklist §22)
Sentence-level and paragraph-level patterns that compound into readability