Use this skill for "review this paper", "review this manuscript", "peer review", "review my paper", "critique this manuscript", "review this submission", "give me feedback on my paper", "check my methods", "review my statistics", "review as a peer reviewer", "evaluate this manuscript", "review this PDF", or mentions manuscript review, peer review, paper critique, or methodological review.
neuromechanist · 15 stars · Apr 2, 2026
Categories: Academic
Skill content
Provides structured, rigorous peer review of academic manuscripts. Reviews prioritize methodological soundness, statistical validity, logical consistency, and reproducibility.
Note on review calibration: This skill reflects an opinionated review style that prioritizes methodological precision, statistical rigor, and reproducibility. It is direct, evidence-based, and holds manuscripts to high standards. The severity calibration (Critical, Major, Minor) follows a strict hierarchy: Critical issues block publication; Major issues require significant revision; Minor issues improve polish. Reviewers using this skill should adapt the tone and depth to their own standards and the target journal's expectations.
When to Use
Activate when the user wants peer-review feedback on a manuscript (journal article, conference paper, preprint) evaluated for methodological soundness, statistical validity, and clarity of presentation. The output is a structured review with categorized concerns and constructive suggestions.
Manuscript Intake
Manuscripts for peer review are typically provided as PDFs from journal submission systems.
PDF (most common): use a hybrid approach and convert to both markdown and PNG. Markdown gives efficient, searchable text for content analysis; PNG preserves exact page layout, line numbers, and figure positions for precise citations and figure inspection.
Step 2: Convert to PNG for page/line references and figure inspection:
uv run --with pdf2image --with pillow python -c "
from pdf2image import convert_from_path
pages = convert_from_path('manuscript.pdf', dpi=200)
for i, page in enumerate(pages):
    page.save(f'manuscript_page_{i+1}.png', 'PNG')
"
Note: requires poppler (brew install poppler on macOS, apt install poppler-utils on Linux). Alternatively, use pdftoppm -png -r 200 manuscript.pdf manuscript_page.
Workflow: Read the markdown for content review (methods, statistics, logic, literature). When citing a specific issue, refer to the PNG pages to provide exact page and line numbers (e.g., "page 4, line 23" or "p4 l23"). Use the PNGs to inspect figures, tables, and overall layout.
For large PDFs (>10 pages), read PNGs in batches as needed.
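For the batched conversion, pdf2image's convert_from_path accepts first_page and last_page arguments; a small helper (a sketch — the batch size of 10 is an arbitrary assumption) can compute the page ranges:

```python
def page_batches(total_pages, batch_size=10):
    """Yield (first_page, last_page) tuples, 1-indexed and inclusive,
    suitable for convert_from_path(..., first_page=a, last_page=b)."""
    for start in range(1, total_pages + 1, batch_size):
        yield (start, min(start + batch_size - 1, total_pages))

# Example: a 23-page manuscript in batches of 10
print(list(page_batches(23)))  # [(1, 10), (11, 20), (21, 23)]
```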
Markdown or LaTeX: Read directly; no conversion needed.
1. Read the manuscript
Read all sections including supplementary materials, appendices, and figures. Note the target journal if known, as expectations differ across venues (transactions vs. letters vs. conference proceedings).
2. Evaluate methodology
This is the core of the review. Evaluate using the checklist in references/methodology-checklist.md. Key areas:
Experimental design:
Is the design appropriate for the research question?
Are controls adequate?
Is the sample size justified (power analysis, or at minimum acknowledged)?
Are inclusion/exclusion criteria clearly stated and justified?
Are there potential confounds that are not addressed?
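When checking sample-size justification, the standard normal-approximation formula for a two-sided, two-sample comparison gives a quick sanity check. This is an illustrative approximation only (an exact power analysis uses the noncentral t distribution), sketched in plain Python:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect effect size d (Cohen's d)
    in a two-sided two-sample comparison, via the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha/2
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.5))  # 63 per group by this approximation (t-based tables give ~64)
```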
Signal processing and data analysis (when applicable):
Are filtering parameters appropriate? Check Nyquist constraints: the analysis bandwidth must not exceed half the sampling rate (Nyquist frequency) and should not exceed the low-pass filter cutoff.
Are artifact rejection/correction methods validated for the specific data type?
Are analysis parameters (e.g., window lengths, frequency bands) justified?
Is there any "double-dipping" where the same data features used for selection/clustering are also the analysis target?
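The Nyquist constraint above is easy to verify mechanically. A minimal sketch (function and parameter names are illustrative, not from any particular toolbox):

```python
def check_bandwidth(band_hz, fs_hz, lowpass_hz=None):
    """Flag analysis bands that violate the Nyquist limit or exceed
    the anti-aliasing low-pass cutoff. band_hz is (low, high) in Hz."""
    low, high = band_hz
    issues = []
    nyquist = fs_hz / 2
    if high > nyquist:
        issues.append(f"band edge {high} Hz exceeds Nyquist ({nyquist} Hz)")
    if lowpass_hz is not None and high > lowpass_hz:
        issues.append(f"band edge {high} Hz exceeds low-pass cutoff ({lowpass_hz} Hz)")
    return issues

# A 30-80 Hz gamma band at fs=128 Hz with a 45 Hz low-pass fails both checks
print(check_bandwidth((30, 80), fs_hz=128, lowpass_hz=45))
```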
Statistical methods:
Are the chosen tests appropriate for the data distribution and design?
Are parametric assumptions tested (normality, homogeneity of variance)?
For paired vs. unpaired comparisons, is the correct test variant used?
Are main effects tested before post-hoc comparisons?
Are multiple comparisons corrected?
Are effect sizes reported, not just p-values?
Do small sample sizes warrant the statistical conclusions drawn?
Are figures appropriate for the data? (Bar plots with error bars for N<5 are misleading; use individual data points instead.)
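Two of the checks above (multiple-comparison correction and effect-size reporting) can be sketched in plain Python. Holm's step-down correction and paired Cohen's d are shown as assumed examples; the appropriate correction method and effect-size measure depend on the study design:

```python
from statistics import mean, stdev

def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls family-wise error rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Enforce monotonicity: each adjusted p is at least the previous one
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

def cohens_d_paired(before, after):
    """Paired Cohen's d: mean of the paired differences over their SD."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / stdev(diffs)

print(holm_adjust([0.01, 0.04, 0.03]))  # ≈ [0.03, 0.06, 0.06]
```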
3. Check logical consistency
Trace the argument from introduction through methods to results and discussion:
Do the methods actually test the stated hypothesis?
Do the results support the claims made in the discussion?
Are conclusions proportional to the evidence? (Do not overreach.)
If the introduction frames a problem, do the methods address that exact problem?
Are terms and definitions used consistently throughout?
If a concept is introduced in the introduction, is it operationalized the same way in the methods?
Watch for contradictions: claims in the introduction that the authors' own methods cannot test, or discussion points that go beyond what the data show.
4. Evaluate literature coverage
Is the literature review current? (Check if key papers from the last 2-3 years are missing.)
Are the authors' claims supported by the cited literature, or do the cited papers actually argue otherwise?
Is related work from other groups or approaches acknowledged?
For the specific techniques used, are validation/limitation papers cited?
Are there relevant studies the authors should compare their results against?
Use opencite to verify literature claims and search for potentially missing references: