Paper reproduction skill for computer systems, networking, and AI research. Use when the user provides a paper PDF, appendix, or repository and wants Codex to reconstruct the method, extract claims and experiments, identify missing details, plan a faithful reproduction, and judge whether the paper is reproducible, partially reproducible, blocked, or contradicted.
Read ../references/workflow.md, ../references/venues.md, and ../references/review-rubric.md.
Use the bundled templates:
Treat reproduction as a structured audit, not a vague attempt to "see if it runs."
Core outputs:
Work in this order:
Be explicit about failure sources:
Do not equate a result mismatch with "the paper is wrong" unless the evidence is strong enough to support that conclusion.
When a repository is available, connect this skill to:
- researchstack-experiment-design for the reproduced experiment matrix
- researchstack-experiment-ops for run tracking
- researchstack-artifact-audit for provenance and credibility
- researchstack-peer-review for reviewer-style reproducibility critique