Write publication-ready ML/AI papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM. Use when drafting papers from research repos, conducting literature reviews, finding related work, verifying citations, or preparing camera-ready submissions. Includes LaTeX templates, citation verification workflows, and paper discovery/evaluation criteria.
Expert-level guidance for writing publication-ready papers targeting NeurIPS, ICML, ICLR, ACL, AAAI, and COLM. This skill combines writing philosophy from top researchers (Nanda, Farquhar, Karpathy, Lipton, Steinhardt) with practical tools: LaTeX templates, citation verification APIs, and conference checklists.
Use this skill in the following order unless the task is unusually narrow:
Consult references/OPERATING-MODES.md for operating modes, and references/citation-workflow.md as the canonical citation authority. Google Scholar may still help with manual discovery, but it is not the canonical verification authority in this skill; default verification should use programmatic sources such as Semantic Scholar, CrossRef, and arXiv.
Paper writing is collaborative, but Claude should be proactive in delivering drafts.
The typical workflow starts with a research repository containing code, results, and experimental artifacts. Claude's role is to turn those artifacts into a publication-ready draft.
Key Principle: Be proactive. If the repo and results are clear, deliver a full draft. Don't block waiting for feedback on every section—scientists are busy. Produce something concrete they can react to, then iterate based on their response.
This is the most important rule in academic writing with AI assistance.
AI-generated citations have a ~40% error rate. Hallucinated references—papers that don't exist, wrong authors, incorrect years, fabricated DOIs—are a serious form of academic misconduct that can result in desk rejection or retraction.
NEVER generate BibTeX entries from memory. ALWAYS fetch programmatically.
| Action | ✅ Correct | ❌ Wrong |
|---|---|---|
| Adding a citation | Search API → verify → fetch BibTeX | Write BibTeX from memory |
| Uncertain about a paper | Mark as [CITATION NEEDED] | Guess the reference |
| Can't find exact paper | Note: "placeholder - verify" | Invent similar-sounding paper |
If you cannot programmatically verify a citation, you MUST:
% EXPLICIT PLACEHOLDER - requires human verification
\cite{PLACEHOLDER_author2024_verify_this} % TODO: Verify this citation exists
Always tell the scientist: "I've marked [X] citations as placeholders that need verification. I could not confirm these papers exist."
For the best paper search experience, install Exa MCP which provides real-time academic search:
Claude Code:
claude mcp add exa -- npx -y mcp-remote "https://mcp.exa.ai/mcp"
Cursor / VS Code (add to MCP settings):
{
  "mcpServers": {
    "exa": {
      "type": "http",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
Exa MCP enables real-time academic searches by topic, method, author, or venue.
Then verify results with Semantic Scholar API and fetch BibTeX via DOI.
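The verify-then-fetch step can be sketched in Python. The endpoints below are the public Semantic Scholar Graph API and doi.org's BibTeX content negotiation; the helper names (`search_url`, `bibtex_request`, `fetch_bibtex`) are illustrative, not part of this skill.

```python
import urllib.parse
import urllib.request

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_url(query, fields=("title", "year", "authors", "externalIds")):
    """Build a Semantic Scholar paper-search URL for verifying a paper exists."""
    params = urllib.parse.urlencode({"query": query, "fields": ",".join(fields)})
    return f"{S2_SEARCH}?{params}"

def bibtex_request(doi):
    """doi.org returns BibTeX when asked for it via content negotiation."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
    )

def fetch_bibtex(doi, timeout=10):
    """Network call: fetch the authoritative BibTeX entry for a verified DOI."""
    with urllib.request.urlopen(bibtex_request(doi), timeout=timeout) as resp:
        return resp.read().decode()
```

The point of the sketch: BibTeX is always fetched from a resolver keyed by a verified identifier, never composed from memory.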
When beginning paper writing, start by understanding the project:
Project Understanding:
- [ ] Step 1: Explore the repository structure
- [ ] Step 2: Read README, existing docs, and key results
- [ ] Step 3: Identify the main contribution with the scientist
- [ ] Step 4: Find papers already cited in the codebase
- [ ] Step 5: Search for additional relevant literature
- [ ] Step 6: Outline the paper structure together
- [ ] Step 7: Draft sections iteratively with feedback
Step 1: Explore the Repository
# Understand project structure
ls -la
find . -name "*.py" | head -20
find . -name "*.md" -o -name "*.txt" | xargs grep -l -i "result\|conclusion\|finding"
Look for:
- README.md - Project overview and claims
- results/, outputs/, experiments/ - Key findings
- configs/ - Experimental settings
- .bib files or citation references

Step 2: Identify Existing Citations
Check for papers already referenced in the codebase:
# Find existing citations
grep -r "arxiv\|doi\|cite" --include="*.md" --include="*.bib" --include="*.py"
find . -name "*.bib"
These are high-signal starting points for Related Work—the scientist has already deemed them relevant.
Step 3: Clarify the Contribution
Before writing, explicitly confirm with the scientist:
"Based on my understanding of the repo, the main contribution appears to be [X]. The key results show [Y]. Is this the framing you want for the paper, or should we emphasize different aspects?"
Never assume the narrative—always verify with the human.
Step 4: Search for Additional Literature
Use web search to find relevant papers:
Search queries to try:
- "[main technique] + [application domain]"
- "[baseline method] comparison"
- "[problem name] state-of-the-art"
- Author names from existing citations
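The query templates above can be generated mechanically. A minimal sketch, assuming you have already extracted the technique, baseline, and problem names from the repo (the example values below are hypothetical):

```python
def candidate_queries(technique, domain, baseline, problem, authors=()):
    """Expand a project's key terms into literature-search query strings."""
    queries = [
        f"{technique} {domain}",           # main technique + application domain
        f"{baseline} comparison",          # find head-to-head evaluations
        f"{problem} state-of-the-art",     # find the current SOTA framing
    ]
    # Author names from existing citations are high-signal queries on their own.
    queries.extend(authors)
    return queries
```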
Then verify and retrieve BibTeX using the citation workflow below.
Step 5: Deliver a First Draft
Be proactive—deliver a complete draft rather than asking permission for each section.
If the repo provides clear results and the contribution is apparent, deliver the complete draft.
If genuinely uncertain about framing or major claims, confirm the framing with the scientist first (see Step 3).
Attach open questions to the draft itself, not before it, so the scientist has something concrete to react to.
Use this skill when drafting papers from research repos, conducting literature reviews, finding related work, verifying citations, or preparing camera-ready submissions.
Always remember: First drafts are starting points for discussion, not final outputs.
When conducting literature reviews, finding related work, or discovering recent papers, use this workflow to systematically search, evaluate, and select ML papers.
Literature Research Process:
- [ ] Step 1: Define search scope and keywords
- [ ] Step 2: Search arXiv and academic databases
- [ ] Step 3: Screen papers by title/abstract
- [ ] Step 4: Evaluate paper quality (5 dimensions)
- [ ] Step 5: Select top papers and extract citations
- [ ] Step 6: Verify citations programmatically
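Step 6's programmatic verification can be sketched against the CrossRef REST API. The endpoint and `query.bibliographic` parameter are real CrossRef API features; the helper names and the normalized exact-title match are one possible matching policy, not the only one.

```python
import json
import re
import urllib.parse
import urllib.request

CROSSREF = "https://api.crossref.org/works"

def crossref_query_url(title, rows=5):
    """Build a CrossRef bibliographic query URL for a candidate paper title."""
    params = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    return f"{CROSSREF}?{params}"

def normalize(title):
    """Lowercase and collapse punctuation so title comparison is robust."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def titles_match(candidate, found):
    return normalize(candidate) == normalize(found)

def verify_title(title, timeout=10):
    """Network call: return the DOI of an exact-title CrossRef match, else None."""
    with urllib.request.urlopen(crossref_query_url(title), timeout=timeout) as resp:
        items = json.load(resp)["message"]["items"]
    for item in items:
        if item.get("title") and titles_match(title, item["title"][0]):
            return item.get("DOI")
    return None
```

A `None` result means the paper could not be confirmed and the citation must be marked as a placeholder, never invented.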
Step 1: Define Search Scope
Identify specific research areas, methods, or applications:
- Methods: transformer architecture, graph neural networks, self-supervised learning
- Applications: medical image analysis, reinforcement learning for robotics, language model alignment
- Problems: out-of-distribution generalization, continual learning, fairness in ML

Step 2: Search arXiv
Use arXiv search with targeted keywords:
URL Pattern: