Draft the Introduction and Abstract of any research paper following a structured narrative arc: hook → problem → gap → approach → contributions → roadmap. Also generates a reviewer pitch. Reads contributions and research focus from project/ config files.

Trigger when the user says any of the following:
- "write introduction"
- "draft abstract"
- "write intro section"
- "improve the abstract"
- "write paper intro for [venue]"
- "write contributions section"
- "draft the intro"
- "write the opening of the paper"
- "write abstract and introduction"
- "help me write the intro"
- "the introduction needs work"
- "draft contributions bullets"
- "write the intro for [venue]"
- "rewrite the abstract"
- "write a hook paragraph"
- "fix the intro"
- "I need an abstract"
You are helping write the Introduction and Abstract of a research paper. These are the most-read parts of any paper. They must be accurate (grounded in what the paper actually demonstrates), specific (include concrete numbers), and structured (follow a clear arc).
Read the project config files:
Read: project/research-focus.md
Read: project/contributions.md
Read: project/paper-paths.md
Read: project/venue-config.md
If project/contributions.md does not exist, stop and tell the user:
"I need `project/contributions.md` to write the introduction and abstract. Please run the `project-init` skill first, or create this file manually. The file format is:
```
# Paper Contributions
## Contributions
- We introduce [system], which [what it does].
- We demonstrate [capability] achieving [result].
- We release [artifact].
## Headline Result
[The single most important finding with a specific number]
```"
Extract from config:
- system_name from project/research-focus.md → {{SYSTEM_NAME}}
- review_mode from project/venue-config.md (yes/no)

If review_mode: yes, inform the user: "Review mode is active — all text will be anonymized." Otherwise, inform the user: "Review mode is inactive."
Determine the abstract word limit and the intro word budget from the venue settings in project/venue-config.md.
Before drafting, read everything that already exists. The abstract and intro must accurately reflect the paper's actual content.
Read: {{main_tex from project/paper-paths.md}}
Glob: {{sections_dir from project/paper-paths.md}}/*.tex
For each section file found, read it and note what the paper actually claims and demonstrates.
Look for the most recent results analysis file:
Glob("experiments/results_analysis_*.md") # sorted by modification time; use the last one
Read the most recent file. Extract the top 3–4 numbers to use in the abstract and contributions.
If no results file exists, read project/contributions.md which should have a "Headline Result" entry.
If that is also empty, ask the user:
"What are your top 3 headline results? I need specific numbers to make the abstract and contributions concrete. Example: 'Our system achieves 47% patch correctness, a 2× improvement over the best baseline.'"
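The "most recent file" lookup can be sketched in plain shell (a sketch only; the Glob call above is the canonical mechanism, and the experiments/ path mirrors the pattern it uses):

```shell
# Newest results_analysis_*.md by modification time wins; empty when none exist.
latest=$(ls -t experiments/results_analysis_*.md 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  echo "Using results file: $latest"
else
  echo "No results file; fall back to the Headline Result in project/contributions.md"
fi
```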
Before drafting, ask (or infer from context):
If the user doesn't express a preference, default to:
project/contributions.md

Use the word limit from Step 1.
Structure:
[Context — 1 sentence] Domain and why it matters.
[Problem — 1 sentence] Specific challenge this paper addresses.
[Approach — 1–2 sentences] What {{SYSTEM_NAME}} is and does.
[Result — 1 sentence] Headline number or key finding from project/contributions.md.
Template:
[Domain context sentence connecting to a real-world need.] We introduce {{SYSTEM_NAME}}, [1-sentence approach from project/research-focus.md]. [1-2 sentences describing the key method or framework.] Evaluating on [dataset/benchmark], we find that [headline result with number].
Fill placeholders from project/contributions.md (Headline Result) and project/research-focus.md.
Count words:
echo "abstract text" | wc -w
If over the limit: cut from the approach sentences first (keep context and result). Never cut the headline number.
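The raw `wc -w` count includes LaTeX markup. A rough sketch that strips comments and control sequences first (heuristic only; `texcount`, if installed, gives more faithful numbers):

```shell
# Count words in a LaTeX file, ignoring % comments and \commands.
# Rough heuristic: brace arguments like {Foo} still count as words.
count_words() {
  sed -e 's/%.*//' -e 's/\\[a-zA-Z]*//g' "$1" | wc -w
}
```

Usage: `count_words sections/abstract.tex` (file path illustrative).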
Follow the 6-part arc. Target the word budget from Step 1.
Statistic hook: Opens with a striking number about the domain's scale or cost.
"[Domain metric] — [what the statistic means]. [Why this matters.]"
Problem hook: Describes the painful manual process that exists today.
"When a [practitioner] encounters [problem], they must [manual steps — time-consuming and error-prone]."
Contrast hook: States what AI can do vs. what it still cannot.
"Large language models can [impressive capability], yet they struggle to [gap this paper addresses]."
Choose based on the user's preference from Step 4. Use the domain described in project/research-focus.md.
Translate the "Core Problem" from project/research-focus.md into 2–3 sentences.
Describe what existing work does NOT do that {{SYSTEM_NAME}} does. Ground this in:
- project/research-focus.md
- project/related-work-clusters.md (if it exists)

Pattern: "Prior work on [area] has [what it does]. However, [limitation that {{SYSTEM_NAME}} addresses]."
Translate the "Approach" from project/research-focus.md into 2–3 sentences.
Be concrete: what is the input, what is the output, what is the key mechanism?
Format as a LaTeX itemize block:
\noindent Our main contributions are:
\begin{itemize}[noitemsep,topsep=2pt]
\item \textbf{[Bold label].} [Description with specific number or capability].
\item \textbf{[Bold label].} [Description].
\item \textbf{[Bold label].} [Description].
\end{itemize}
Pull directly from project/contributions.md. Each bullet should open with a bold label and back its claim with a specific number or demonstrated capability.
In review mode: remove artifact release URLs; replace with "Code and data will be released upon acceptance."
The remainder of this paper is organized as follows:
Section~\ref{sec:background} provides background on [key concepts];
Section~\ref{sec:methodology} describes [the {{SYSTEM_NAME}} design / our approach];
Section~\ref{sec:experiments} presents [experimental results];
Section~\ref{sec:related} surveys related work; and
Section~\ref{sec:conclusion} concludes.
Only include \ref{} labels for sections that actually exist or will exist in the paper.
For each contribution bullet, verify it is actually demonstrated in the paper:
Flag any contribution that cannot be verified from existing sections. Ask the user to confirm or update.
For every \cite{} key used in the intro, check it exists in the bibliography:
Grep(pattern=r"{{cite_key}}", path="{{bibliography from project/paper-paths.md}}")
Typical citations needed:
For missing keys, note as TODO. Never invent DOIs or page numbers.
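The per-key check can also be run as one shell pass over the intro (a sketch; it handles plain `\cite{a,b}` only, not `\citep`/`\citet` variants, and the file names are illustrative):

```shell
# List every \cite{...} key in a file that has no matching entry in the .bib.
check_cites() {
  intro="$1"; bib="$2"
  grep -o '\\cite{[^}]*}' "$intro" | sed 's/\\cite{//; s/}//' | tr ',' '\n' |
  while read -r key; do
    # Bib entries look like "@article{key," so match on "{key,".
    grep -q "{$key," "$bib" || echo "TODO: missing bib entry: $key"
  done
}
```

Usage: `check_cites sections/intro.tex custom.bib`.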
If review mode is active:
Grep(pattern=r"our lab|our prior work|we previously|our previous|our earlier|our group",
path="{{sections_dir}}/intro.tex", output_mode="content")
Also check for:
Fix all hits before presenting the draft.
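The same scan can cover every section file at once (shell sketch; the directory name is illustrative and the phrase list should be extended with your group's usual tells):

```shell
# Flag self-identifying phrases in all section files (review-mode check).
# Hits are printed with file and line number; exits 0 either way.
grep -rniE 'our (lab|group|prior work|previous|earlier)|we previously' \
  sections/ 2>/dev/null && echo 'Anonymize the hits above before presenting the draft.' || true
```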
Write the introduction to {{sections_dir}}/intro.tex.
The file should begin:
% Introduction section — {{SYSTEM_NAME}} paper
% Generated by write-intro-and-abstract skill
\section{Introduction}
\label{sec:intro}
For the abstract: check if it is inline in the main .tex or in a separate file:
Grep(pattern=r"\\begin\{abstract\}", path="{{main_tex}}", output_mode="content")
Grep(pattern=r"\\input\{.*abstract", path="{{main_tex}}", output_mode="content")
If inline: edit the \begin{abstract}...\end{abstract} block in the main .tex using Edit.
If separate file: write to {{sections_dir}}/abstract.tex.
Check that \input{sections/intro} is in the main .tex:
Grep(pattern=r"\\input\{.*intro", path="{{main_tex}}", output_mode="content")
If not present, tell the user: "Add \input{sections/intro} after the abstract block in your main .tex."
After the draft, generate a reviewer pitch for internal reference (NOT submitted):
REVIEWER PITCH (internal reference — do not include in submission):
1. [Primary empirical claim with number]
Example: "First to [key novelty]; achieves [X]% [metric] — establishes the baseline."
2. [Key novelty claim]
Example: "[Finding] reveals [insight] — a systematic effect, not a model-specific failure."
3. [Artifact/community value, if applicable]
Example: "Full replication package: [artifact description] — enables comparison for future work."
Fill with actual numbers from Step 3.
- \cite{} keys verified in bibliography.
- \ref{} labels match actual section labels in the paper.
- \input{sections/intro} placement checked in main .tex.

SYNC REMINDER:
If your paper/ directory is a git submodule linked to Overleaf:
cd {{paper_root_dir}}
git add latex/sections/intro.tex latex/main.tex latex/custom.bib
git commit -m "Add introduction and abstract"
git push
After pushing, verify on Overleaf:
- Abstract word count in rendered PDF
- All citations resolve (no "?" in the PDF)
- Review mode ruler (line numbers) is present if in review mode