Multi-session research project skill for building literature surveys, research papers, and deep-dive collections. Use this skill whenever the user wants to survey a topic across multiple dimensions, build toward a paper or set of papers, do a literature review, or create a structured research project with section-by-section deep-dives. Trigger phrases include: "survey what people are saying about X", "write a paper on X", "do a literature review of X", "build a research project on X", "create a survey of X", any mention of "literature collection" + "deep-dives" + "paper", and references to continuing an existing paper-drive project ("continue the GenAI outlook", "pick up where we left off on the survey"). Also triggers when a user uploads or references multiple papers/sources and wants them synthesized into a structured output. This skill orchestrates long-running multi-session research — it is NOT for single-source reading (use read-aid for that).
A long-running research project skill that builds from literature collection through section deep-dives to papers and presentations. Designed for multi-session work where each session advances the project incrementally.
The source adequacy discipline: Never build a section deep-dive without first assessing whether sources are sufficient. If they aren't, search for what's missing. Only proceed when adequate. This single rule prevents the most common failure mode — writing confidently from insufficient evidence.
Before doing anything, clarify with the user:
Every paper-drive project gets this directory structure. Create it immediately — don't wait until later:
{project-name}/
├── .project/
│ ├── changelog.md # Session-by-session timeline
│ ├── todo.md # Deliverables plan + source adequacy table
│ └── methodology.md # Living document — started at project creation, evolved every session
├── notes/
│ └── 00-synthesis.md # Cross-section synthesis (built incrementally)
├── index.html # Project landing page — created NOW, updated every commit
└── literature-collection.md # Master source list
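The scaffolding step can be sketched as a small helper (the `create_project` name is illustrative; paths come from the tree above):

```python
from pathlib import Path

def create_project(root: str) -> Path:
    """Create the standard paper-drive skeleton under `root`."""
    base = Path(root)
    # .project/ with parents=True also creates the project root itself
    (base / ".project").mkdir(parents=True, exist_ok=True)
    (base / "notes").mkdir(exist_ok=True)
    for f in [".project/changelog.md", ".project/todo.md",
              ".project/methodology.md", "notes/00-synthesis.md",
              "index.html", "literature-collection.md"]:
        (base / f).touch()
    return base
```

Run it once at project creation; every later session assumes this layout exists.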
methodology.md starts on day one. Don't wait until the end. Write it as a living document that captures decisions as they're made:
# Methodology — {Project Title}
## Research Question
{What we're investigating and why}
## Scope & Dimensions
{The N axes this survey covers, and what's deliberately excluded}
## Source Strategy
{How sources are found — web search, arXiv, citation tracking, etc.}
{What types of sources are prioritized and why}
## Reading & Synthesis Approach
{Technical survey vs discourse survey}
{Reading depth tiers if applicable}
## Deliverables Plan
{What papers, deep-dives, slides we intend to produce}
---
*This document evolves as the project progresses. Each session updates
it with decisions made, scope changes, and methodological refinements.*
Update methodology.md and todo.md every session with: scope changes, new decisions about source strategy, sections that required gap-filling and what was found, anything that changed from the original plan, and current completion status.
The TODO must include a source adequacy table from day one:
| Section | Current Sources | Adequate? | Gaps |
|---------|----------------|-----------|------|
| S1: ... | 0 | TBD | |
| S2: ... | 0 | TBD | |
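A sketch of rendering that table programmatically so todo.md stays consistent across sessions (function name and the tuple shape are my own choices):

```python
def adequacy_table(sections: dict) -> str:
    """Render the source adequacy table for todo.md.
    `sections` maps 'S1: Title' -> (source_count, verdict, gaps)."""
    rows = ["| Section | Current Sources | Adequate? | Gaps |",
            "|---------|----------------|-----------|------|"]
    for name, (count, verdict, gaps) in sections.items():
        rows.append(f"| {name} | {count} | {verdict} | {gaps} |")
    return "\n".join(rows)
```

Regenerate the table each session rather than hand-editing rows, so verdicts and gap notes never drift out of sync.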
index.html is created NOW — as soon as you know the project dimensions and deliverables. See "Index Page Structure" below for layout. All sections start as <span class="status planned">planned</span>. The index is your single monitoring dashboard for the entire project; it gets updated at every commit as items move from planned → wip → done.
The index page ordering depends on the project. Each standard section starts as planned and flips to done when built.

Key deliverable cards (when applicable) use a horizontal grid layout:
<h2>Key Deliverables</h2>
<div class="deliverable-grid">
<div class="deliverable-card">
<div class="section-title">Survey <em>Paper</em> <span class="status planned">planned</span></div>
<div class="section-desc">Comprehensive survey synthesizing all N dimensions.</div>
<div class="section-links">
<a class="wip">Paper (pending)</a>
<a class="wip">Slides (pending)</a>
</div>
</div>
</div>
CSS for the grid (add to the index stylesheet):
.deliverable-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
gap: 12px;
margin-bottom: 12px;
}
.deliverable-card {
border: 1px solid var(--border);
padding: 20px 24px;
transition: all 0.2s;
}
.deliverable-card:hover {
border-color: var(--accent);
background: var(--accent-soft);
}
As papers are completed, update the status badge and swap <a class="wip"> placeholders for real links. Slides link from the same card as their paper:
<div class="section-links">
<a href="survey-paper.html">Paper →</a>
<a href="survey-slides.html">Slides →</a>
</div>
Check sibling project indexes in the repo for style conformity before building. Match the same CSS variables, font stack, and class naming conventions.
Do a broad web search sweep (5-10 searches across all dimensions). Build literature-collection.md:
Commit and push immediately. The lit collection is the first deliverable — don't bundle it with other work.
After the literature collection is committed, go straight into section deep-dives (see below). Do NOT build the overview first. The overview is a synthesis — it requires the depth of understanding that only comes from building each section's deep-dive.
After all planned section deep-dives are built, step back and review:
Flip completed sections to done, and add any new sections as wip. Only after this review, proceed to the overview and papers (see "After All Sections").
For each section (S1, S2, ..., SN), follow this sequence without exception:
Before writing a single line of content, assess:
Present the assessment to the user. State the verdict clearly: "6 sources, adequate" or "3 sources, NOT enough — missing X, Y, Z."
If inadequate: Search for specific gaps. Add to literature collection. Re-assess. Do not proceed until adequate.
Update the TODO source adequacy table with the verdict.
For web sources: fetch via web_fetch, extract key claims and data points.
For papers: download PDF, extract via PyMuPDF, read at appropriate depth.
Write notes/{NN}-{name}.md:
# S{N}: {Section Name}
> One-line summary.
**Sources:** N (list)
**Adequacy:** Verdict
---
## 1. {Sub-topic synthesized across sources}
{NOT source-by-source. Synthesize.}
## 2. {Next sub-topic}
...
## N. What Sources Agree On
- ...
## N+1. Where Sources Disagree
- {Claim}: {Source A says X} vs {Source B says Y}
Build using read-aid's deep-dive template. Read the template reference (/mnt/skills/user/read-aid/references/deep-dive-reference.html) if CSS/JS not cached.
Critical rules:
- Modify only the sections[], summaries[], details[] arrays and the ← CHANGE markers in the template JS
- Run node --check before committing
- Template reference: /mnt/skills/user/read-aid/references/deep-dive-reference.html
- Cache the template CSS (between <style> and </style>) to /home/claude/survey/template_css.txt
- Cache the template JS (between <script> and </script>, EXCLUDING the content arrays — just the DOM-building code) to /home/claude/survey/template_js.txt

The template JS contains hero markers with ← CHANGE comments:
- Hero kicker: GENAI 2026 · S1 · 8 SOURCES (or similar)
- Hero title: Your Topic <em>Title</em>
- Sidebar logo: <div class="sidebar-logo">L<em>M</em></div>

Each section build patches these markers with section-specific values.
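The patching step is plain string replacement; a minimal sketch (marker strings are taken from the reference template and may differ in your cached copy):

```python
def patch_hero(js: str, kicker: str, title_html: str, logo_html: str) -> str:
    """Replace the template's hero markers with section-specific values."""
    replacements = {
        "GENAI 2026 · S1 · 8 SOURCES": kicker,
        "Your Topic <em>Title</em>": title_html,
        '<div class="sidebar-logo">L<em>M</em></div>': logo_html,
    }
    for old, new in replacements.items():
        js = js.replace(old, new)
    return js
```

If a marker fails to replace, the template copy is stale; re-read the reference file rather than guessing the marker text.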
This is where quality is won or lost. Detail panels must be analytical mini-essays, not bullet-point summaries of the markdown notes.
| Quality | Chars | Paragraphs | Description |
|---|---|---|---|
| ❌ Thin | <2K | 2-3 | Data dump, bullet lists |
| ⚠️ OK | 3-5K | 4-6 | Narrative but shallow |
| ✅ Deep | 6-10K | 6-9 | Analytical arguments with evidence |
Target: 6-10K characters per detail panel. Each should contain:
- A <div class="detail-header"> with section number and em-wrapped title
- <h3> headers
- SVG diagrams (in <div class="diagram-wrap">)
- A <div class="callout"> box with a key insight

Run the SVG overflow check and link integrity check after building each section. See references/verification.md for the scripts.
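The depth thresholds above can be checked mechanically. A sketch (the tag-stripping regex is a rough approximation of visible text, and the band boundaries interpolate the gaps in the table):

```python
import re

def classify_panel(html: str) -> str:
    """Rough depth rating for one detail panel's HTML."""
    text = re.sub(r"<[^>]+>", "", html)  # strip tags, keep visible text
    n = len(text)
    if n < 2000:
        return "thin"
    if n < 6000:
        return "ok"
    return "deep"
```

Run it over every panel after a section build; any "thin" verdict means the panel is a data dump, not an analytical mini-essay.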
After building sections (can be done per-section or in a batch after multiple sections), do a systematic pass:
Every bare S1–S8 mention in content links to the corresponding deep-dive:
<a href="s3-agents.dd.html" title="AI Agents">S3</a>
Include a title attribute with the section name (shows on hover).

Link first mention of each source per detail panel (not just per file). Readers enter at any panel and should see linked references.
<a href="https://mckinsey.com/..." target="_blank">McKinsey</a>
Key patterns:
- Match both McKinsey and McKinsey’s
- Be careful with IBM — it appears in IBM Plex Mono font declarations hundreds of times. Only match specific forms: IBM’s framework, IBM’s position, etc.
- Never insert links inside <svg>, <a>, font-family, or HTML tag attributes
- Verify: balanced <a>/</a> tags, zero links inside SVG, zero nested <a> tags

Commit with descriptive message. Push immediately. Provide the live URL to the user.
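The verification bullets can run as one pass. A sketch (the regexes approximate the checks; they are not a full HTML parser):

```python
import re

def check_links(html: str) -> dict:
    """Link integrity: balanced <a> tags, none inside SVG, none nested."""
    svg_content = "".join(re.findall(r"<svg.*?</svg>", html, re.S))
    return {
        "balanced": len(re.findall(r"<a[\s>]", html)) == html.count("</a>"),
        "links_in_svg": len(re.findall(r"<a[\s>]", svg_content)),
        # an <a ...> opened, then another <a ...> before any </a>
        "nested": re.search(r"<a[\s>](?:(?!</a>).)*<a[\s>]", html, re.S) is not None,
    }
```

All three keys must come back clean before committing a deep-dive.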
Commit message pattern:
S{N}: {Name} — deep-dive + markdown ({source count} sources, {SVG count} SVGs)
{Brief description of key findings}
The deep-dives and papers are for readers who are smart and curious but may not live in your domain. They shouldn't need a PhD to follow, but they also shouldn't feel talked down to. The goal is the style of a great longform magazine piece — thorough, engaging, confident.
Do:
Don't:
Tone calibration: Imagine explaining this to a curious, well-read person — a scholar in an adjacent field, a software engineer who reads widely, an educated generalist. They want the substance, they can handle complexity, but they won't tolerate jargon without payoff or throat-clearing before the point.
SVG diagrams appear inline in detail panels. They must follow strict rules to avoid overflow, rendering issues, and visual inconsistency.
- Background rect matches the viewBox (x="0" y="0" width="400" height="{vb_height}")
- Wrap every SVG in <div class="diagram-wrap">

All text must have >5px margins from viewBox edges AND from containing rect edges.
Character width formulas:
- Monospace: width ≈ font-size × 0.62 × char_count
- Sans-serif: width ≈ font-size × 0.52 × char_count

For text-anchor="middle", check BOTH edges:
- Left edge: x - (text_width / 2)
- Right edge: x + (text_width / 2)

Practical limits:
- <text> lines

After building any section with SVGs, run the verification script in references/verification.md.
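The width formulas and the two-edge check fold into one function. A sketch (factors from above; the function name and defaults are my own):

```python
def fits(x: float, text: str, font_size: float, mono: bool = True,
         vb_width: float = 400, margin: float = 5) -> bool:
    """Check a text-anchor="middle" label against both viewBox edges."""
    factor = 0.62 if mono else 0.52   # per-character width factor
    half = font_size * factor * len(text) / 2
    return (x - half) >= margin and (x + half) <= vb_width - margin
```

Call it for every centered label before committing; a False result means the label overflows and needs a smaller font, shorter text, or a moved anchor.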
- class="lbl" (muted), class="lbl-hi" (accent)
- font-family="'IBM Plex Mono',monospace" for data/labels
- font-family="'Jost',sans-serif" for headings inside SVG

Use these consistently across all sections. Map to concepts (e.g., purple for primary category, green for results, violet for contrasts).
Rect fills (transparent overlays):
- fill="rgba(91,91,214,0.06)" / stroke="#5b5bd6"
- fill="rgba(124,58,237,0.08)" / stroke="#7c3aed"
- fill="rgba(14,143,208,0.06)" / stroke="#0e8fd0"
- fill="rgba(20,150,110,0.04)" / stroke="#14966e"
- fill="rgba(0,0,0,0.03)" (callout/annotation boxes)
- class="lbl" — muted label text (theme-mapped)
- class="lbl-hi" — accent/highlight text
- class="border" — muted borders/lines

Example skeleton:

<div class="diagram-wrap"><svg viewBox="0 0 400 240" xmlns="http://www.w3.org/2000/svg">
<rect x="0" y="0" width="400" height="240" fill="rgba(0,0,0,0.025)" rx="6"/>
<text x="200" y="20" text-anchor="middle" font-family="'IBM Plex Mono',monospace"
font-size="10" class="lbl" letter-spacing="1.5">CHART TITLE</text>
<line x1="30" y1="32" x2="370" y2="32" stroke="currentColor" class="border" stroke-width="0.5"/>
<!-- Content zone: y=44 to y=220 -->
<!-- Bottom callout -->
<rect x="50" y="220" width="300" height="16" fill="rgba(0,0,0,0.03)" rx="3"/>
<text x="200" y="230" text-anchor="middle" font-family="'IBM Plex Mono',monospace"
font-size="7" class="lbl">Bottom annotation — max ~45 chars</text>
</svg></div>
A good SVG has 15-30+ elements. If fewer than 10, it's too sparse — pack more information in. Use labeled boxes, stat callouts, timelines, comparison columns, or flow diagrams.
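A quick density probe in the same spirit (a heuristic only; the tag list covers common drawables, not every SVG element):

```python
import re

def svg_element_count(svg: str) -> int:
    """Count drawable elements in one SVG; <10 usually means too sparse."""
    return len(re.findall(r"<(rect|text|line|circle|path|polyline|polygon)\b", svg))
```

Run it per diagram after a section build and rework anything under 10 elements.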
Build the overview only after all section deep-dives are complete. By this point you have deep, source-grounded understanding of every dimension — the overview benefits from that depth rather than being a shallow pre-read.
Use read-aid's deep-dive template. One section per dimension. Summary level — but informed by the full analysis in each section. The overview should capture what you now know that you didn't know at the literature-collection stage: which tensions are sharpest, which findings surprised you, which dimensions connect in unexpected ways.
Source assessment for the overview is unnecessary — you've already assessed every section.
Synthesize from completed section deep-dives. Every paper-drive project produces at minimum a survey paper. Projects in the repo that have deep-dives but no papers are incomplete — they should be updated when resumed. Paper types:
Build as markdown first, then render as HTML (sidebar nav, academic typography, light/dark toggle).
Paper files live at project root (e.g., survey-paper.html, tensions-paper.html) — NOT in a papers/ subdirectory. Same level as deep-dives.
Papers are standalone final deliverables. A reader should be able to read a paper without ever seeing the deep-dives. Deep-dives are internal/intermediate work products — papers must not reference or link to them. All evidence in a paper comes from the original sources via the literature collection.
Every source name must be a hyperlink on first mention per section. The literature collection has the URLs:
<a href="https://pewresearch.org/..." target="_blank">Pew Research</a> found that...
After first mention in a section, the bare name is fine. Additionally, add a superscript citation for key claims (see Citation & Linking Discipline below).
Stat claims need source attribution. Not just "347 million Muslims" but "Pew's 2025 study found 347 million Muslims." The source name should be linked, and the claim should carry a superscript citation.
Single consolidated References section at the bottom of the paper. No per-section source footers. Use Wikipedia-style <sup class="fn"><a href="#ref-N">[N]</a><span class="tip"><a href="URL" target="_blank">Title →</a></span></sup> inline, with a numbered <ol class="ref-list-num"> at the bottom. See Citation & Linking Discipline for the full HTML/CSS pattern.
No references to deep-dives. Papers don't say "See S1 deep-dive for details." The paper IS the details. Deep-dives are working documents that fed the synthesis; they're not part of the reader's journey.
Build links and citations inline as you compose. Don't plan to "add links later" — it never happens. Number sources in order of first appearance.
Verification after building any paper:
- <a href — every source from the literature collection that appears should be linked
- dd.html references — these should not exist in papers
- grep -c 'class="fn"' paper.html — if under 10 for a multi-section paper, citations are sparse
- grep -c '<li id="ref-' paper.html — should match total unique sources cited

Each paper should have an accompanying slide deck. Slides are linked from the same deliverable card on the index, not listed separately.
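The four grep checks above can run as one pass over the paper HTML. A sketch (the patterns mirror the greps; `paper_checks` is an illustrative name):

```python
import re

def paper_checks(html: str, n_sources: int) -> dict:
    """Post-build verification counts for a paper."""
    return {
        "source_links": len(re.findall(r"<a href", html)),
        "dd_refs": len(re.findall(r"dd\.html", html)),       # must be 0
        "citations": len(re.findall(r'class="fn"', html)),   # sparse if < 10
        "refs_match": len(re.findall(r'<li id="ref-', html)) == n_sources,
    }
```

Pass the unique-source count from the literature collection as `n_sources`; a failed `refs_match` means the bottom reference list is out of sync with the inline citations.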
Use read-aid's slide-deck template. 20-30 slides. One idea per slide. SVG diagrams (see SVG Diagram Standards above). Hover citations on key claims. Same citation discipline as papers — source names linked, stats attributed.
Slide decks are also standalone — they don't reference deep-dives.
methodology.md has been evolving since project creation. Do a final pass: add a Limitations section (what was excluded and why, reading depth tradeoffs, source biases), a Reproducibility section (how someone could replicate or extend the work), and a "What Broke" section (bugs, wrong downloads, data issues encountered and how they were resolved). The honesty of this document is itself a contribution.
These are things the skill should do without being asked — they represent decisions the user previously had to make manually:
- Never create papers/ or slides/ subdirectories. Flat structure.
- Deliberate accent-word selection in headings (<em>)

The <em> word in headings is the differentiator — the word that distinguishes THIS section from all other sections in the same document. It answers "what is unique about THIS section?" not "what is this document about?"
The rule: Accent the word that, if removed, would make the heading indistinguishable from other headings.
Heuristics:
- Never pick the accent word mechanically via .map((w, j) => j === last ? '<em>' : w) patterns. Every accent word is a deliberate editorial choice.
✓ The *Cognitive* Science of Religion — cognitive distinguishes from anthropological/sociological
✗ The Cognitive Science of *Religion* — religion is every section's subject
✓ AI & *Computational* Humor — computational is the domain
✗ AI & Computational *Humor* — humor is every section's subject
✓ *Literary* Applications — literary is the specific type of application
✗ Literary *Applications* — applications is generic filler
✓ The *Neuroscience* of Laughter — neuroscience is the approach/lens
✗ The Neuroscience of *Laughter* — laughter is the paper's subject, not a differentiator
✓ Depression as *Bidirectional* Loop — bidirectional is the reframe
✗ Depression as Bidirectional *Loop* — loop is structural, not distinctive
After building any deliverable, verify:
- <em> falls on the differentiator, not a repeated subject word or generic last word
Universal rules (all deliverables):
- Every named source gets an <a href> link.
- Check with grep -c '<a href' file.html — if the count is low relative to the number of sources mentioned, something is wrong.

In deep-dives: Link source names to their reports (first mention per detail panel).
In papers — Wikipedia-style superscript citations:
Papers use a consolidated citation system modeled on Wikipedia. No per-section source footers. Instead:
Inline superscripts — Use <sup class="fn"><a href="#ref-N">[N]</a><span class="tip"><a href="URL" target="_blank">Title →</a></span></sup> where N is the source number. The [N] click scrolls to the bottom reference list. The tooltip hover shows the source title as a clickable link to the actual source.
Single consolidated References section at the bottom of the paper, after Methods/Limitations:
<section class="paper-section" id="references">
<div class="s-num">REFERENCES</div>
<h2>References</h2>
<ol class="ref-list-num">
<li id="ref-1" value="1"><a href="URL">Source Title (Year)</a></li>
...
</ol>
</section>
Required CSS for the citation system:
sup{font-size:11px;line-height:0;vertical-align:super}
sup.fn{position:relative;cursor:pointer;font-family:var(--sans)}
sup.fn>a{color:var(--accent);font-weight:600;text-decoration:none;border-bottom:none}
sup.fn:hover>a{color:var(--bright)}
sup.fn .tip{display:none;position:absolute;bottom:calc(100% + 4px);left:50%;transform:translateX(-50%);background:var(--bg2);border:1px solid var(--border);padding:8px 14px;font-family:var(--sans);font-size:13px;font-weight:400;color:var(--text);white-space:normal;max-width:480px;min-width:200px;z-index:50;pointer-events:auto;line-height:1.5;box-shadow:0 4px 16px rgba(0,0,0,0.15)}
sup.fn .tip a{color:var(--accent);text-decoration:none;border-bottom:1px dotted var(--accent)}
sup.fn .tip a:hover{color:var(--bright);border-bottom-style:solid}
sup.fn:hover .tip,sup.fn .tip:hover{display:block}
.ref-list-num{padding-left:2.5em;margin:0}
.ref-list-num li{font-family:var(--mono);font-size:12px;color:var(--dim);padding:5px 0;border-bottom:1px solid color-mix(in srgb,var(--border),transparent 50%);line-height:1.6}
.ref-list-num li::marker{color:var(--accent);font-weight:600}
.ref-list-num li a{color:var(--text);text-decoration:none;border-bottom:1px dotted var(--border)}
Key behaviors: Hover on [N] shows tooltip with source title as clickable link (opens in new tab). Tooltip stays open when pointer moves to it (pointer-events:auto). Click [N] scrolls to the numbered reference at bottom. Font is sans-serif (not monospace) for readability.
Numbering: Sources are numbered in order of first appearance in the paper, starting at 1. Add a "References" link to the sidebar nav.
No nested <a> tags. The [N] link and the tooltip link must be siblings inside <sup>, not nested. Structure: <sup class="fn"><a href="#ref-N">[N]</a><span class="tip"><a href="URL" target="_blank">Title →</a></span></sup>
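Generating the citation from a helper makes the sibling structure impossible to get wrong. A sketch (helper name is my own; the HTML shape is the pattern above):

```python
def citation(n: int, url: str, title: str) -> str:
    """One superscript citation: [N] link and tooltip link as siblings in <sup>."""
    return (f'<sup class="fn"><a href="#ref-{n}">[{n}]</a>'
            f'<span class="tip"><a href="{url}" target="_blank">{title} →</a>'
            f'</span></sup>')
```

Because the tooltip link is emitted after the [N] anchor closes, nesting can never occur.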
In slides: Add <span class="cite">[ID]</span> for key claims, styled with font-family:var(--mono);font-size:11px;color:var(--dim). Source links in a references/colophon slide.
When the user signals end of session (or context is getting long):
- Update .project/changelog.md with what was done
- Update .project/todo.md with current status (check off completed items, update source adequacy table, note what's next)
- Update .project/methodology.md with any decisions made this session
- Write a continuation prompt (saved to .project/continuation-prompt.md AND displayed to user)

The continuation prompt must include: repo URL, project subfolder, what's completed, what's next (with specific instructions like "assess S2 sources first"), template system notes, and related projects for reference. Never include API keys, PATs, or other secrets in the continuation prompt file — GitHub push protection will block the push. Use placeholders like (provide at runtime).
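A pre-commit scan catches secrets before push protection does. A sketch (the patterns are illustrative, not exhaustive; GitHub's scanner covers many more token formats):

```python
import re

SECRET_PATTERNS = [
    r"ghp_[A-Za-z0-9]{36}",           # GitHub classic personal access token
    r"github_pat_[A-Za-z0-9_]{22,}",  # GitHub fine-grained PAT
    r"sk-[A-Za-z0-9_\-]{20,}",        # generic API-key shape
]

def scan_for_secrets(text: str) -> list:
    """Return token-like strings found; must be empty before committing."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits += re.findall(pat, text)
    return hits
```

Run it on continuation-prompt.md before every session-close commit; any hit gets replaced with (provide at runtime).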
| Domain | Minimum Sources | Must-Have References |
|---|---|---|
| AI/ML technology | 6-8 | Stanford AI Index, Epoch AI |
| Employment/labor | 6-8 | BLS, WEF, Brookings, one Fed bank |
| Public opinion | 5-6 | Pew, KPMG/Melbourne or Ipsos |
| Industry predictions | 5-7 | Gartner, IDC or Forrester, one consulting firm |
| Regulation/policy | 4-6 | Government sources, one legal scholar |
| Economics/infrastructure | 5-6 | Goldman Sachs or similar, Epoch AI |
| Scientific domain | 6-8 | 3+ primary papers, 1+ survey |
| Multimodal/creative AI | 5-7 | Model benchmarks, safety/deepfake data |
- The SVG background rect must match the viewBox: viewBox="0 0 400 200" needs <rect ... width="400" height="200"/>.
- Auto-link only specific forms of ambiguous names, e.g., IBM’s framework.
- Estimate text width as char_count × font-size × 0.62 for monospace. Run the verification script before committing.
- Papers use <sup class="fn">[N]</sup> citations with hover tooltips and a single consolidated References section at the bottom. If grep -c 'class="fn"' paper.html returns fewer than 10 citations for a multi-section paper, something is wrong.
- Paper files live at project root (e.g., survey-paper.html next to s1-topic.dd.html). Never create papers/ or slides/ subdirectories.
- Never commit secrets (e.g., in continuation-prompt.md). GitHub push protection will block the push. Use (provide at runtime) as a placeholder.
- Build <a href> tags inline as you compose. The literature collection is your URL lookup table. This applies to deep-dives, papers, AND slides.
- The index page starts with every item planned and gets updated at every commit. Don't wait until "everything is done" to build the index; by then you've lost the monitoring benefit.
- Papers never link to deep-dives (*.dd.html). Deep-dives are internal working documents that fed the synthesis. If a reader needs to "see the deep-dive for details," the paper hasn't done its job.
- Papers have no per-section <div class="section-sources"> blocks. Per-section footers are for deep-dives only. Papers use Wikipedia-style <sup>[N]</sup> inline citations pointing to the bottom reference list.
- No nested <a> tags in citations. The citation [N] link and the tooltip source link must be siblings inside <sup>, never nested. Invalid: <sup><a class="fn">[N]<span><a>...</a></span></a></sup>. Valid: <sup class="fn"><a>[N]</a><span class="tip"><a>...</a></span></sup>.
- Tooltip text uses var(--sans), not var(--mono). Monospace is for the reference list items only.
- The <em> word in headings must be the differentiator — the word that makes THIS heading unique among siblings. Accenting the last word by default (or programmatically) produces generic, decorative styling instead of meaningful emphasis. If the accented word appears in multiple headings in the same document (e.g., "Religion" in a religion project), it's wrong — accent the lens word instead.