Generate newsletters, lead magnets, LinkedIn posts, and Instagram scripts with iterative quality improvement. Claude handles orchestration, research, KB enrichment, quality gates, and HTML/JSON assembly. GPT-5.4 (via Vercel AI Gateway) handles all prose and copy generation. No deterministic scripts, regex parsing, or hard-coded logic.
/ralph-content "How psilocybin affects metabolic health" --type newsletter
/ralph-content "NAD+ supplementation deep dive" --type lead_magnet --diagrams 5
/ralph-content "Time-restricted eating benefits" --type linkedin
/ralph-content "Why we believe what we believe about supplements" --type newsletter --voice klosterman
Arguments:
<topic> - The subject to write about (required)
--type - Content type: newsletter (default), lead_magnet, linkedin, instagram, regulatory_brief, podcast_roundup
--voice - Voice profile to write in (e.g., klosterman)
--verify - Enable fact verification phase (cross-check claims with Perplexity). Auto-enabled for regulatory_brief
--diagrams N - Number of diagrams to generate (default: 2 for newsletter, 5 for lead_magnet, 0 for regulatory_brief)
--kb - Enable KB enrichment via Factor Shift pipeline. Surfaces cross-domain mechanistic connections from the NGM Signaling Knowledge Base. Use for any content touching biological mechanisms, pathways, interventions, or biomarkers. Auto-enabled for podcast_roundup

Claude (orchestrator) makes ALL decisions:
GPT-5.4 (copywriter) generates ALL prose:
Vercel AI Gateway: https://ai-gateway.vercel.sh/v1/chat/completions with model openai/gpt-5.4. Auth via AI_GATEWAY_API_KEY from .env.local.
There is NO:
Claude reads the context files, understands the quality criteria, constructs prompts, evaluates GPT-5.4's output, and makes judgments about iteration.
Before starting, read:
- .ralph-content/progress.txt - Prior learnings (if it exists)
- context/ directory (already loaded via skill)

Note any patterns or gotchas from previous runs.
Goal: Gather comprehensive, verifiable information on the topic using Perplexity's deep research model.
IMPORTANT: Use Perplexity via OpenRouter for deep research, NOT the basic WebSearch tool.
How to call Perplexity:
API_KEY=$(grep OPENROUTER_API_KEY .env | cut -d'=' -f2)
curl -s https://openrouter.ai/api/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $API_KEY" \
-d '{
"model": "perplexity/sonar-deep-research",
"messages": [{"role": "user", "content": "YOUR RESEARCH QUERY HERE"}]
}' | jq -r '.choices[0].message.content'
Actions:
- Call perplexity/sonar-deep-research via OpenRouter using the Bash tool

Why Perplexity Deep Research:
Quality Check (you decide):
KB Enrichment (--kb flag)

Goal: Surface non-obvious, cross-domain mechanistic connections from the NGM Signaling Knowledge Base that add genuine editorial value — NOT generic clinical advice.
When to use: Enabled by --kb flag. Auto-enabled for podcast_roundup. Use for ANY content that touches biological mechanisms, interventions, biomarkers, or clinical protocols. The KB contains 862 curated documents across pathways (152), interventions (413), and biomarkers (297).
Skip when: Content is purely editorial/opinion, business strategy, or has no clinical/scientific substrate.
How to call the Factor Shift Enrichment pipeline:
# Requires VECTORSHIFT_API_KEY in .env; never hard-code a key as a fallback
VS_API_KEY=$(grep VECTORSHIFT_API_KEY .env 2>/dev/null | cut -d'=' -f2)
# Write full content to temp file, use jq to safely construct JSON
cat > /tmp/kb_content.txt << 'EOF'
YOUR FULL SECTION TEXT HERE — use complete paragraphs, not summaries
EOF
jq -n --rawfile content /tmp/kb_content.txt --arg focus "YOUR ENRICHMENT FOCUS" \
'{inputs: {content: $content, enrichment_focus: $focus}}' | \
curl -s -X POST 'https://api.vectorshift.ai/v1/pipeline/69ab22d39813d41fbf525d0c/run' \
-H "Authorization: Bearer $VS_API_KEY" \
-H "Content-Type: application/json" \
-d @-
The pipeline returns (nested under outputs key):
- enriched_context — narrative markdown with sections: Mechanistic Foundation, Key Interventions & Protocols, Biomarker Landscape, Cross-Domain Connections, Evidence Gaps, Key Citations
- gap_analysis — entity extraction + gap identification

DO NOT send compressed summaries or one-line descriptions to the KB. The pipeline performs semantic retrieval — richer input text retrieves richer KB intersections.
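For reference, the enriched narrative can be pulled out of the response with jq. A minimal sketch (the canned RESPONSE is illustrative; in practice, pipe the curl output from the pipeline call above):

```shell
# Sketch: extract the enriched narrative from a pipeline response.
# The response nests results under an "outputs" key.
RESPONSE='{"outputs":{"enriched_context":"## Mechanistic Foundation\n...","gap_analysis":"entities..."}}'
echo "$RESPONSE" | jq -r '.outputs.enriched_context'
```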
BAD (produces generic output):
"Exercise induces GPLD1 which benefits cognition in aged mice"
GOOD (produces specific cross-domain connections):
"Exercise is not merely calorie expenditure or mitochondrial stress. It is a secretory event
distributed across liver, muscle, adipose tissue, endothelium, and brain. GPLD1 emerged as a
liver-derived mediator capable of recapitulating some exercise-associated gains in hippocampal
neurogenesis and cognition. The canonical muscle-brain pathway remains essential: exercise
elevates lactate, which signals through SIRT1 and PGC-1α, increasing FNDC5 expression and
cleavage to irisin, which supports BDNF-linked neuroplasticity..."
Always send the FULL prose text of each section/finding/topic — multiple paragraphs with mechanistic detail. This is what enables the KB to find deep intersections rather than surface-level matches.
Use a rich, specific enrichment focus — not a vague category:
- "mechanistic depth, cross-domain connections, specific pathway crosstalk with other longevity pathways (AMPK, mTOR, NAD+, senescence), intervention protocols, biomarker interpretation, unexpected connections to other domains in the KB"
- "intervention protocols and dosing, cross-domain pathway connections, biomarker panels for monitoring, evidence gaps"
- "mechanistic depth, cross-domain connections, specific pathway crosstalk, intervention protocols, biomarker interpretation, unexpected connections"
- "one specific cross-domain connection or mechanistic insight that would surprise a knowledgeable clinician"

The key section is "Cross-Domain Connections." This is where the editorial value lives. The KB surfaces connections like:
These are the insights that belong in NGM Deep Analysis callouts — specific, non-obvious, pathway-named connections that practitioners wouldn't know from the source material alone.
After collecting KB cross-domain connections, send them to GPT-5.4 with explicit instructions:
Write "NGM Deep Analysis" callout text. Rules:
- 2-3 sentences max
- Surface a specific cross-domain connection, not a truism
- Name specific pathways, molecules, or biomarkers
- Frame as "what practitioners wouldn't know from the source alone"
- No "For practitioners:" prefix — just state the insight directly
- Be intellectually honest about evidence confidence
DO NOT flatten KB insights into generic clinical advice like "treat exercise as a multisystem secretome." That wastes the KB's depth. Every callout should make a reader think "I didn't know that."
Send each finding through enrichment separately with full text. Run queries in parallel. The KB returns different cross-domain connections for each topic because it retrieves against different pathway/intervention/biomarker clusters.
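The parallel pattern above can be sketched with shell background jobs (illustrative only: the finding files and focus string are placeholders, and VS_API_KEY is assumed to be set as shown earlier):

```shell
# Hypothetical setup: one file per finding, containing full prose text.
printf 'Finding one: full prose with mechanistic detail...\n' > /tmp/finding_1.txt
printf 'Finding two: full prose with mechanistic detail...\n' > /tmp/finding_2.txt

# Enrich each finding in parallel; each job writes its own output file.
for f in /tmp/finding_*.txt; do
  ( jq -n --rawfile content "$f" --arg focus "mechanistic depth, cross-domain connections" \
      '{inputs: {content: $content, enrichment_focus: $focus}}' \
    | curl -s -X POST 'https://api.vectorshift.ai/v1/pipeline/69ab22d39813d41fbf525d0c/run' \
        -H "Authorization: Bearer $VS_API_KEY" \
        -H "Content-Type: application/json" \
        -d @- > "${f%.txt}.enriched.json" ) &
done
wait  # block until every enrichment call has finished
```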
With --kb: A single KB query is usually sufficient. Extract the single most surprising cross-domain connection and use it as the post's intellectual anchor — the thing that makes someone stop scrolling. Example: "Your patient on metformin may be blunting their own exercise gains — AMPK activation from metformin antagonizes the mTORC1 signal their muscles need for protein synthesis after training."
Goal: Generate content that passes the quality rubric using GPT-5.4 for prose generation.
Copywriting Model: All prose/copy generation is done by GPT-5.4 via the Vercel AI Gateway. Claude handles orchestration, prompt construction, research, enrichment, quality evaluation, and HTML assembly — but the actual copywriting is delegated to GPT-5.4.
How to call GPT-5.4:
AI_GATEWAY_KEY=$(grep AI_GATEWAY_API_KEY .env.local | cut -d'=' -f2)
curl -s -X POST "https://ai-gateway.vercel.sh/v1/chat/completions" \
-H "Authorization: Bearer $AI_GATEWAY_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-5.4",
"messages": [{"role": "user", "content": "YOUR PROMPT HERE"}],
"stream": false,
"max_tokens": 8000
}' | jq -r '.choices[0].message.content'
Tip: For long prompts, write the prompt to a temp file first, then use jq to construct the JSON safely:
cat > /tmp/prompt.txt << 'EOF'
Your prompt here...
EOF
PROMPT=$(cat /tmp/prompt.txt) && jq -n --arg model "openai/gpt-5.4" --arg content "$PROMPT" \
'{"model": $model, "messages": [{"role": "user", "content": $content}], "stream": false, "max_tokens": 8000}' | \
curl -s -X POST "https://ai-gateway.vercel.sh/v1/chat/completions" \
-H "Authorization: Bearer $AI_GATEWAY_KEY" \
-H "Content-Type: application/json" \
-d @- | jq -r '.choices[0].message.content'
Actions:
- Apply voice patterns from context/every-voice-patterns.md

Division of labor:
Key Insight: The prompt you send to GPT-5.4 is everything. Include all context, all constraints, all examples. GPT-5.4 produces better copy when given a comprehensive brief — don't send vague instructions.
Goal: Ensure claims are accurate, citations are correct, and sources are reputable.
For regulatory_brief type, this phase is MANDATORY and expanded.
Actions:
Before verifying claims, audit ALL cited sources for credibility:
ACCEPTABLE sources:
REJECT and replace:
If a rejected source is found, search for a peer-reviewed alternative that supports the same claim.
For each citation, verify:
Common errors to catch:
- Use WebSearch to cross-verify with primary sources

When creating related documents (e.g., multiple safety briefs), ensure:
Example: If Document A has human trial data and Document B only has preclinical data, Document A should not receive a weaker conclusion unless there's explicit justification (e.g., Document B has class-level regulatory precedent).
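Part of the citation check can be automated. A sketch that pulls a reference's metadata from PubMed via NCBI's public E-utilities, so a claimed author/year can be compared against the record (the PMID is illustrative, taken from the DSIP example later in this document):

```shell
# Sketch: fetch citation metadata from PubMed to verify author and year.
# PMID 6548969 is an illustrative example (Dick 1984).
PMID=6548969
curl -s "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&id=${PMID}&retmode=json" \
  | jq -r --arg id "$PMID" '.result[$id] | "\(.sortfirstauthor) (\(.pubdate))"'
```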
Goal: Create SVG diagrams that render correctly and illuminate concepts.
Actions:
context/diagram-guidelines.md
c. Self-validate the SVG:
Validation Example: "I see text at y=380 in a viewBox of height 400. With 40px padding requirement, the lowest y for text should be 360. This will overflow. I need to either increase the viewBox height or move the text up."
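A rough mechanical pre-check can catch obvious overflows before eyeballing. This sketch assumes a flat SVG with a single viewBox and plain numeric y="..." attributes, plus the 40px bottom-padding rule described above (the sample SVG is illustrative):

```shell
# Sample SVG with one label that violates the 40px bottom padding rule.
cat > /tmp/diagram.svg << 'EOF'
<svg viewBox="0 0 800 400" xmlns="http://www.w3.org/2000/svg">
  <text x="40" y="350">fits</text>
  <text x="40" y="380">overflows</text>
</svg>
EOF

# Extract the viewBox height, then flag any y within 40px of the bottom.
HEIGHT=$(grep -o 'viewBox="[^"]*"' /tmp/diagram.svg | head -n 1 | awk -F'[" ]' '{print $5}')
grep -o 'y="[0-9][0-9]*"' /tmp/diagram.svg | grep -o '[0-9][0-9]*' | while read -r Y; do
  [ "$Y" -gt $((HEIGHT - 40)) ] && echo "WARN: text at y=$Y breaks 40px padding in ${HEIGHT}px viewBox"
done
```

This only catches bottom-edge overflow on simple SVGs; nested transforms or percentage coordinates still require the manual reasoning shown in the validation example.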
Goal: Compose final HTML output.
Actions:
Use templates from context/ngm-style-guide.md.
Goal: Save output and capture learnings.
Actions:
Save the content:
- Newsletter: content/social-content/newsletters/YYYY-MM-DD-{slug}.html
- Lead magnet: content/learn-platform/lead-magnets/{slug}.html + .json
- LinkedIn: content/social-content/linkedin-posts/YYYY-MM-DD-{slug}.md
- Instagram: content/social-content/instagram-scripts/YYYY-MM-DD-{slug}.md

Update .ralph-content/progress.txt:
Git commit with quality summary:
feat: Add {type}: {title}
- Quality: Passed {N}/{total} criteria on iteration {M}
- Diagrams: {N} generated, {N} validated
- Research: {N} sources cited
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
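The dated output paths used in this phase can be derived mechanically. An illustrative sketch (the title is a placeholder):

```shell
# Illustrative: derive a dated newsletter path from a working title.
TITLE="How Cold Exposure Affects Brown Adipose Tissue"
SLUG=$(echo "$TITLE" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
OUT="content/social-content/newsletters/$(date +%F)-${SLUG}.html"
echo "$OUT"
```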
Podcast roundup workflow (replaces generic Phase 1-2 flow):
Phase 1: Episode Research — Use Perplexity to find recent episodes from the podcast roster (Huberman, Attia, Patrick, Hyman, Nicola, Campbell, and emerging voices). Extract the 5-8 most interesting, novel, or clinically actionable findings across all episodes. Do NOT force findings into predefined themes.
Phase 1.5: KB Enrichment (REQUIRED) — For each major finding, call the Factor Shift Enrichment pipeline with the finding as content and "mechanistic depth, pathway connections, intervention protocols, evidence quality" as enrichment focus. The KB enrichment serves three purposes:
Phase 2: Draft — Write the roundup as a series of short, punchy findings (not mini-essays). Each finding:
Structure:
HEADER: Title, subtitle, date range, byline
THE BOTTOM LINE: 2-3 sentence executive summary
FINDINGS (5-8): Each is a self-contained insight enriched by KB context
- Finding headline (specific, not thematic)
- Episode attribution
- The claim + KB-enriched analysis (2-3 paragraphs)
- NGM DEEP ANALYSIS callout (what the knowledge base adds)
QUICK HITS: 3-5 one-liner mentions of other notable moments
WHAT TO LISTEN TO: Top 3 episode recommendations with reasons
What NOT to do:
Output location: content/social-content/newsletters/YYYY-MM-DD-podcast-roundup.html (+ .json)
Structure for regulatory briefs:
Output location: content/social-content/newsletters/YYYY-MM-DD-{slug}.html (+ .json)
User: /ralph-content "How cold exposure affects brown adipose tissue" --type newsletter --verify
You (thinking through each phase):
PHASE 0: CONTEXT
- Reading progress.txt... Previous runs show diagram text overflow was a common issue.
  I'll be extra careful with text positioning in diagrams.
PHASE 1: RESEARCH
- Query 1: "cold exposure brown adipose tissue activation mechanisms 2025 2026 studies"
[WebSearch results with studies]
- Query 2: "BAT thermogenesis UCP1 clinical applications"
[WebSearch results with mechanism details]
- Evaluating: I have Hanssen et al. 2015, Cypess et al. 2009, specific activation
temperatures, and metabolic rate increases. Sufficient for newsletter.
PHASE 2: DRAFT + CRITIQUE
- Writing draft with concrete scenario hook...
- Self-critique against 8 points:
✓ Specificity: Named researchers, 17.5°C threshold, 5-15% metabolic increase
✗ Evidence cascade: Only 2 studies stacked, need 3+
✓ Colon technique: Used 4 times
✓ Em-dash clarity: Used for definitions and contrast
✓ One-sentence paragraphs: 4 strategic placements
✓ Concrete opening: Dr. Cypess case at Joslin
✓ Voice: Confident, specific
✓ Forward-looking: Ends with implications for metabolic practice
- Revising: Adding Ouellet et al. 2012 to evidence cascade...
- Re-evaluating: All 8 criteria now pass.
PHASE 3: VERIFICATION
- Claim: "Cold exposure at 17-19°C activates BAT"
[WebSearch confirms - Hanssen et al., Cell Metabolism]
- Claim: "5-15% increase in metabolic rate"
[WebSearch confirms - multiple sources]
- All key claims verified.
PHASE 4: DIAGRAMS
- Concept 1: BAT activation mechanism (pathway diagram)
- Generating SVG...
- Validating: Text at y=350 in 400-height viewBox, with 40px padding = OK
- Container at height 120 with 3 text lines (75px) + 80px padding = needs 155px, FAIL
- Regenerating with taller container...
- Re-validating: Container now 180px, text fits with padding. PASS.
- Concept 2: Cold vs thermoneutral comparison
- Generating SVG...
- Validating: All checks pass.
PHASE 5: ASSEMBLY
- Composing email HTML with diagrams embedded...
PHASE 6: PUBLISH
- Saving to content/social-content/newsletters/2026-01-18-cold-exposure-bat.html
- Updating progress.txt with learnings
- Git commit
User: /ralph-content "FDA safety brief for DSIP peptide" --type regulatory_brief
You (thinking through each phase):
PHASE 0: CONTEXT
- Reading progress.txt... Previous regulatory brief run had citation attribution error.
I'll verify each citation's author/year against PubMed.
PHASE 1: RESEARCH (Extended for regulatory)
- Query 1: "DSIP delta sleep inducing peptide clinical trials safety human studies PubMed"
[WebSearch returns 4 clinical studies from 1980s-1990s]
- Query 2: "DSIP regulatory approval Russia Deltaran Ministry of Health"
[WebSearch returns registration info]
- Query 3: "DSIP LD50 toxicity animal studies lethal dose"
[WebSearch confirms no lethal dose established]
- Evaluating: I have human trial data (n=~200 total), Russian approval, toxicity profile.
Sufficient for regulatory brief.
PHASE 2: DRAFT + CRITIQUE
- Writing with regulatory brief structure...
- Self-critique against 12-point regulatory rubric:
✓ Clinical data with n, design, findings
✓ Route-specific table
✓ Safety signals categorized (Known/Theoretical/Absent)
✓ Regulatory precedent documented
✗ Conclusion needs explicit evidence gap statement
- Revising conclusion to explicitly state limitations...
- Re-evaluating: All 12 criteria now pass.
PHASE 3: VERIFICATION (MANDATORY for regulatory_brief)
- Step 3A: Source Credibility Audit
- Found cosmicnootropic.com in research notes - REJECT (vendor site)
- Searching for peer-reviewed alternative...
- Found Popovich et al. 2003 (PMID 12782416) - ACCEPT
- Step 3B: Citation Accuracy Validation
- Ref [1] claims "Schneider-Helmert 1988" but PubMed 1299794 shows "Bes F et al. 1992"
- CORRECTING: Update author and year
- Step 3C: Claim Verification
- Claim: "LD50 never determined" - Verified via Graf & Kastin 1984 review
- Claim: "97% opiate addicts improved" - Verified via Dick 1984 (PMID 6548969)
- Step 3D: Reasoning Consistency
- N/A for single document (check if creating related briefs)
PHASE 4: DIAGRAMS
- Regulatory briefs use tables, not diagrams. Skipping.
PHASE 5: ASSEMBLY
- Composing print-optimized HTML with NGM styling...
PHASE 6: PUBLISH
- Saving to content/social-content/newsletters/2026-01-27-fda-safety-brief-dsip.html
- Creating JSON metadata file
- Updating progress.txt with citation validation learning
- Git commit
This skill prioritizes quality over speed. It's acceptable to spend extra iterations on research, drafting, or diagram regeneration to get there.
The goal is content that meets Every.to editorial standards—content the team would be proud to publish.
CRITICAL: All content must be saved as JSON files to appear in /content-pipeline. The content-pipeline API reads JSON files from content directories.
{
"id": "unique-id",
"createdAt": "2026-01-22T17:30:00.000Z",
"content": "The full post text with\n\nline breaks preserved...",
"meta": {
"alphaIdea": "Short summary used as title in content pipeline",
"hookType": "curiosity|stakes|contrarian|pattern",
"wordCount": 195,
"targetAudience": "longevity medicine professionals"
},
"quality": {
"iterations": 1,
"passed": true,
"scores": {
"pattern_interrupt": true,
"hook_under_150_chars": true,
"creates_curiosity": true,
"has_clear_thesis": true,
"uses_line_breaks": true,
"avoids_wall_of_text": true,
"follows_hook_expand_close": true,
"has_specific_numbers": true,
"avoids_jargon": true,
"has_original_insight": true,
"stays_focused": true,
"has_subtle_cta": true,
"paragraphs_punchy": true,
"intellectually_honest": true
}
},
"status": "draft",
"images": []
}
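Before saving, the JSON can be sanity-checked against the required display fields with jq (a sketch; the filename and sample content are illustrative):

```shell
# Sketch: verify a LinkedIn post JSON has the fields the content pipeline
# needs for display. The sample file is illustrative.
cat > /tmp/post.json << 'EOF'
{"id": "x", "createdAt": "2026-01-22T17:30:00.000Z", "content": "text",
 "meta": {"alphaIdea": "title"}, "quality": {"passed": true, "iterations": 1}}
EOF

# jq -e exits non-zero if the expression is false or null.
jq -e 'has("id") and has("createdAt") and has("content")
       and (.meta | has("alphaIdea")) and (.quality | has("passed"))' /tmp/post.json >/dev/null \
  && echo "OK: required display fields present"
```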
Required fields for display:
- id - Unique identifier
- createdAt - ISO 8601 timestamp (used for sorting)
- content - Full post text
- meta.alphaIdea - Used as title in content pipeline
- quality.passed - Boolean for quality badge
- quality.iterations - Number for iteration count
- quality.scores - Object with individual rubric scores

{
"id": "unique-id",
"createdAt": "2026-01-22T17:30:00.000Z",
"title": "The Newsletter Title",
"subtitle": "Optional subtitle shown in preview",
"textContent": "## Full markdown content\n\nWith all sections...",
"hasHtmlContent": true,
"status": "draft",
"meta": {
"format": "research_synthesis|ai_in_clinic_playbook|deep_dive",
"length": "short|medium|long",
"wordCount": 680,
"estimatedReadTime": 3,
"targetAudience": "longevity medicine professionals"
},
"quality": {
"iterations": 1,
"passed": true,
"scores": {
"specificity_check": true,
"evidence_cascade_present": true,
"colon_technique_used": true,
"em_dash_clarity": true,
"one_sentence_emphasis": true,
"concrete_over_abstract": true,
"voice_authenticity": true,
"forward_looking_conclusion": true
}
}
}
Required fields for display:
- id - Unique identifier
- createdAt - ISO 8601 timestamp
- title - Used as title in content pipeline
- textContent - Full markdown content (shown in Markdown view)
- hasHtmlContent - Set to true if HTML file exists (enables preview iframe)
- quality.passed, quality.iterations - For quality badge

Note: The HTML file must have the same base filename as the JSON for the preview iframe to work.
The content pipeline supports two formats. Use the new format for all new content.
{
"id": "unique-slug",
"createdAt": "2026-01-22T17:30:00.000Z",
"title": "The Lead Magnet Title",
"subtitle": "Optional subtitle",
"slug": "unique-slug",
"sections": [
{
"title": "Section Title",
"content": ["paragraph 1", "paragraph 2"]
}
],
"unexpectedDiscoveries": [
"Discovery 1",
"Discovery 2"
],
"frameworks": [
{
"name": "Framework Name",
"description": "Framework description"
}
],
"references": [
{
"title": "Reference title with authors and journal"
}
],
"accessKeyword": "OPTIONAL_KEYWORD"
}
{
"id": "unique-id",
"title": "The Lead Magnet Title",
"slug": "unique-slug",
"created_at": "2026-01-22T00:00:00.000Z",
"keyword": "KEYWORD",
"key_findings": [
{ "finding": "Finding text", "source": "Source citation" }
],
"mechanisms": [
{ "mechanism": "Mechanism name", "clinical_takeaway": "Clinical takeaway" }
],
"references": ["Reference 1 as string", "Reference 2 as string"]
}
Required fields for display:
- id - Unique identifier
- createdAt OR created_at - ISO 8601 timestamp
- title - Used as title in content pipeline

Content fields (use one set):
- New format: sections, frameworks, unexpectedDiscoveries
- Old format: key_findings, mechanisms

References: Supports both [{title: "..."}] and ["string"] formats
Keyword: Supports both accessKeyword and keyword
Note: The "View HTML Lead Magnet" button always appears for lead magnets (HTML file must exist with same base name as JSON).
Diagram PDF Export: Lead magnets support diagram-only PDF export via the "Download Diagram PDF" button. Features:
- Keyword gating (accessKeyword or keyword field)

The hook is auto-generated using pattern matching on the title/subtitle (e.g., "What 30+ Top Researchers Agree On" for consensus topics, "The Shift Nobody Saw Coming" for revolution topics).
{
"id": "unique-id",
"createdAt": "2026-01-22T17:30:00.000Z",
"meta": {
"topic": "Short topic used as title"
},
"script": {
"hook": "First 3 seconds text",
"body": "Main content of the script...",
"cta": "Call to action text",
"totalDuration": 45
},
"quality": {
"passed": true
}
}
Required fields for display:
- id - Unique identifier
- createdAt - ISO 8601 timestamp
- meta.topic - Used as title in content pipeline
- script.hook, script.body, script.cta, script.totalDuration
- quality.passed - Boolean for quality badge

Output directories:
- content/social-content/newsletters/ (both .html AND .json)
- content/learn-platform/lead-magnets/ (both .html AND .json)
- content/social-content/linkedin-posts/ (.json only)
- content/social-content/instagram-scripts/ (.json only)

API endpoints:
- GET /api/lead-magnet-html/[slug]
- GET /api/lead-magnet-diagrams-pdf/[slug] - Generates LinkedIn-optimized PDF with cover page + diagrams (uses puppeteer + jspdf)

Related files:
- .ralph-content/progress.txt
- context/every-voice-patterns.md - Voice patterns and techniques
- context/quality-rubrics.md - All quality criteria
- context/diagram-guidelines.md - SVG generation rules
- context/ngm-style-guide.md - Brand and HTML templates