End-to-end workflow for creating 2-minute AI news magazine YouTube videos from the week's top AI news. Produces monetization-ready content optimized for channel growth. Use when the user wants to create video content from AI news, research trending AI topics, generate YouTube video scripts, produce video from HackerNews/Reddit/TechCrunch/GitHub/ArXiv/Deep Research, automate YouTube content pipelines, create AI news roundups, or build a multi-agent content production workflow.
Five-agent pipeline producing 2-minute AI news magazine videos from 8+ sources.
[SCOUT v4] ──research.json──> [GHOSTWRITER v4] ──script.json──> [IMAGE GEN v2] ──> [MUSIC] ──> [DIRECTOR v7]
     │                            │                                 │                 │            │
     ├─ HackerNews                ├─ 2-min magazine script          ├─ 3 images      ├─ Suno AI   ├─ Remotion 4.0
     ├─ Reddit (6 subs)           ├─ 5 story segments               │  per segment   ├─ Mood-     ├─ TransitionSeries
     ├─ TechCrunch RSS            ├─ [IMPORTANT] line markers       │  (wide/        │  matched   ├─ fade() transitions
     ├─ TheVerge RSS              ├─ Music mood output              │  detail/       │  prompts   ├─ Google Fonts
     ├─ VentureBeat RSS           └─ SEO metadata                   │  abstract)     │            ├─ Ken Burns cross-fade
     ├─ Wired RSS                                                                                 ├─ Music ducking
     ├─ ArsTechnica RSS                                                                           └─ 120s exact
     ├─ GitHub Trending
     ├─ ArXiv (4 categories)
     └─ Deep Research (context-aware, runs last)
# Full pipeline (recommended)
python3 scripts/pipeline.py --stories 5
# With YouTube upload
python3 scripts/pipeline.py --stories 5 --upload --privacy unlisted
# Individual agents
python3 scripts/scout.py # Step 1: Research (8+ sources + Deep Research)
python3 scripts/ghostwriter.py # Step 2: 2-min magazine script
python3 scripts/image_gen.py # Step 3: 3 images per segment
python3 scripts/music_suno.py # Step 4: Mood-matched music
python3 scripts/director.py      # Step 5: Render video (Director v7, Remotion 4.0)
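The full pipeline command chains the five agents in order. A minimal orchestration sketch (the script paths come from this repo's layout; the stop-on-failure behavior is an assumption, not necessarily what `pipeline.py` does):

```python
import subprocess
import sys

# Agent scripts in pipeline order, matching the steps above.
STEPS = [
    ["python3", "scripts/scout.py"],        # Step 1: research.json
    ["python3", "scripts/ghostwriter.py"],  # Step 2: script.json
    ["python3", "scripts/image_gen.py"],    # Step 3: 3 images per segment
    ["python3", "scripts/music_suno.py"],   # Step 4: mood-matched track
    ["python3", "scripts/director.py"],     # Step 5: final 120s render
]

def run_pipeline() -> None:
    """Run each agent in sequence, stopping at the first failure."""
    for cmd in STEPS:
        print(f"-> {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"Step failed: {cmd[1]}")
```

Each agent reads the previous agent's output from disk, so the only coupling between steps is the intermediate files (research.json, script.json).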
| Variable | Required | Purpose |
|---|---|---|
| GEMINI_API_KEY | Yes | Scout enrichment, Ghostwriter scripts, image generation |
| SUNO_COOKIE | Optional | Suno AI music generation (falls back to MusicGen) |
| TEXT_MODEL | No | Gemini model (default: gemini-2.5-flash) |
| IMAGE_MODEL | No | Gemini image model (default: nano-banana-pro-preview) |
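A sketch of how the table above translates into configuration loading (the dict keys and the `load_config` helper are illustrative, not the repo's actual API; the defaults are from the table):

```python
import os

def load_config() -> dict:
    """Read pipeline settings, applying the defaults from the table above."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise SystemExit("GEMINI_API_KEY is required (Scout, Ghostwriter, images)")
    return {
        "gemini_api_key": key,
        "suno_cookie": os.environ.get("SUNO_COOKIE"),  # None -> MusicGen fallback
        "text_model": os.environ.get("TEXT_MODEL", "gemini-2.5-flash"),
        "image_model": os.environ.get("IMAGE_MODEL", "nano-banana-pro-preview"),
    }
```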
Scans 8+ sources with weekly focus, AI-powered clustering, weighted relevance scoring, and context-aware Deep Research.
| Source | Method | Signal |
|---|---|---|
| HackerNews | Firebase API (top + best stories) | Points, comments, velocity |
| Reddit | JSON API (6 AI subreddits) | Score, comments, weekly top |
| TechCrunch | RSS feed | AI category articles |
| TheVerge | RSS feed | AI coverage |
| VentureBeat | RSS feed | Enterprise AI news |
| Wired / ArsTechnica | RSS feeds | In-depth AI reporting |
| GitHub Trending | Search API (4 topic queries) | Stars velocity, weekly |
| ArXiv | RSS (CS.AI, CS.CL, CS.LG, CS.CV) | Latest papers |
| Deep Research | Gemini Interactions API | Context-aware deep analysis |
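The "weighted relevance scoring" mentioned above can be sketched as per-source weighting of engagement velocity. The weights, field names, and formula here are illustrative assumptions; the real scoring lives in scripts/scout.py:

```python
# Illustrative source weights -- assumed values, not the repo's actual weights.
SOURCE_WEIGHTS = {
    "hackernews": 1.0,
    "github": 0.9,
    "reddit": 0.8,
    "techcrunch": 0.7,
    "arxiv": 0.6,
}

def relevance_score(source: str, points: int, comments: int, age_hours: float) -> float:
    """Engagement divided by age (velocity), scaled by a per-source weight."""
    engagement = points + 0.5 * comments       # comments count half as much
    velocity = engagement / max(age_hours, 1.0)  # avoid division by zero
    return SOURCE_WEIGHTS.get(source, 0.5) * velocity
```

Velocity-based scoring favors stories that are climbing fast this week over older stories with high absolute points, which matches the skill's weekly focus.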
Produces 2-minute magazine-style scripts with an intro, 5 story segments, and an outro.
[0:00-0:08] INTRO (8s) - Dramatic opening, preview biggest story
[0:08-0:28] STORY 1 (20s) - Headline + narration + [IMPORTANT] line
[0:28-0:48] STORY 2 (20s) - Headline + narration + [IMPORTANT] line
[0:48-1:08] STORY 3 (20s) - Headline + narration + [IMPORTANT] line
[1:08-1:28] STORY 4 (20s) - Headline + narration + [IMPORTANT] line
[1:28-1:48] STORY 5 (20s) - Headline + narration + [IMPORTANT] line
[1:48-2:00] OUTRO (12s) - Wrap-up + CTA
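At 30 fps the timeline above maps to exact frame ranges. A sketch of that arithmetic (segment durations copied from the layout; the helper itself is illustrative):

```python
FPS = 30

# (label, duration_seconds) in playback order, matching the layout above.
SEGMENTS = [("intro", 8)] + [(f"story_{i}", 20) for i in range(1, 6)] + [("outro", 12)]

def frame_ranges(segments=SEGMENTS, fps=FPS):
    """Return [(label, start_frame, end_frame)] with end exclusive."""
    out, start = [], 0
    for label, seconds in segments:
        end = start + seconds * fps
        out.append((label, start, end))
        start = end
    return out
```

The segments sum to exactly 120 seconds (3600 frames), which is what lets the Director render a "120s exact" composition.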
[IMPORTANT] markers for music ducking + text emphasis
Generates 3 images per segment using Gemini nano-banana-pro-preview:
| Angle | Shot Style |
|---|---|
| Wide | Cinematic establishing shot, dramatic scale |
| Detail | Close-up, shallow depth of field |
| Abstract | Artistic interpretation, bold colors |
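A minimal sketch of how the three angles could map onto prompts for each segment (the suffix wording mirrors the table above, but the template itself is an assumption; the actual prompts live in scripts/image_gen.py):

```python
# One style suffix per angle, taken from the shot-style table above.
ANGLE_STYLES = {
    "wide": "cinematic establishing shot, dramatic scale",
    "detail": "close-up, shallow depth of field",
    "abstract": "artistic interpretation, bold colors",
}

def build_prompts(headline: str) -> list[str]:
    """Return one image prompt per angle for a single story segment."""
    return [f"{headline}, {style}" for style in ANGLE_STYLES.values()]
```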
Reads the script's mood and generates a Suno AI track matched to the story's emotion.
7 mood presets: shocking (dark trap), hype (EDM), dramatic (orchestral), inspiring (uplifting), alarming (urgent), exciting (future bass), curious (ambient).
Falls back to MusicGen (local) if SUNO_COOKIE is not set.
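The seven presets and the MusicGen fallback can be sketched as a simple lookup keyed on the script's mood (the prompt phrasing beyond the preset names is an assumption):

```python
import os

# Preset names and styles from the list above.
MOOD_PRESETS = {
    "shocking": "dark trap",
    "hype": "EDM",
    "dramatic": "orchestral",
    "inspiring": "uplifting",
    "alarming": "urgent",
    "exciting": "future bass",
    "curious": "ambient",
}

def pick_music_backend() -> str:
    """Suno when the cookie is present, local MusicGen otherwise."""
    return "suno" if os.environ.get("SUNO_COOKIE") else "musicgen"

def music_prompt(mood: str) -> str:
    """Build a generation prompt; unknown moods fall back to the calmest preset."""
    style = MOOD_PRESETS.get(mood, "ambient")
    return f"{style}, instrumental, 2 minutes"
```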
Renders 2-minute magazine videos using Remotion 4.0.434 with the remotion-best-practices skill.
| Package | Usage |
|---|---|
| @remotion/transitions | TransitionSeries with fade() between segments |
| @remotion/google-fonts | Inter font loading (400, 700, 800, 900) |
| @remotion/media-utils | Audio utilities |
| remotion | Core: interpolate, spring, Sequence, Img, Audio |
| Effect | Implementation |
|---|---|
| Segment transitions | TransitionSeries + fade() + linearTiming (0.5s) |
| Ken Burns cross-fade | 3 images per segment (zoom-in, pan-right, zoom-out) |
| Headline animation | Word-by-word spring entrance (snappy: damping=20, stiffness=200) |
| Narration subtitles | Sliding 2-line window with smooth spring (damping=200) |
| [IMPORTANT] emphasis | Accent color + larger font + music ducks to 12% |
| Source badges | Per-segment source indicator with colored dot |
| Progress bar | Gradient bar with segment tick marks |
| Branding | Channel name + animated FOLLOW CTA (last 10s) |
| Particles | 8 floating dots with frame-driven sin() movement |
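The music-ducking behavior (volume drops to 12% during [IMPORTANT] lines, per the table above) can be sketched as a per-frame volume function with short linear ramps. The 15-frame ramp length is an assumption; only the 12% floor comes from the table:

```python
def music_volume(frame: int, important_spans: list[tuple[int, int]],
                 base: float = 1.0, ducked: float = 0.12, ramp: int = 15) -> float:
    """Per-frame music volume: duck to 12% inside [IMPORTANT] spans,
    with a `ramp`-frame linear fade on either side (15 frames = 0.5s at 30fps)."""
    for start, end in important_spans:
        if start <= frame < end:
            return ducked
        if start - ramp <= frame < start:   # ramp down into the span
            t = (frame - (start - ramp)) / ramp
            return base + (ducked - base) * t
        if end <= frame < end + ramp:       # ramp back up after the span
            t = (frame - end) / ramp
            return ducked + (base - ducked) * t
    return base
```

In the real composition this value would feed the Audio element's volume prop frame by frame.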
| Property | Value |
|---|---|
| Resolution | 1080x1920 (9:16 vertical) |
| FPS | 30 |
| Duration | 120s (2 minutes) |
| Codec | H.264, CRF 16, yuv420p |
| Audio | Music with loop + volume ducking |
- useCurrentFrame() for all animation, NO CSS transitions/animations
- <Img> from remotion, never native <img>
- staticFile() for all public/ assets
- Frame f starts at 0 when the audio begins (inside its Sequence)
- Smooth springs {damping: 200}, snappy springs {damping: 20, stiffness: 200}
- interpolate() with extrapolateRight: "clamp"
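The extrapolateRight: "clamp" rule matters for effects like the Ken Burns zoom: the scale must hold its final value instead of growing past it. Remotion's interpolate is the real API; this Python sketch just mirrors its math for a single input range, with the 200-frame image slot following from 3 images per 20-second segment at 30 fps (the 1.15 zoom target is an assumption):

```python
def interpolate_clamped(frame: float, in_range: tuple[float, float],
                        out_range: tuple[float, float]) -> float:
    """Linear map of frame from in_range to out_range, clamped on the right
    (mirrors Remotion's interpolate with extrapolateRight: 'clamp')."""
    (x0, x1), (y0, y1) = in_range, out_range
    if frame >= x1:
        return y1  # clamp right: hold the final value
    t = (frame - x0) / (x1 - x0)
    return y0 + (y1 - y0) * t

# Ken Burns zoom-in: scale 1.0 -> 1.15 over one 200-frame image slot.
def ken_burns_scale(frame: int) -> float:
    return interpolate_clamped(frame, (0, 200), (1.0, 1.15))
```

Without the clamp, a frame past 200 (e.g. during the cross-fade to the next image) would keep scaling the image and produce a visible jump.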