Generate videos using the faceless pipeline. Use when user wants to create new videos, modify the rendering process, or troubleshoot video generation issues.
cd /Users/anaskhan/.openclaw/workspace/tiktok/pipeline
source venv/bin/activate
# Generate 1 tech video and upload to all platforms
python auto_generate.py --niche tech --num 1
# Generate without uploading
python auto_generate.py --niche tech --num 1 --no-upload
# Generate multiple videos
python auto_generate.py --niche tech --num 5
python generate_script.py --niche tech --topic "What is Kubernetes?"
# Output: scripts/ep_XXX.json
Or create manually:
{
"topic": "What is Kubernetes?",
"niche": "tech",
"character_duo": "peter_stewie",
"lines": [
{"character": "Peter", "text": "Hey Stewie, what's this Kubernetes thing?"},
{"character": "Stewie", "text": "It's container orchestration. Manages deployment, scaling, and operations of containers across clusters."}
]
}
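If you write the JSON by hand, a quick sanity check before rendering can save a failed run. This is only a sketch: `validate_script` is a hypothetical helper, not part of the pipeline, and it checks only the fields shown in the example above.

```python
import json

REQUIRED_KEYS = {"topic", "niche", "character_duo", "lines"}

def validate_script(path: str) -> None:
    """Raise ValueError if a script JSON lacks the fields shown above."""
    with open(path) as f:
        script = json.load(f)
    missing = REQUIRED_KEYS - script.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for i, line in enumerate(script["lines"]):
        if "character" not in line or "text" not in line:
            raise ValueError(f"line {i} needs 'character' and 'text'")
```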
python pipeline_v2.py --script scripts/ep_XXX.json
# Output: out/ep_XXX_TIMESTAMP_final.mp4
# YouTube
python upload_youtube.py "out/video.mp4" "Title #shorts #tech"
# TikTok
python upload_tiktok.py "out/video.mp4" "Title #fyp #tech"
# Instagram
python upload_instagram.py "out/video.mp4" "Caption #reels #tech"
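The three upload scripts above can be chained from one wrapper. This is a sketch: `upload_everywhere` is not an existing pipeline function, and the per-platform tags are just the examples from the commands above.

```python
import subprocess

def upload_everywhere(video: str, title: str, runner=subprocess.run) -> dict:
    """Call the three upload scripts in sequence; return per-platform success."""
    jobs = [
        ("youtube", "upload_youtube.py", "#shorts #tech"),
        ("tiktok", "upload_tiktok.py", "#fyp #tech"),
        ("instagram", "upload_instagram.py", "#reels #tech"),
    ]
    results = {}
    for platform, script, tags in jobs:
        proc = runner(["python", script, video, f"{title} {tags}"])
        results[platform] = proc.returncode == 0
    return results
```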
Script JSON
↓
┌─────────────────────────────────────────────┐
│ For each line: │
│ 1. Fish.audio TTS → audio/line_X.mp3 │
│ 2. Deepgram → word timestamps │
│ 3. Build subtitle track (karaoke style) │
└─────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Compositing: │
│ 1. Pick random background video │
│ 2. Scale to 1080x1920 │
│ 3. Add topic image (fade in/out at 3s) │
│ 4. Add character PNGs when speaking │
│ 5. Overlay karaoke subtitles │
│ 6. Concatenate audio tracks │
└─────────────────────────────────────────────┘
↓
out/ep_XXX_TIMESTAMP_final.mp4
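The timing logic implied by the diagram (concatenating per-line audio while keeping the karaoke subtitles aligned) boils down to a running offset: each line's word timestamps get shifted by the total duration of everything before it. A minimal sketch; the function name is illustrative:

```python
def build_timeline(line_durations):
    """Return the start offset of each line in the concatenated final audio."""
    offsets, t = [], 0.0
    for duration in line_durations:
        offsets.append(t)
        t += duration
    return offsets
```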
Edit auto_generate.py:
CHARACTER_DUO = "elon_zuck" # or "peter_stewie", etc.
Backgrounds rotate randomly. To add new ones, drop the file into backgrounds/ (any resolution; it will be scaled), then add it to the BACKGROUNDS list in pipeline_v2.py:
BACKGROUNDS = [
"backgrounds/subway-720p.mp4",
"backgrounds/minecraft-parkour.mp4",
"backgrounds/gta-gameplay.mp4",
"backgrounds/your-new-video.mp4", # Add here
]
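Alternatively, the pool could be built from whatever .mp4 files sit in backgrounds/, so new clips never require editing the list by hand. A sketch of that variation (`discover_backgrounds` is hypothetical, not pipeline code):

```python
from pathlib import Path

def discover_backgrounds(folder: str = "backgrounds"):
    """Collect every .mp4 in the backgrounds folder as the rotation pool."""
    return sorted(str(p) for p in Path(folder).glob("*.mp4"))
```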
In pipeline_v2.py, find the subtitle generation section:
# Font size
FONT_SIZE = 48
# Colors (per character, defined in config.json)
# Karaoke highlight: current word in yellow
In pipeline_v2.py:
TOPIC_IMAGE_FADE_OUT = 3.0 # seconds
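The fade itself ends up as an ffmpeg `fade` filter on the overlay. A sketch of building that filter string from the constant; the real filter graph in pipeline_v2.py may be more involved:

```python
def topic_image_fade(fade_out_at: float = 3.0, fade_dur: float = 0.5) -> str:
    """Build an ffmpeg filter: fade the topic image in at t=0 and out at fade_out_at."""
    return (
        f"fade=t=in:st=0:d={fade_dur}:alpha=1,"
        f"fade=t=out:st={fade_out_at}:d={fade_dur}:alpha=1"
    )
```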
| Property | Value |
|---|---|
| Resolution | 1080x1920 |
| FPS | 30 |
| Codec | H.264 |
| Audio | AAC 128kbps |
| Max Duration | 60 seconds |
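For reference, an ffmpeg invocation matching this spec can be assembled as below. This is a generic encode sketch, not necessarily the exact flags pipeline_v2.py uses:

```python
def encode_cmd(src: str, dst: str):
    """ffmpeg arguments for the spec table: 1080x1920, 30 fps, H.264,
    AAC 128 kbps audio, capped at 60 seconds."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=1080:1920,fps=30",
        "-c:v", "libx264",
        "-c:a", "aac", "-b:a", "128k",
        "-t", "60",
        dst,
    ]
```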
ls -la audio/
# Each line gets its own audio file
# Play specific audio
afplay audio/line_0.mp3
python pipeline_v2.py --script scripts/ep_XXX.json
open out/ep_XXX_*_final.mp4
To see FFmpeg's own log output while debugging, make sure the subprocess call isn't capturing it:
# In pipeline_v2.py, remove capture_output if present:
subprocess.run(cmd, check=True)  # FFmpeg logs now stream to the terminal
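A middle ground is to capture FFmpeg's output but surface it only on failure, so successful runs stay quiet. A debugging sketch (`run_ffmpeg` is not an existing pipeline function):

```python
import subprocess

def run_ffmpeg(cmd):
    """Run a command; print its stderr and raise if it exits nonzero."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        print(proc.stderr)
        raise RuntimeError(f"FFmpeg failed with exit code {proc.returncode}")
    return proc
```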
| Service | Cost per Video |
|---|---|
| Fish.audio TTS | ~$0.01 |
| Deepgram | ~$0.01 |
| OpenAI GPT-4o | ~$0.01 |
| Total | ~$0.03 |
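At these rates, spend scales linearly with output volume. A quick back-of-the-envelope helper, using the approximate figures from the table above:

```python
# Approximate per-video API costs, copied from the table above.
COSTS = {"fish_audio_tts": 0.01, "deepgram": 0.01, "openai_gpt4o": 0.01}

def monthly_cost(videos_per_day: int) -> float:
    """Rough monthly spend (30 days) at a given daily cadence."""
    return round(sum(COSTS.values()) * videos_per_day * 30, 2)
```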