This skill should be used when the user asks to "build a dashboard", "create a video analysis dashboard", "generate content analysis", "run topic analysis on transcripts", "analyze sentiment", "compare cross-platform messaging", or needs to aggregate transcript and frame data into an interactive web dashboard.
Aggregate transcripts and frame analysis data into structured analysis JSONs, then generate an interactive single-page web dashboard for exploring the results.
Inputs:
- `transcripts/{platform}/{id}.txt` (from /video-transcribe)
- `frame-analysis/{platform}/{id}.json` (from /video-frames)
- `metadata.json` with video entries

Present the user with section options:
| Section | Description | Data needed |
|---|---|---|
| Overview stats | Video count, platforms, total minutes, words | metadata.json |
| Video catalog | Filterable grid with transcript accordion | metadata.json + transcripts |
| Transcript search | Full-text search with highlighted excerpts | transcripts |
| Topic analysis | Keyword frequency chart with topic pills | transcripts |
| Sentiment analysis | Positive/negative/urgent tone breakdown | transcripts |
| Cross-platform comparison | Side-by-side platform metrics + top words | transcripts + metadata |
All sections are recommended. The user can deselect any they don't want.
Topic analysis uses keyword matching against transcripts. The default categories are generic:
```python
TOPIC_KEYWORDS = {
    "politics": ["government", "policy", "legislation", "law", "vote"],
    "economy": ["job", "business", "economy", "wage", "worker", "tax"],
    "health": ["health", "hospital", "mental health", "doctor", "care"],
    "education": ["school", "student", "teacher", "education", "university"],
    "environment": ["climate", "green", "pollution", "sustainability"],
    "technology": ["tech", "digital", "software", "AI", "data"],
    "community": ["community", "neighborhood", "local", "together"],
    "safety": ["crime", "police", "safety", "violence", "security"],
}
```
Ask the user: "Want to customize the topic categories for this subject, or use the defaults?" If the subject is a politician, suggest political topic categories (housing, transit, budget, immigration, etc.).
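The matching step can be sketched as follows. This is a minimal illustration, not the shipped script: `count_topics` is a hypothetical helper, and the real implementation may tokenize differently.

```python
import re
from collections import Counter

# Trimmed copy of the default categories for illustration.
TOPIC_KEYWORDS = {
    "politics": ["government", "policy", "legislation", "law", "vote"],
    "economy": ["job", "business", "economy", "wage", "worker", "tax"],
}

def count_topics(transcript: str) -> Counter:
    """Count keyword hits per topic in one transcript (case-insensitive, whole words)."""
    text = transcript.lower()
    counts = Counter()
    for topic, keywords in TOPIC_KEYWORDS.items():
        for kw in keywords:
            # \b boundaries so "job" doesn't match inside "jobless".
            counts[topic] += len(re.findall(rf"\b{re.escape(kw)}\b", text))
    return counts
```

Summing these per-video counters gives the per-platform and overall totals for `topics.json`.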
Generate four JSON files in analysis/:
`topics.json` — keyword frequency per video, per platform, and overall:

```json
{
  "overall": {"topic": count, ...},
  "per_platform": {"twitter": {"topic": count}, ...},
  "per_video": {"video_id": {"title": "...", "platform": "...", "topics": {...}}}
}
```
`sentiment.json` — positive/negative/urgent scoring per video:

```json
{
  "per_video": {"video_id": {"raw_counts": {...}, "dominant_tone": "urgent"}},
  "per_platform": {"twitter": {"positive": N, "negative": N, "urgent": N, "count": N}}
}
```
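The per-video scoring might look like the sketch below. The tone lexicons and names (`TONE_WORDS`, `score_sentiment`) are hypothetical; the real script's word lists will differ.

```python
from collections import Counter

# Hypothetical tone lexicons for illustration only.
TONE_WORDS = {
    "positive": ["great", "win", "proud", "hope", "thank"],
    "negative": ["fail", "crisis", "wrong", "broken"],
    "urgent": ["now", "must", "immediately", "emergency"],
}

def score_sentiment(transcript: str) -> dict:
    """Count tone-word hits and pick the dominant tone for one transcript."""
    tokens = [w.strip(".,!?") for w in transcript.lower().split()]
    raw = Counter()
    for tone, words in TONE_WORDS.items():
        raw[tone] = sum(1 for w in tokens if w in words)
    dominant = max(raw, key=raw.get) if any(raw.values()) else "neutral"
    return {"raw_counts": dict(raw), "dominant_tone": dominant}
```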
`cross-platform.json` — platform comparison metrics:

```json
{
  "platforms": {
    "twitter": {
      "video_count": N, "total_words": N, "avg_duration_seconds": N,
      "avg_words_per_video": N, "top_words": {"word": count, ...}
    }
  }
}
```
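The per-platform metrics could be computed along these lines. The helper name `platform_metrics`, the input shape, and the stopword list are assumptions for the sketch:

```python
from collections import Counter

STOPWORDS = {"the", "a", "and", "to", "of", "in", "is", "we"}

def platform_metrics(videos: list[dict]) -> dict:
    """videos: [{"transcript": str, "duration_seconds": float}, ...] for one platform."""
    words = [
        w
        for v in videos
        for w in v["transcript"].lower().split()
        if w not in STOPWORDS
    ]
    n = len(videos)
    return {
        "video_count": n,
        "total_words": len(words),
        "avg_duration_seconds": round(sum(v["duration_seconds"] for v in videos) / n, 1),
        "avg_words_per_video": round(len(words) / n, 1),
        "top_words": dict(Counter(words).most_common(10)),
    }
```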
`summary.json` — high-level overview stats:

```json
{
  "total_videos": N, "total_duration_minutes": N, "total_words": N,
  "platforms": [...], "top_topics": [...],
  "dominant_tone_distribution": {"urgent": N, "positive": N, ...}
}
```
Build a single HTML file at web/index.html with:
- Data loaded from relative paths (`../analysis/*.json`, `../metadata.json`)
- Data normalization layer: normalize field names on load to handle variations in analysis script output. Map common patterns:
  - `overall` / `frequencies` (topics)
  - `per_video` / `by_video`
  - `per_platform` / `by_platform`
- Dashboard sections (based on user selection)
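The normalization itself lives in the dashboard's JavaScript; the key-mapping idea is shown here in Python for brevity, with assumed names (`ALIASES`, `normalize_keys`):

```python
# Alternate key names the analysis script might emit, mapped to canonical ones.
ALIASES = {
    "frequencies": "overall",
    "by_video": "per_video",
    "by_platform": "per_platform",
}

def normalize_keys(data: dict) -> dict:
    """Rename known alias keys; pass everything else through unchanged."""
    return {ALIASES.get(k, k): v for k, v in data.items()}
```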
Start a local server and verify:
```bash
cd {project-dir} && python -m http.server 8888
# Open http://localhost:8888/web/index.html
```
Check: charts render, video grid populates, search works, platform filters work across sections.
Commit the analysis script, JSON outputs, and dashboard, then report key findings to the user.

Note: the analysis script emits `{"word": count}` objects, but the dashboard may expect `[{word, count}]` arrays. Normalize on load.

Script: `${CLAUDE_PLUGIN_ROOT}/scripts/build-analysis.py`