Use when the user wants today's summary of their configured podcasts and YouTube channels, asks to "跑每日摘要 / daily digest / 今天的 podcast 摘要 / YouTube 摘要 / 推一下摘要到 Discord", or wants a Traditional Chinese briefing of recent episodes with cross-source analysis posted to Discord.
Fetch the latest items from the user's configured podcasts and YouTube channels, transcribe audio when needed, write a detailed Traditional Chinese briefing (including cross-source analysis), save it, and post it to Discord.
Critical principle: you write the summaries yourself. The Python scripts in this skill only do fetching, transcription, and notification. No LLM subprocess is ever invoked — you read each transcript and produce the briefing directly.
Triggers (Chinese and English):
- "跑每日摘要" / "daily digest" / "今天的 podcast 摘要" / "YouTube 摘要" / "推一下摘要到 Discord"
- The user has just set up config/sources.yaml and wants to see it run

Don't use for ad-hoc transcription of a single video the user pastes — that's a simpler task, not a daily digest.
Before first run, verify:
- ffmpeg is installed (`which ffmpeg`); if not, tell the user to `brew install ffmpeg`
- `config/sources.yaml` exists (copy `config.example/sources.yaml` if it doesn't)
- `config/discord-webhook.txt` exists and is non-empty
- dependencies are installed via `uv sync` from the skill directory

If any of these are missing, stop and report what's needed instead of guessing.
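The checks above can be sketched as a small helper. This is illustrative only (the function name and return shape are my own, not part of the skill's scripts):

```python
import shutil
from pathlib import Path

def preflight(skill_dir: Path) -> list[str]:
    """Return human-readable problems; an empty list means the skill is ready to run."""
    problems = []
    if shutil.which("ffmpeg") is None:
        problems.append("ffmpeg not installed (brew install ffmpeg)")
    if not (skill_dir / "config" / "sources.yaml").is_file():
        problems.append("config/sources.yaml missing (copy from config.example/sources.yaml)")
    webhook = skill_dir / "config" / "discord-webhook.txt"
    if not webhook.is_file() or webhook.stat().st_size == 0:
        problems.append("config/discord-webhook.txt missing or empty")
    return problems
```

Report every problem at once rather than failing on the first, so the user can fix the setup in one pass.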
All commands run from the skill directory (the directory containing this SKILL.md). Use cd once, then uv run for each step.
```sh
uv run scripts/fetch.py
```
Returns JSON on stdout:
```json
{
  "date": "2026-04-17",
  "items": [
    {
      "key": "youtube:@channel:vid",
      "title": "...",
      "source_name": "@channel",
      "type": "youtube",
      "link": "https://...",
      "media_url": "https://...",
      "description": "...",
      "transcript_path": "/.../summaries/transcripts/2026-04-17_title.txt"
    }
  ]
}
```
- `transcript_path: null` means you still need to transcribe — go to step 2 for that item.
- `transcript_path: "..."` means captions were found and saved — read that file directly.
- An empty `items` array means nothing new today; tell the user and stop.

For every item where `transcript_path` is null, run:
```sh
uv run scripts/transcribe.py --url "<media_url>" --title "<title>"
```
This downloads the audio, transcribes with faster-whisper, and prints the path to the written transcript. Capture that path for step 3. Transcription of a 1-hour podcast can take several minutes on the default tiny model; set WHISPER_MODEL=large-v3 for higher quality at ~10x the runtime.
Run these in sequence, not in parallel — whisper is CPU-bound.
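The triage between caption-ready items and items needing transcription can be sketched as follows (a hypothetical helper, not one of the skill's scripts):

```python
import json

def triage(fetch_stdout: str):
    """Split fetch.py's JSON output into items with saved transcripts
    and items that still need a transcribe.py run."""
    data = json.loads(fetch_stdout)
    ready = [i for i in data["items"] if i.get("transcript_path")]
    pending = [i for i in data["items"] if not i.get("transcript_path")]
    return ready, pending
```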
Read every transcript file collected above. Then write summaries/<YYYY-MM-DD>.md with this structure:
```markdown
# 每日摘要 YYYY-MM-DD

## <item title>
**來源:** <source_name>
**連結:** <link>

<detailed Traditional Chinese summary of this item>

---

## <next item title>
...

---

## 📊 今日綜合思考

<cross-source analysis>
```
Per-item summary style (follow exactly):
Cross-source analysis style:
If only one item came back, skip the 綜合思考 section — there's nothing to cross-analyze.
Use the Write tool to save the markdown file to summaries/<date>.md.
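The briefing structure, including the single-item skip rule, can be sketched as a renderer (illustrative only; you still write the summaries and the analysis yourself, and save the result with the Write tool):

```python
def render_briefing(day: str, sections: list[dict], analysis: str) -> str:
    """Render the daily briefing markdown; skips 綜合思考 when only one item exists."""
    parts = [f"# 每日摘要 {day}"]
    for s in sections:
        parts += [
            "",
            f"## {s['title']}",
            f"**來源:** {s['source_name']}",
            f"**連結:** {s['link']}",
            "",
            s["summary"],
            "",
            "---",
        ]
    if len(sections) > 1:  # nothing to cross-analyze with a single item
        parts += ["", "## 📊 今日綜合思考", "", analysis]
    return "\n".join(parts)
```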
Build a short Discord-friendly recap (one line per item with the title, source, and a one-sentence teaser) and write it to a temp file, or just post the full markdown:
```sh
# Option A — full markdown (auto-chunked at 2000 chars):
uv run scripts/notify.py summaries/<YYYY-MM-DD>.md

# Option B — dry-run first to preview:
uv run scripts/notify.py summaries/<YYYY-MM-DD>.md --dry-run
```
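notify.py does the 2000-character chunking itself; if you ever need to reason about it, a minimal line-boundary chunker looks like this (a sketch assuming no single line exceeds the limit, not notify.py's actual implementation):

```python
def chunk_message(text: str, limit: int = 2000) -> list[str]:
    """Split text on line boundaries so each chunk stays within Discord's message limit."""
    chunks, cur = [], ""
    for line in text.splitlines(keepends=True):
        if cur and len(cur) + len(line) > limit:
            chunks.append(cur)
            cur = ""
        cur += line  # a single line longer than `limit` would still overflow its chunk
    if cur:
        chunks.append(cur)
    return chunks
```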
For the Discord recap prefer a compact version: lead with 📰 **每日摘要 YYYY-MM-DD**, then for each item **<title>** (<source>) <one-line teaser>. If the user has not said otherwise, use the compact recap for Discord and keep the long report only in the markdown file. Write the recap to a separate file (e.g. summaries/<date>-discord.md) and pass that to notify.py.
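The compact recap format above can be sketched as a builder (a hypothetical helper; the teasers are one-liners you write yourself, they are not in fetch.py's output):

```python
def build_recap(day: str, entries: list[tuple[str, str, str]]) -> str:
    """entries: (title, source_name, one-line teaser) per item."""
    lines = [f"📰 **每日摘要 {day}**", ""]
    for title, source, teaser in entries:
        lines.append(f"**{title}** ({source}) {teaser}")
    return "\n".join(lines)
```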
```sh
uv run scripts/mark_done.py <key1> <key2> ...
```
Pass every key from step 1's output. This updates config/digest-state.json so tomorrow's fetch won't re-process them. Do this last — only after the Discord post succeeded, so a failure mid-way is recoverable by re-running.
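Building that invocation from step 1's output can be sketched as (illustrative; run the resulting command only after the Discord post succeeds):

```python
import json

def mark_done_cmd(fetch_stdout: str) -> list[str]:
    """Collect every key from fetch.py's JSON and build the mark_done.py command."""
    keys = [item["key"] for item in json.loads(fetch_stdout)["items"]]
    return ["uv", "run", "scripts/mark_done.py", *keys]
```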
config/sources.yaml format: