Collect and structure research for YouTube video topics. Web search, summarize sources, extract claims with evidence, flag fact-check issues. Triggers on "yt-research", "리서치", "자료 수집", "research topic".
Collect references, structure claims with evidence, and flag fact-check issues for YouTube video topics. Output feeds into the yt-script skill for script writing.
topic input → web search (KO+EN) → source processing → structuring → fact-check flagging → output (JSON + brief)
Parse user input for these components:
yt-research "<topic>" [--depth quick|standard|deep] [--channel <channel_id>]
Depth levels (default: standard):

- quick — quick overview, 3-5 sources
- standard — thorough research, 5-10 sources, cross-referenced claims
- deep — academic-level, 10-20 sources, full fact-check with contradictions

When --channel is given, load channels/{channel_id}/config.json for channel-specific defaults.

Examples:

- yt-research "양자 컴퓨팅의 현재와 미래"
- yt-research "AI 반도체 전쟁" --deep
- yt-research "일본 경제 버블" --channel tech-shorts
- 리서치 "메이플스토리 20주년 역사"
- 자료 수집 "한국 출산율 위기" --deep

Command reference:

- yt-research "<topic>" — standard depth research
- yt-research "<topic>" --deep — deep research
- yt-research "<topic>" --quick — quick overview
- yt-research "<topic>" --channel <id> — channel-specific research
- yt-research list — show recent research in the research/ directory
- yt-research show <topic_slug> — show research detail from saved JSON

Use WebSearch to find relevant sources. Run searches in both Korean and English for broader coverage.
Search query templates by depth:

- quick: "{topic}", "{topic} 정리", "{topic} summary"
- standard: add "{topic} 통계", "{topic} statistics", "{topic} 논란", "{topic} timeline"
- deep: add "{topic} 논문", "{topic} research paper", "{topic} official report", "{topic} 반론"

Rank sources by type (highest to lowest reliability):

Source count targets by depth:

- quick: 3-5 sources
- standard: 5-10 sources
- deep: 10-20 sources

Use WebFetch to get full content for each discovered source.
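The query templates and per-depth source targets above can be sketched as a small helper. This is a minimal sketch; the function and constant names are illustrative assumptions, not part of the skill:

```python
# Bilingual search queries and source-count targets per depth.
# Deeper levels add their queries on top of the shallower levels.

DEPTH_QUERIES = {
    "quick": ["{topic}", "{topic} 정리", "{topic} summary"],
    "standard": ["{topic} 통계", "{topic} statistics", "{topic} 논란", "{topic} timeline"],
    "deep": ["{topic} 논문", "{topic} research paper", "{topic} official report", "{topic} 반론"],
}
SOURCE_TARGETS = {"quick": (3, 5), "standard": (5, 10), "deep": (10, 20)}


def build_queries(topic: str, depth: str = "standard") -> list[str]:
    """Return the search queries for a depth, including all shallower levels."""
    order = ["quick", "standard", "deep"]
    queries: list[str] = []
    for level in order[: order.index(depth) + 1]:
        queries += [q.format(topic=topic) for q in DEPTH_QUERIES[level]]
    return queries
```

Each returned query would then be handed to WebSearch, and the `(min, max)` target from `SOURCE_TARGETS` bounds how many results are kept for fetching.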
For each source, extract:
Organize all collected data into the following JSON structure:
```json
{
  "topic": "주제명",
  "topic_en": "Topic in English",
  "depth": "standard",
  "created_at": "2026-03-16T14:30:00+09:00",
  "summary": "1-2 paragraph overview of the topic covering the most important findings and context.",
  "claims": [
    {
      "id": "claim_001",
      "statement": "주장 내용",
      "statement_en": "Claim in English",
      "category": "fact|opinion|prediction|historical",
      "evidence": [
        {
          "source_id": "src_001",
          "excerpt": "근거 발췌",
          "excerpt_en": "Evidence excerpt in English",
          "reliability": "high|medium|low"
        }
      ],
      "contradictions": [
        {
          "source_id": "src_003",
          "excerpt": "반론 내용",
          "detail": "How this contradicts the claim"
        }
      ],
      "fact_check_flags": ["claim_flag_id_if_any"]
    }
  ],
  "sources": [
    {
      "id": "src_001",
      "title": "출처 제목",
      "url": "https://...",
      "type": "news|academic|official|blog|wiki|video|community",
      "date": "2026-01-15",
      "language": "ko|en",
      "reliability": "high|medium|low",
      "author": "Author name if available",
      "publisher": "Publisher/outlet name"
    }
  ],
  "timeline": [
    {
      "date": "2024-01",
      "event": "사건 설명",
      "event_en": "Event description",
      "source_id": "src_001"
    }
  ],
  "key_numbers": [
    {
      "value": "42%",
      "context": "맥락 설명",
      "context_en": "Context in English",
      "source_id": "src_001",
      "date": "2025-12"
    }
  ],
  "key_people": [
    {
      "name": "이름",
      "name_en": "Name in English",
      "role": "직함/역할",
      "relevance": "Why this person matters to the topic"
    }
  ],
  "fact_check_flags": [
    {
      "id": "flag_001",
      "claim_id": "claim_001",
      "issue": "single_source|outdated|conflicting|unverified|sensational|number_unverified",
      "detail": "설명",
      "severity": "high|medium|low",
      "recommendation": "Suggested action to resolve"
    }
  ]
}
```
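As a sanity check on generated output, the cross-references inside this structure (evidence `source_id` values pointing at `sources`, flag `claim_id` values pointing at `claims`) can be verified with a short sketch. The helper name is my own, not part of the skill:

```python
def check_references(research: dict) -> list[str]:
    """Return a list of dangling-reference problems in a research.json payload."""
    problems: list[str] = []
    source_ids = {s["id"] for s in research.get("sources", [])}
    claim_ids = {c["id"] for c in research.get("claims", [])}
    # Every piece of evidence must cite a source that actually exists.
    for claim in research.get("claims", []):
        for ev in claim.get("evidence", []):
            if ev["source_id"] not in source_ids:
                problems.append(f'{claim["id"]}: unknown source {ev["source_id"]}')
    # Every fact-check flag must point at a real claim.
    for flag in research.get("fact_check_flags", []):
        if flag["claim_id"] not in claim_ids:
            problems.append(f'{flag["id"]}: unknown claim {flag["claim_id"]}')
    return problems
```

An empty return list means the payload is internally consistent; anything else should be fixed before the file is saved.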
Automatically detect and flag potential issues. Run these checks on every claim:
| Check | Trigger | Severity |
|---|---|---|
| Single source | Claim supported by only 1 source | medium |
| Outdated info | Source date >1 year old for news/tech topics, >3 years for historical | medium |
| Conflicting sources | Two or more sources contradict each other | high |
| Unverified numbers | Statistics without clear original source | high |
| Sensational language | Exaggerated/clickbait phrasing in source | low |
| Proper noun mismatch | Name/title spelled differently across sources | medium |
| Prediction as fact | Future prediction presented as established fact | medium |
| Missing date | Claim references an event without specifying when | low |
| Wiki-only | Claim only found on Wikipedia/나무위키 | medium |
| Circular sourcing | Multiple sources trace back to same original report | high |
Save results to the project directory:
- research/{topic_slug}/research.json — full structured data
- research/{topic_slug}/brief.md — human-readable summary

Convert the topic to a filesystem-safe slug, e.g. "양자 컴퓨팅의 현재와 미래" → quantum-computing-present-future.

Brief template (brief.md):

# Research Brief: {topic}
**Depth**: {depth} | **Sources**: {count} | **Date**: {date}
## Summary
{1-2 paragraph overview}
## Key Claims
1. {claim} — [{reliability}] (source: {source_title})
2. ...
## Key Numbers
- {value}: {context} (source: {source_title}, {date})
- ...
## Timeline
- {date}: {event}
- ...
## Fact-Check Flags
### High Severity
- [{claim_id}] {detail} — {recommendation}
### Medium Severity
- [{claim_id}] {detail}
## Sources
1. [{title}]({url}) — {type}, {date}, {reliability}
2. ...
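Writing the two output files could look roughly like this. A sketch under assumptions: the slug is derived from the English topic (translation is assumed to happen upstream), and only the header and summary of the brief template are rendered here; the helper names are my own:

```python
import json
import re
from pathlib import Path


def slugify(topic_en: str) -> str:
    """Filesystem-safe slug from the English topic name."""
    return re.sub(r"[^a-z0-9]+", "-", topic_en.lower()).strip("-")


def save_research(research: dict, root: Path = Path("research")) -> Path:
    """Write research.json and a minimal brief.md; return the topic directory."""
    out = root / slugify(research["topic_en"])
    out.mkdir(parents=True, exist_ok=True)
    (out / "research.json").write_text(
        json.dumps(research, ensure_ascii=False, indent=2))
    brief = (f"# Research Brief: {research['topic']}\n\n"
             f"**Depth**: {research['depth']} | **Sources**: {len(research['sources'])} "
             f"| **Date**: {research['created_at'][:10]}\n\n"
             f"## Summary\n{research['summary']}\n")
    (out / "brief.md").write_text(brief)
    return out
```

`ensure_ascii=False` keeps Korean text readable in the saved JSON instead of escaping it to `\uXXXX` sequences.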
After saving files, display:
Load channel-specific config to adjust research behavior.
Config file (channels/{channel_id}/config.json):

```json
{
  "channel_id": "tech-shorts",
  "name": "채널명",
  "default_depth": "standard",
  "topic_domains": ["tech", "science", "AI"],
  "tone": "casual|professional|educational",
  "language_priority": "ko|en|both",
  "research_preferences": {
    "prefer_korean_sources": true,
    "include_community_sentiment": false,
    "max_source_age_months": 12
  }
}
```
When --channel is provided:
- Use default_depth if no explicit depth is given
- Check topic_domains relevance when selecting sources
- Apply max_source_age_months to the outdated flagging threshold

Integration:

- yt-script: the research JSON serves as the primary input for script generation. The script skill reads research/{topic_slug}/research.json directly.
- Pipeline: yt-research → yt-script → yt-thumbnail (planned)

Related files:

- research/{topic_slug}/research.json
- research/{topic_slug}/brief.md
- channels/{channel_id}/config.json
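A downstream consumer like yt-script would load the structured data directly; a minimal sketch (helper name and warning behavior are my own assumptions) might also surface unresolved high-severity flags before scripting begins:

```python
import json
from pathlib import Path


def load_research(topic_slug: str, root: Path = Path("research")) -> dict:
    """Load structured research for script generation; warn on high-severity flags."""
    research = json.loads((root / topic_slug / "research.json").read_text())
    blockers = [f for f in research.get("fact_check_flags", [])
                if f.get("severity") == "high"]
    if blockers:
        print(f"Warning: {len(blockers)} high-severity fact-check flag(s) unresolved")
    return research
```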