Aggregates latest AI news from RSS feeds and presents an interactive Top 10 digest. Use when user says "latest AI news", "AI news digest", "what's new in AI", "fetch AI news", or wants to see recent AI developments.
Fetches latest AI news from multiple sources and presents an interactive digest. User chooses time range and categories, browses results in the terminal, then selects which items to save.
Briefly confirm that you're starting the AI News Digest.
Example: "AI 뉴스를 가져오겠습니다. 먼저 몇 가지 설정을 선택해주세요."
Use AskUserQuestion to let the user choose how far back to look.
AskUserQuestion:
questions:
- question: "어떤 기간의 AI 뉴스를 볼까요?"
header: "기간 선택"
multiSelect: false
options:
- label: "오늘 (24시간)"
description: "지난 24시간 이내 게시된 최신 뉴스"
- label: "지난 일주일"
description: "최근 7일간의 주요 뉴스 (Recommended)"
- label: "지난 한달"
description: "최근 30일간의 뉴스. 결과가 많을 수 있습니다"
Map selection to --days argument:
- "오늘 (24시간)" → --days 1
- "지난 일주일" → --days 7
- "지난 한달" → --days 30

Use AskUserQuestion with multiSelect to let the user filter by source category.
AskUserQuestion:
questions:
- question: "어떤 소스의 뉴스를 볼까요?"
header: "카테고리"
multiSelect: true
options:
- label: "전체"
description: "모든 소스에서 가져오기 (Recommended)"
- label: "AI 도구/에이전트"
description: "Claude Code, Copilot, LangChain, Vercel AI SDK 등 실무 도구"
- label: "공식 블로그"
description: "Anthropic, OpenAI, DeepMind 등 공식 발표"
- label: "연구 논문"
description: "ArXiv ML/AI/CL 최신 논문"
Map selection to --category argument:
- "전체" → --category all (ignore other selections)
- "AI 도구/에이전트" → ai_tools
- "공식 블로그" → official
- "연구 논문" → research
- If multiple categories are selected, join them with commas (e.g., official,ai_tools). Other valid values: community, tech_news.

4-1. Check Python dependencies:
python3 -c "import feedparser, yaml, certifi" 2>/dev/null || \
pip3 install feedparser pyyaml certifi --quiet
4-2. Inform user about fetching:
"AI 뉴스를 가져오는 중입니다... (약 5-10초 소요됩니다)"
4-3. Locate and run the fetch script:
First, find the plugin directory. Check these paths in order:
1. ~/.claude/skills/ai-news-digest/../../config/fetch_news.py (installed via symlink)
2. plugins/ai-news-digest/config/fetch_news.py (local development)

Then run:
python3 {path_to_fetch_news.py} --days {days} --top 10 --category {category} --output json
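The lookup-and-run step above can be sketched in POSIX shell. The `resolve_first_existing` helper and the `DAYS`/`CATEGORY` variables are illustrative names introduced here, not part of the plugin:

```shell
#!/bin/sh
# Print the first path that exists on disk; illustrative helper, not part of the plugin.
resolve_first_existing() {
  for p in "$@"; do
    if [ -f "$p" ]; then
      printf '%s\n' "$p"
      return 0
    fi
  done
  return 1
}

# DAYS and CATEGORY are assumed to hold the user's earlier answers.
if SCRIPT=$(resolve_first_existing \
     "$HOME/.claude/skills/ai-news-digest/../../config/fetch_news.py" \
     "plugins/ai-news-digest/config/fetch_news.py"); then
  python3 "$SCRIPT" --days "$DAYS" --top 10 --category "$CATEGORY" --output json
fi
```

Checking the installed location first means local development copies are only used as a fallback.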
The script will show real-time progress like:
Fetching from 17 RSS feeds (parallel)...
[1/17] OpenAI News... ✓ (12 articles)
[2/17] DeepMind Blog... ✓ (8 articles)
...
Total articles found: 145
Error handling:
After fetching data successfully, you can optionally show trending topics:
from config.trend_analyzer import TrendAnalyzer

analyzer = TrendAnalyzer()
trends = analyzer.analyze_trends(all_entries, top_n=5)
if trends:
    print("\n## 이번 주 AI 뉴스 트렌드:")
    for trend in trends:
        print(f"- **{trend['term']}** ({trend['count']}개 기사)")
        if trend['example_articles']:
            example = trend['example_articles'][0]
            print(f"  예: {example['title']} ({example['source']})")
Example output:
## 이번 주 AI 뉴스 트렌드:
- **AI agent** (8개 기사)
예: New autonomous agents from OpenAI (OpenAI News)
- **Claude 4** (5개 기사)
예: Claude 4 benchmarks released (Anthropic Engineering)
- **RAG** (4개 기사)
예: Improving RAG with vector databases (LangChain Blog)
Parse the JSON output and display results directly in the terminal. DO NOT save to a file yet.
Format each entry as:
**IMPORTANT**: Run `date '+%Y-%m-%d %H:%M'` to get the exact current date/time. Never estimate.
## AI News Top 10 — {today's date} (최근 {period})
---
**1. {Title}** (Score: {score})
{Source} | {Published date}
{Summary (first 200 chars)}
Link: {url}
---
**2. {Title}** (Score: {score})
{Source} | {Published date}
{Summary (first 200 chars)}
Link: {url}
---
... (up to 10 items)
Display rules:
After displaying all results, proceed to Step 6.
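The display step above can be sketched as a small formatter. The field names (`title`, `score`, `source`, `published`, `summary`, `url`) are assumptions about the JSON schema emitted by fetch_news.py; adjust them to the actual output:

```python
import json

def format_digest(raw_json: str, period: str, today: str) -> str:
    """Render the fetched JSON as the terminal Top 10 layout from this doc."""
    entries = json.loads(raw_json)
    lines = [f"## AI News Top 10 — {today} (최근 {period})", "---"]
    for i, e in enumerate(entries[:10], start=1):
        lines.append(f"**{i}. {e['title']}** (Score: {e['score']})")
        lines.append(f"{e['source']} | {e['published']}")
        lines.append(e["summary"][:200])  # first 200 chars only, per the display rules
        lines.append(f"Link: {e['url']}")
        lines.append("---")
    return "\n".join(lines)
```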
After displaying the list, ask the user in plain text. Output this exact message:
"저장할 뉴스가 있다면 번호로 알려주세요. (예: 1, 3, 7) 없으면 '없음'이라고 해주세요."
Wait for user response. The user may respond in various formats:
Parse the numbers from the response. Extract all digits that correspond to displayed item numbers.
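A minimal parsing sketch, assuming replies like "1, 3, 7", "1번과 3번", or "없음" (the function name is illustrative):

```python
import re

def parse_selection(reply: str, total: int = 10) -> list[int]:
    """Extract selected item numbers, keeping only numbers that were displayed."""
    if "없음" in reply or reply.strip().lower() in {"no", "none"}:
        return []
    nums = {int(n) for n in re.findall(r"\d+", reply)}
    return sorted(n for n in nums if 1 <= n <= total)
```

Out-of-range numbers are dropped rather than treated as errors, so a stray "12" in a 10-item list is simply ignored.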
If user said "없음" / no save:
If user selected specific items:
7-1. Determine save location:
7-1. Determine save location:
- Check ~/.claude/skills/learning-summary/config.yaml
- If learning_repo is configured: save to {learning_repo}/digests/ai-news-digest-YYYY-MM-DD.md
- Otherwise: save to ./ai-news-digest-YYYY-MM-DD.md

7-2. Generate markdown for selected items only:
# AI News Digest - YYYY-MM-DD
> **Generated**: YYYY-MM-DD HH:MM
> **Period**: Last {N} days
> **Categories**: {selected categories}
> **Saved items**: {count} of {total}
---
## 1. {Title}
**Source**: {Source} | **Published**: YYYY-MM-DD | **Score**: {Score}
{Full summary}
**Key Points**:
- {Extracted key point 1}
- {Extracted key point 2}
**Why It Matters**: {Brief significance analysis}
**Read More**: {Link}
---
## 2. {Title}
...
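The markdown generation in 7-2 can be sketched as below. The entry field names are assumptions about fetch_news.py's JSON schema, and the key-points/why-it-matters analysis is left to the agent, so it is omitted here:

```python
from datetime import datetime

def build_digest(entries, selected, period_days, categories):
    """Build the saved-digest markdown for the user's selected item numbers."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    parts = [f"# AI News Digest - {now[:10]}",
             f"> **Generated**: {now}",
             f"> **Period**: Last {period_days} days",
             f"> **Categories**: {categories}",
             f"> **Saved items**: {len(selected)} of {len(entries)}",
             "---"]
    for i, idx in enumerate(selected, start=1):  # idx is 1-based, as displayed
        e = entries[idx - 1]
        parts += [f"## {i}. {e['title']}",
                  f"**Source**: {e['source']} | **Published**: {e['published']} "
                  f"| **Score**: {e['score']}",
                  e["summary"],
                  f"**Read More**: {e['url']}",
                  "---"]
    return "\n\n".join(parts)
```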
7-3. Save using Write tool.
7-4. Confirm to user:
{N}개 뉴스를 저장했습니다: {file_path}
저장된 항목:
- 1. {Title}
- 2. {Title}
다음에 또 AI 뉴스가 필요하면 말씀해주세요!
English:
Korean:
Official Blogs (weight: 9-10):
Research Papers (weight: 8):
Community (weight: 6):
Tech News (weight: 5):
Final Score = Base Weight + Keyword Boost + Recency Boost
Power users can create a user_preferences.yaml file to customize defaults:
default_time_range: 7 # Skip time range question
default_categories: "all" # Skip category question
default_top_n: 10
favorite_sources: # Get +2 weight boost
- "OpenAI News"
- "Anthropic Engineering (Community)"
excluded_sources: [] # Skip these feeds entirely
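Loading these preferences with fallbacks to the documented defaults could look like this sketch (the loader function and file-lookup behavior are assumptions; pyyaml is installed in step 4-1):

```python
from pathlib import Path

# Defaults mirror the documented user_preferences.yaml keys.
DEFAULTS = {
    "default_time_range": 7,
    "default_categories": "all",
    "default_top_n": 10,
    "favorite_sources": [],
    "excluded_sources": [],
}

def load_preferences(path: str = "user_preferences.yaml") -> dict:
    """Merge user overrides over the defaults; missing file means all defaults."""
    prefs = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        import yaml  # pyyaml, installed alongside feedparser in step 4-1
        prefs.update(yaml.safe_load(p.read_text()) or {})
    return prefs
```

Merging over a copy of `DEFAULTS` means a partial file (e.g., only `default_top_n`) still yields a complete settings dict.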