Generate tech news digests with unified source model, quality scoring, and multi-format output. Six-source data collection from RSS feeds, Twitter/X KOLs, GitHub releases, GitHub Trending, Reddit, and web search. Pipeline-based scripts with retry mechanisms and deduplication. Supports Discord, email, and markdown templates.
Automated tech news digest system with unified data source model, quality scoring pipeline, and template-based output generation.
Configuration Setup: Default configs are in config/defaults/. Copy to workspace for customization:
mkdir -p workspace/config
cp config/defaults/sources.json workspace/config/tech-news-digest-sources.json
cp config/defaults/topics.json workspace/config/tech-news-digest-topics.json
Environment Variables:
- TWITTERAPI_IO_KEY - twitterapi.io API key (optional, preferred)
- X_BEARER_TOKEN - Twitter/X official API bearer token (optional, fallback)
- TAVILY_API_KEY - Tavily Search API key, alternative to Brave (optional)
- WEB_SEARCH_BACKEND - Web search backend: auto|brave|tavily (optional, default: auto)
- BRAVE_API_KEYS - Brave Search API keys, comma-separated for rotation (optional)
- BRAVE_API_KEY - Single Brave key fallback (optional)
- GITHUB_TOKEN - GitHub personal access token (optional, improves rate limits)

Generate Digest:
# Unified pipeline (recommended): runs all 6 sources in parallel + merge
python3 scripts/run-pipeline.py \
--defaults config/defaults \
--config workspace/config \
--hours 48 --freshness pd \
--archive-dir workspace/archive/tech-news-digest/ \
--output /tmp/td-merged.json --verbose --force
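Once the pipeline has written the merged file, a quick sanity check is to pull out the highest-scored items. The sketch below assumes a `{"items": [{"title": ..., "score": ...}]}` shape for the merged JSON; the actual schema produced by merge-sources.py may differ, so treat the field names as assumptions.

```python
import json

def top_items(path, n=5):
    """Return the n highest-scored items from the merged pipeline output.

    Assumes a {"items": [{"title": ..., "score": ...}]} layout, which is an
    illustrative guess at the merged schema, not a documented contract.
    """
    with open(path) as f:
        data = json.load(f)
    items = sorted(data.get("items", []),
                   key=lambda item: item.get("score", 0),
                   reverse=True)
    return items[:n]
```

For example, `top_items("/tmp/td-merged.json", n=3)` would surface the three items the quality scorer ranked highest.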
Use Templates: Apply Discord, email, or PDF templates to merged output
sources.json - Unified Data Sources

{
"sources": [
{
"id": "openai-rss",
"type": "rss",
"name": "OpenAI Blog",
"url": "https://openai.com/blog/rss.xml",
"enabled": true,
"priority": true,
"topics": ["llm", "ai-agent"],
"note": "Official OpenAI updates"
},
{
"id": "sama-twitter",
"type": "twitter",
"name": "Sam Altman",
"handle": "sama",
"enabled": true,
"priority": true,
"topics": ["llm", "frontier-tech"],
"note": "OpenAI CEO"
}
]
}
topics.json - Enhanced Topic Definitions

{
"topics": [
{
"id": "llm",
"emoji": "π§ ",
"label": "LLM / Large Models",
"description": "Large Language Models, foundation models, breakthroughs",
"search": {
"queries": ["LLM latest news", "large language model breakthroughs"],
"must_include": ["LLM", "large language model", "foundation model"],
"exclude": ["tutorial", "beginner guide"]
},
"display": {
"max_items": 8,
"style": "detailed"
}
}
]
}
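The `search` block above drives query generation and filtering. A minimal sketch of how `must_include` and `exclude` terms might be applied to a headline follows; the real scoring lives in merge-sources.py, so this is an illustration of the config semantics, not the actual implementation.

```python
def matches_topic(title, topic):
    """Check a headline against a topic's search filters (illustrative).

    An item is rejected if any exclude term appears, and accepted only if
    at least one must_include term appears (or no must_include is given).
    Matching is case-insensitive substring matching, an assumption here.
    """
    text = title.lower()
    search = topic.get("search", {})
    if any(term.lower() in text for term in search.get("exclude", [])):
        return False
    must = search.get("must_include", [])
    return not must or any(term.lower() in text for term in must)
```

Under the llm topic above, "New LLM scaling result" passes, while "LLM tutorial for beginners" is filtered out by the exclude list.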
run-pipeline.py - Unified Pipeline (Recommended)

python3 scripts/run-pipeline.py \
--defaults config/defaults [--config CONFIG_DIR] \
--hours 48 --freshness pd \
--archive-dir workspace/archive/tech-news-digest/ \
--output /tmp/td-merged.json --verbose --force
Outputs include per-source *.meta.json files. Works without $GITHUB_TOKEN set, at lower rate limits.

fetch-rss.py - RSS Feed Fetcher

python3 scripts/fetch-rss.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE] [--verbose]
fetch-twitter.py - Twitter/X KOL Monitor

python3 scripts/fetch-twitter.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE] [--backend auto|official|twitterapiio]

Uses twitterapi.io if TWITTERAPI_IO_KEY is set, else the official X API v2 if X_BEARER_TOKEN is set.

fetch-web.py - Web Search Engine

python3 scripts/fetch-web.py [--defaults DIR] [--config DIR] [--freshness pd] [--output FILE]
fetch-github.py - GitHub Releases Monitor

python3 scripts/fetch-github.py [--defaults DIR] [--config DIR] [--hours 168] [--output FILE]

Auth cascade: $GITHUB_TOKEN → GitHub App auto-generate → gh CLI → unauthenticated (60 req/hr)

fetch-github.py --trending - GitHub Trending Repos

python3 scripts/fetch-github.py --trending [--hours 48] [--output FILE] [--verbose]
fetch-reddit.py - Reddit Posts Fetcher

python3 scripts/fetch-reddit.py [--defaults DIR] [--config DIR] [--hours 48] [--output FILE]
enrich-articles.py - Article Full-Text Enrichment

python3 scripts/enrich-articles.py --input merged.json --output enriched.json [--min-score 10] [--max-articles 15] [--verbose]
merge-sources.py - Quality Scoring & Deduplication

python3 scripts/merge-sources.py --rss FILE --twitter FILE --web FILE --github FILE --reddit FILE
validate-config.py - Configuration Validator

python3 scripts/validate-config.py [--defaults DIR] [--config DIR] [--verbose]
generate-pdf.py - PDF Report Generator

python3 scripts/generate-pdf.py --input report.md --output digest.pdf [--verbose]

Requires weasyprint.

sanitize-html.py - Safe HTML Email Converter

python3 scripts/sanitize-html.py --input report.md --output email.html [--verbose]
source-health.py - Source Health Monitor

python3 scripts/source-health.py --rss FILE --twitter FILE --github FILE --reddit FILE --web FILE [--verbose]
summarize-merged.py - Merged Data Summary

python3 scripts/summarize-merged.py --input merged.json [--top N] [--topic TOPIC]
Place custom configs in workspace/config/ to override defaults:
"enabled": falseid β user version takes precedenceid β appended to defaultsid β user version completely replaces default// workspace/config/tech-news-digest-sources.json
{
"sources": [
{
"id": "simonwillison-rss",
"enabled": false,
"note": "Disabled: too noisy for my use case"
},
{
"id": "my-custom-blog",
"type": "rss",
"name": "My Custom Tech Blog",
"url": "https://myblog.com/rss",
"enabled": true,
"priority": true,
"topics": ["frontier-tech"]
}
]
}
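The override rules above can be sketched as a merge keyed on `id`. This is a hypothetical illustration of the documented semantics, not the skill's actual merge code.

```python
def apply_overrides(defaults, overrides):
    """Merge user source overrides into defaults by id (illustrative).

    A matching id replaces the default entry; a new id is appended.
    Entries overridden with "enabled": false drop out of the active set.
    """
    merged = {src["id"]: dict(src) for src in defaults}
    for entry in overrides:
        merged[entry["id"]] = dict(entry)
    return [src for src in merged.values() if src.get("enabled", True)]
```

With the example config above, simonwillison-rss would be removed from the active set and my-custom-blog appended.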
Templates:
- references/templates/discord.md
- references/templates/email.md
- references/templates/pdf.md (rendered with scripts/generate-pdf.py, requires weasyprint)

All sources are pre-configured with appropriate topic tags and priority levels.
pip install -r requirements.txt
Optional but Recommended:
- feedparser>=6.0.0 - Better RSS parsing (falls back to regex if unavailable)
- jsonschema>=4.0.0 - Configuration validation

All scripts work with the Python 3.8+ standard library only.
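The feedparser-with-regex-fallback pattern can be sketched as below. The regex and function name are illustrative assumptions, not the actual fetch-rss.py implementation.

```python
import re

try:
    import feedparser  # optional dependency; better parsing when installed
except ImportError:
    feedparser = None

# Crude fallback: grab the first <title> inside each <item>.
_ITEM_TITLE_RE = re.compile(r"<item>.*?<title>(.*?)</title>", re.DOTALL)

def extract_titles(xml_text):
    """Pull item titles from RSS XML, preferring feedparser when available."""
    if feedparser is not None:
        return [entry.title for entry in feedparser.parse(xml_text).entries]
    return _ITEM_TITLE_RE.findall(xml_text)
```

Either path returns the same titles for well-formed feeds; feedparser additionally tolerates malformed XML that the regex would mishandle.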
# Validate configuration
python3 scripts/validate-config.py --verbose
# Test RSS feeds
python3 scripts/fetch-rss.py --hours 1 --verbose
# Check Twitter API
python3 scripts/fetch-twitter.py --hours 1 --verbose
Archives are written to <workspace>/archive/tech-news-digest/.

Set in ~/.zshenv or similar:
# Twitter (at least one required for Twitter source)
export TWITTERAPI_IO_KEY="your_key" # twitterapi.io key (preferred)
export X_BEARER_TOKEN="your_bearer_token" # Official X API v2 (fallback)
export TWITTER_API_BACKEND="auto" # auto|twitterapiio|official (default: auto)
# Web Search (optional, enables web search layer)
export WEB_SEARCH_BACKEND="auto" # auto|brave|tavily (default: auto)
export TAVILY_API_KEY="tvly-xxx" # Tavily Search API (free 1000/mo)
# Brave Search (alternative)
export BRAVE_API_KEYS="key1,key2,key3" # Multiple keys, comma-separated rotation
export BRAVE_API_KEY="key1" # Single key fallback
export BRAVE_PLAN="free" # Override rate limit detection: free|pro
# GitHub (optional, improves rate limits)
export GITHUB_TOKEN="ghp_xxx" # PAT (simplest)
export GH_APP_ID="12345" # Or use GitHub App for auto-token
export GH_APP_INSTALL_ID="67890"
export GH_APP_KEY_FILE="/path/to/key.pem"
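The comma-separated BRAVE_API_KEYS rotation can be sketched as a round-robin iterator. This follows the documented precedence (BRAVE_API_KEYS first, then BRAVE_API_KEY) but is an illustration only; the actual rotation logic lives in the fetch scripts.

```python
import itertools

def brave_key_cycle(env):
    """Build a round-robin iterator over configured Brave Search keys.

    env is a mapping like os.environ. BRAVE_API_KEYS (comma-separated)
    takes precedence over the single-key BRAVE_API_KEY fallback.
    """
    raw = env.get("BRAVE_API_KEYS") or env.get("BRAVE_API_KEY") or ""
    keys = [k.strip() for k in raw.split(",") if k.strip()]
    if not keys:
        raise RuntimeError("no Brave Search API key configured")
    return itertools.cycle(keys)
```

Calling `next()` on the iterator before each request spreads traffic evenly across keys, which is what makes multi-key rotation useful on the free tier's per-key rate limits.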
TWITTERAPI_IO_KEY is preferred ($3-5/mo); X_BEARER_TOKEN serves as fallback; auto mode tries twitterapiio first.

The cron prompt should NOT hardcode the pipeline steps. Instead, reference references/digest-prompt.md and only pass configuration parameters. This keeps the pipeline logic in the skill repo and consistent across all installations.
Read <SKILL_DIR>/references/digest-prompt.md and follow the complete workflow to generate a daily digest.
Replace placeholders with:
- MODE = daily
- TIME_WINDOW = past 1-2 days
- FRESHNESS = pd
- RSS_HOURS = 48
- ITEMS_PER_SECTION = 3-5
- ENRICH = true
- BLOG_PICKS_COUNT = 3
- EXTRA_SECTIONS = (none)
- SUBJECT = Daily Tech Digest - YYYY-MM-DD
- WORKSPACE = <your workspace path>
- SKILL_DIR = <your skill install path>
- DISCORD_CHANNEL_ID = <your channel id>
- EMAIL = (optional)
- LANGUAGE = English
- TEMPLATE = discord
Follow every step in the prompt template strictly. Do not skip any steps.
Read <SKILL_DIR>/references/digest-prompt.md and follow the complete workflow to generate a weekly digest.
Replace placeholders with:
- MODE = weekly
- TIME_WINDOW = past 7 days
- FRESHNESS = pw
- RSS_HOURS = 168
- ITEMS_PER_SECTION = 10-15
- ENRICH = true
- BLOG_PICKS_COUNT = 3-5
- EXTRA_SECTIONS = 📈 Weekly Trend Summary (2-3 sentences summarizing macro trends)
- SUBJECT = Weekly Tech Digest - YYYY-MM-DD
- WORKSPACE = <your workspace path>
- SKILL_DIR = <your skill install path>
- DISCORD_CHANNEL_ID = <your channel id>
- EMAIL = (optional)
- LANGUAGE = English
- TEMPLATE = discord
Follow every step in the prompt template strictly. Do not skip any steps.
This keeps all digest logic in digest-prompt.md, not scattered across cron configs.

OpenClaw enforces cross-provider isolation: a single session can only send messages to one provider (e.g., Discord OR Telegram, not both). If you need to deliver digests to multiple platforms, create separate cron jobs for each provider:
# Job 1: Discord + Email
- DISCORD_CHANNEL_ID = <your-discord-channel-id>
- EMAIL = [email protected]
- TEMPLATE = discord
# Job 2: Telegram DM
- DISCORD_CHANNEL_ID = (none)
- EMAIL = (none)
- TEMPLATE = telegram
Replace DISCORD_CHANNEL_ID delivery with the target platform's delivery in the second job's prompt.
This is a security feature, not a bug: it prevents accidental cross-context data leakage.
This skill uses a prompt template pattern: the agent reads digest-prompt.md and follows its instructions. This is the standard OpenClaw skill execution model β the agent interprets structured instructions from skill-provided files. All instructions are shipped with the skill bundle and can be audited before installation.
The Python scripts make outbound requests to:
- RSS feeds configured in tech-news-digest-sources.json
- Twitter/X API (api.x.com or api.twitterapi.io)
- Brave Search API (api.search.brave.com)
- Tavily Search API (api.tavily.com)
- GitHub API (api.github.com)
- Reddit JSON API (reddit.com)

No data is sent to any other endpoints. All API keys are read from environment variables declared in the skill metadata.
Email delivery uses send-email.py which constructs proper MIME multipart messages with HTML body + optional PDF attachment. Subject formats are hardcoded (Daily Tech Digest - YYYY-MM-DD). PDF generation uses generate-pdf.py via weasyprint. The prompt template explicitly prohibits interpolating untrusted content (article titles, tweet text, etc.) into shell arguments. Email addresses and subjects must be static placeholder values only.
Scripts read from config/ and write to workspace/archive/. No files outside the workspace are accessed.
- Run with --verbose for details
- Run validate-config.py for specific issues
- Check the time window (--hours) and source enablement

All scripts support a --verbose flag for detailed logging and troubleshooting.
- Adjust MAX_WORKERS in scripts for your system
- Increase TIMEOUT for slow networks
- Tune MAX_ARTICLES_PER_FEED based on your needs

The digest prompt instructs agents to run Python scripts via shell commands. All script paths and arguments are skill-defined constants; no user input is interpolated into commands. Two scripts use subprocess:
- run-pipeline.py orchestrates child fetch scripts (all within the scripts/ directory)
- fetch-github.py has two subprocess calls:
  - openssl dgst -sha256 -sign for JWT signing (only if GH_APP_* env vars are set; signs a self-constructed JWT payload, no user content involved)
  - gh auth token CLI fallback (only if gh is installed; reads from gh's own credential store)

No user-supplied or fetched content is ever interpolated into subprocess arguments. Email delivery uses send-email.py, which builds MIME messages programmatically with no shell interpolation. PDF generation uses generate-pdf.py via weasyprint. Email subjects are static format strings only, never constructed from fetched data.
Scripts do not directly read ~/.config/, ~/.ssh/, or any credential files. All API tokens are read from environment variables declared in the skill metadata. The GitHub auth cascade is:

1. $GITHUB_TOKEN env var (you control what to provide)
2. GitHub App credentials (GH_APP_ID, GH_APP_INSTALL_ID, and GH_APP_KEY_FILE; uses inline JWT signing via the openssl CLI, no external scripts involved)
3. gh auth token CLI (delegates to gh's own secure credential store)

If you prefer no automatic credential discovery, simply set $GITHUB_TOKEN and the script will use it directly without attempting steps 2-3.
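The cascade can be sketched as below. This is an illustrative reading of the documented behavior, not the real fetch-github.py code; in particular, the GitHub App token exchange is deliberately omitted.

```python
import os
import shutil
import subprocess

def resolve_github_token(env=None):
    """Sketch of the documented GitHub auth cascade (illustrative only).

    Returns a token string, or None for unauthenticated access.
    """
    env = os.environ if env is None else env
    # 1. An explicit token always wins.
    if env.get("GITHUB_TOKEN"):
        return env["GITHUB_TOKEN"]
    # 2. GitHub App credentials would trigger inline JWT signing via the
    #    openssl CLI and an installation-token exchange (omitted here).
    if all(env.get(k) for k in ("GH_APP_ID", "GH_APP_INSTALL_ID", "GH_APP_KEY_FILE")):
        raise NotImplementedError("App-token exchange omitted from this sketch")
    # 3. Fall back to the gh CLI's own credential store.
    if shutil.which("gh"):
        proc = subprocess.run(["gh", "auth", "token"],
                              capture_output=True, text=True)
        if proc.returncode == 0 and proc.stdout.strip():
            return proc.stdout.strip()
    # 4. Unauthenticated access: 60 requests/hour.
    return None
```

Passing a plain dict as `env` makes the cascade easy to exercise without touching real credentials.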
This skill does not install any packages. requirements.txt lists optional dependencies (feedparser, jsonschema) for reference only. All scripts work with the Python 3.8+ standard library. Users should install optional deps in a virtualenv if desired; the skill never runs pip install.
Scripts make outbound HTTP requests to configured RSS feeds, Twitter API, GitHub API, Reddit JSON API, Brave Search API, and Tavily Search API. No inbound connections or listeners are created.