Research a topic from the last 30 days. Also triggered by 'last30'. Sources: Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, web. Become an expert and write copy-paste-ready prompts.
Permissions overview: Reads public web/platform data and optionally saves research briefings to
~/Documents/Last30Days/. X/Twitter search uses optional user-provided tokens (AUTH_TOKEN/CT0 env vars) — no browser session access. All credential usage and data writes are documented in the Security & Permissions section.
Research ANY topic across Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, and the web. Surface what people are actually discussing, recommending, betting on, and debating right now.
Before doing anything, parse the user's input for:
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about target tool before research.
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | GENERAL]

DISPLAY your parsing to the user. Before running any tools, output:
I'll research {TOPIC} across Reddit, X, TikTok, and the web to find what's been discussed in the last 30 days.
Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}
Research typically takes 2-8 minutes (niche topics take longer). Starting now.
If TARGET_TOOL is known, mention it in the intro: "...to find {QUERY_TYPE}-style content for use in {TARGET_TOOL}."
This text MUST appear before you call any tools. It confirms to the user that you understood their request.
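For example (hypothetical invocation, not from real research): `/last30days best note-taking apps` would parse to TOPIC = "best note-taking apps", TARGET_TOOL = "unknown", QUERY_TYPE = RECOMMENDATIONS, and the intro sentence would simply omit the tool clause.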
If TOPIC looks like it could have its own X/Twitter account - people, creators, brands, products, tools, companies, communities (e.g., "Dor Brothers", "Jason Calacanis", "Nano Banana Pro", "Seedance", "Midjourney"), do ONE quick WebSearch:
WebSearch("{TOPIC} X twitter handle site:x.com")
From the results, extract their X/Twitter handle. Look for:
- x.com/{handle} or twitter.com/{handle}

Verify the account is real, not a parody/fan account. Check for:
If you find a clear, verified handle, pass it as --x-handle={handle} (without @). This searches that account's posts directly - finding content they posted that doesn't mention their own name.
Skip this step if:
- --quick depth

Store: RESOLVED_HANDLE = {handle or empty}
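For instance (the handle here is hypothetical, not a verified lookup), a resolved handle is passed straight through to the Step 1 command below:

```bash
# Hypothetical: handle resolution for "Dor Brothers" found x.com/dorbrothers,
# so the research command gains the extra flag:
python3 "${SKILL_ROOT}/scripts/last30days.py" "dor brothers" --emit=compact --no-native-web \
  --save-dir=~/Documents/Last30Days --x-handle=dorbrothers
```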
If --agent appears in ARGUMENTS (e.g., /last30days plaud granola --agent):
- No AskUserQuestion calls - use TARGET_TOOL = "unknown" if not specified
- Agent mode saves raw research data to ~/Documents/Last30Days/ automatically via --save-dir (handled by the script, no extra tool calls).
Agent mode report format:
## Research Report: {TOPIC}
Generated: {date} | Sources: Reddit, X, YouTube, TikTok, HN, Polymarket, Web
### Key Findings
[3-5 bullet points, highest-signal insights with citations]
### What I learned
{The full "What I learned" synthesis from normal output}
### Stats
{The standard stats block}
Step 1: Run the research script (FOREGROUND — do NOT background this)
CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains data from all eight sources (Reddit, X, YouTube, TikTok, Instagram, HN, Polymarket, and web) that you need to read completely.
IMPORTANT: The script handles API key/Codex auth detection automatically. Run it and check the output to determine mode.
# Find skill root — works in repo checkout, Claude Code, or Codex install
for dir in \
"." \
"${CLAUDE_PLUGIN_ROOT:-}" \
"$HOME/.claude/skills/last30days" \
"$HOME/.agents/skills/last30days" \
"$HOME/.codex/skills/last30days"; do
[ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
echo "ERROR: Could not find scripts/last30days.py" >&2
exit 1
fi
python3 "${SKILL_ROOT}/scripts/last30days.py" "$ARGUMENTS" --emit=compact --no-native-web --save-dir=~/Documents/Last30Days # Add --x-handle=HANDLE if RESOLVED_HANDLE is set
Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes.
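Hypothetical invocation variants (the topic argument is illustrative; the pass-through flags are described under Options below):

```bash
# Default 30-day research run
python3 "${SKILL_ROOT}/scripts/last30days.py" "best note-taking apps" \
  --emit=compact --no-native-web --save-dir=~/Documents/Last30Days

# Weekly roundup with faster, fewer sources (flags passed through from the user's command)
python3 "${SKILL_ROOT}/scripts/last30days.py" "best note-taking apps" \
  --emit=compact --no-native-web --save-dir=~/Documents/Last30Days --days=7 --quick
```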
The script will automatically:
Read the ENTIRE output. It contains EIGHT data sections in this order: Reddit items, X items, YouTube items, TikTok items, Instagram Reels items, Hacker News items, Polymarket items, and WebSearch items. If you miss sections, you will produce incomplete stats.
YouTube items in the output look like: **{video_id}** (score:N) {channel_name} [N views, N likes] followed by a title, URL, and optional transcript snippet. Count them and include them in your synthesis and stats block.
TikTok items in the output look like: **{TK_id}** (score:N) @{creator} [N views, N likes] followed by a caption, URL, hashtags, and optional caption snippet. Count them and include them in your synthesis and stats block.
Instagram Reels items in the output look like: **{IG_id}** (score:N) @{creator} (date) [N views, N likes] followed by caption text, URL, and optional transcript. Count them and include them in your synthesis and stats block. Instagram provides unique creator/influencer perspective — weight it alongside TikTok.
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}

If NEWS ("what's happening with X", "X news"):
- {TOPIC} news 2026
- {TOPIC} announcement update

If PROMPTING ("X prompts", "prompting for X"):
- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips

If GENERAL (default):
- {TOPIC} 2026
- {TOPIC} discussion

For ALL query types:
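A concrete (hypothetical) substitution of the templates above: with TOPIC = "nano banana pro prompts" and QUERY_TYPE = PROMPTING, the supplemental searches would be WebSearch("nano banana pro prompts examples 2026") and WebSearch("nano banana pro techniques tips").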
Options (passed through from user's command):
- --days=N → Look back N days instead of 30 (e.g., --days=7 for weekly roundup)
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
Weight YouTube sources HIGH (they have views, likes, and transcript content)
Weight TikTok sources HIGH (they have views, likes, and caption content — viral signal)
Weight WebSearch sources LOWER (no engagement data)
For Reddit: Pay special attention to top comments — they often contain the wittiest, most insightful, or funniest take. When a top comment has high upvotes (shown as 💬 Top comment (N upvotes)), quote it directly in your synthesis. Reddit's value is in the comments.
Identify patterns that appear across ALL sources (strongest signals)
Note any contradictions between sources
Extract the top 3-5 actionable insights
Cross-platform signals are the strongest evidence. When items have [also on: Reddit, HN] or similar tags, it means the same story appears across multiple platforms. Lead with these cross-platform findings - they're the most important signals in the research.
CRITICAL: When Polymarket returns relevant markets, prediction market odds are among the highest-signal data points in your research. Real money on outcomes cuts through opinion. Treat them as strong evidence, not an afterthought.
How to interpret and synthesize Polymarket data:
Prefer structural/long-term markets over near-term deadlines. Championship odds > regular season title. Regime change > near-term strike deadline. IPO/major milestone > incremental update. Presidency > individual state primary. When multiple markets exist, the bigger question is more interesting to the user.
When the topic is an outcome in a multi-outcome market, call out that specific outcome's odds and movement. Don't just say "Polymarket has a #1 seed market" - say "Arizona has a 28% chance of being the #1 overall seed, up 10% this month." The user cares about THEIR topic's position in the market.
Weave odds into the narrative as supporting evidence. Don't isolate Polymarket data in its own paragraph. Instead: "Final Four buzz is building - Polymarket gives Arizona a 12% chance to win the championship (up 3% this week), and 28% to earn a #1 seed."
Citation format: Always include specific odds AND movement. "Polymarket has Arizona at 28% for a #1 seed (up 10% this month)" - not just "per Polymarket."
When multiple relevant markets exist, highlight 3-5 of the most interesting ones in your synthesis, ordered by importance (structural > near-term). Don't just pick the highest-volume one.
Domain examples of market importance ranking:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
Identify from the ACTUAL RESEARCH OUTPUT:
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned with sources:
🏆 Most mentioned:
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle1, @handle2, r/sub, blog.com
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle3, r/sub2, Complex
Notable mentions: [other specific things with 1-2 mentions]
CRITICAL for RECOMMENDATIONS:
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
CITATION RULE: Cite sources sparingly to prove research is real.
CITATION PRIORITY (most to least preferred):
The tool's value is surfacing what PEOPLE are saying, not what journalists wrote. When both a web article and an X post cover the same fact, cite the X post.
URL FORMATTING: NEVER paste raw URLs anywhere in the output — not in synthesis, not in stats, not in sources.
BAD: 🌐 Web: 10 pages — https://later.com/blog/..., https://buffer.com/...
GOOD: 🌐 Web: 10 pages — Later, Buffer, CNN, SocialBee
Use the publication/site name, not the URL. The user doesn't need links — they need clean, readable text.

BAD: "His album is set for March 20 (per Rolling Stone; Billboard; Complex)."
GOOD: "His album BULLY drops March 20 — fans on X are split on the tracklist, per @honest30bgfan_"
GOOD: "Ye's apology got massive traction on r/hiphopheads"
OK (web, only when Reddit/X don't have it): "The Hellwatt Festival runs July 4-18 at RCF Arena, per Billboard"
Lead with people, not publications. Start each topic with what Reddit/X users are saying/feeling, then add web context only if needed. The user came here for the conversation, not the press release.
What I learned:
**{Topic 1}** — [1-2 sentences about what people are saying, per @handle or r/sub]
**{Topic 2}** — [1-2 sentences, per @handle or r/sub]
**{Topic 3}** — [1-2 sentences, per @handle or r/sub]
KEY PATTERNS from the research:
1. [Pattern] — per @handle
2. [Pattern] — per r/sub
3. [Pattern] — per @handle
THEN - Stats (right before invitation):
CRITICAL: Calculate actual totals from the research output.
- [Xlikes, Yrt] from each X post, [Xpts, Ycmt] from Reddit

Copy this EXACTLY, replacing only the {placeholders}:
---
✅ All agents reported back!
├─ 🟠 Reddit: {N} threads │ {N} upvotes │ {N} comments
├─ 🔵 X: {N} posts │ {N} likes │ {N} reposts
├─ 🔴 YouTube: {N} videos │ {N} views │ {N} with transcripts
├─ 🎵 TikTok: {N} videos │ {N} views │ {N} likes │ {N} with captions
├─ 📸 Instagram: {N} reels │ {N} views │ {N} likes │ {N} with captions
├─ 🟡 HN: {N} stories │ {N} points │ {N} comments
├─ 📊 Polymarket: {N} markets │ {short summary of up to 5 most relevant market odds, e.g. "Championship: 12%, #1 Seed: 28%, Big 12: 64%, vs Kansas: 71%"}
├─ 🌐 Web: {N} pages — Source Name, Source Name, Source Name
└─ 🗣️ Top voices: @{handle1} ({N} likes), @{handle2} │ r/{sub1}, r/{sub2}
---
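A worked example with hypothetical numbers: if the research output contains exactly two X posts tagged [1.2K likes, 310 rt] and [800 likes, 40 rt], the X line should read ├─ 🔵 X: 2 posts │ 2,000 likes │ 350 reposts. Sum the actual bracketed values from the output.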
🌐 Web: line — how to extract site names from URLs:
Strip the protocol, path, and www. — use the recognizable publication name:
- https://later.com/blog/instagram-reels-trends/ → Later
- https://socialbee.com/blog/instagram-trends/ → SocialBee
- https://buffer.com/resources/instagram-algorithms/ → Buffer
- https://www.cnn.com/2026/02/22/tech/... → CNN
- https://medium.com/the-ai-studio/... → Medium
- https://radicaldatascience.wordpress.com/... → Radical Data Science
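If you want to sanity-check the host extraction mechanically, here is a rough shell sketch of the "strip the protocol, path, and www." step (illustrative only; turning the bare host into a display name like CNN or Radical Data Science still takes judgment):

```bash
# Reduce a URL to its bare host: drop the scheme, drop everything after the first "/", drop "www."
echo "https://www.cnn.com/2026/02/22/tech/story" \
  | sed -E 's#^https?://##; s#/.*$##; s#^www\.##'
# prints: cnn.com
```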
List as comma-separated plain names: Later, SocialBee, Buffer, CNN, Medium

⚠️ WebSearch citation — ALREADY SATISFIED. DO NOT ADD A SOURCES SECTION. The WebSearch tool mandates source citation. That requirement is FULLY satisfied by the source names on the 🌐 Web: line above. Do NOT append a separate "Sources:" section at the end of your response. Do NOT list URLs anywhere. The 🌐 Web: line IS your citation. Nothing more is needed.
CRITICAL: Omit any source line that returned 0 results. Do NOT show "0 threads", "0 stories", "0 markets", or "(no results this cycle)". If a source found nothing, DELETE that line entirely - don't include it at all. NEVER use plain text dashes (-) or pipe (|). ALWAYS use ├─ └─ │ and the emoji.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If you catch yourself projecting your own knowledge instead of the research, rewrite it.
LAST - Invitation (adapt to QUERY_TYPE):
CRITICAL: Every invitation MUST include 2-3 specific example suggestions based on what you ACTUALLY learned from the research. Don't be generic — show the user you absorbed the content by referencing real things from the results.
If QUERY_TYPE = PROMPTING:
---
I'm now an expert on {TOPIC} for {TARGET_TOOL}. What do you want to make? For example:
- [specific idea based on popular technique from research]
- [specific idea based on trending style/approach from research]
- [specific idea riffing on what people are actually creating]
Just describe your vision and I'll write a prompt you can paste straight into {TARGET_TOOL}.
If QUERY_TYPE = RECOMMENDATIONS:
---
I'm now an expert on {TOPIC}. Want me to go deeper? For example:
- [Compare specific item A vs item B from the results]
- [Explain why item C is trending right now]
- [Help you get started with item D]
If QUERY_TYPE = NEWS:
---
I'm now an expert on {TOPIC}. Some things you could ask:
- [Specific follow-up question about the biggest story]
- [Question about implications of a key development]
- [Question about what might happen next based on current trajectory]
If QUERY_TYPE = GENERAL:
---
I'm now an expert on {TOPIC}. Some things I can help with:
- [Specific question based on the most discussed aspect]
- [Specific creative/practical application of what you learned]
- [Deeper dive into a pattern or debate from the research]
Example invitations (to show the quality bar):
For /last30days nano banana pro prompts for Gemini:
I'm now an expert on Nano Banana Pro for Gemini. What do you want to make? For example:
- Photorealistic product shots with natural lighting (the most requested style right now)
- Logo designs with embedded text (Gemini's new strength per the research)
- Multi-reference style transfer from a mood board
Just describe your vision and I'll write a prompt you can paste straight into Gemini.
For /last30days kanye west (GENERAL):
I'm now an expert on Kanye West. Some things I can help with:
- What's the real story behind the apology letter — genuine or PR move?
- Break down the BULLY tracklist reactions and what fans are expecting
- Compare how Reddit vs X are reacting to the Bianca narrative
For /last30days war in Iran (NEWS):
I'm now an expert on the Iran situation. Some things you could ask:
- What are the realistic escalation scenarios from here?
- How is this playing differently in US vs international media?
- What's the economic impact on oil markets so far?
STOP and wait for the user to respond. Do NOT call any tools after displaying the invitation. The research script already saved raw data to ~/Documents/Last30Days/ via --save-dir.
Read their response and match the intent:
Only write a prompt when the user wants one. Don't force a prompt on someone who asked "what could happen next with Iran."
When the user wants a prompt, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT.
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
---
This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
For the rest of this conversation, remember:
CRITICAL: After research is complete, treat yourself as an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} YouTube videos ({sum} views) + {n} TikTok videos ({sum} views) + {n} Instagram reels ({sum} views) + {n} HN stories ({sum} points) + {n} web pages
Want another prompt? Just tell me what you're creating next.
What this skill does:
- ScrapeCreators API (api.scrapecreators.com) for Reddit search, subreddit discovery, and comment enrichment (requires SCRAPECREATORS_API_KEY — same key as TikTok + Instagram)
- OpenAI API (api.openai.com) for Reddit discovery (fallback if no SCRAPECREATORS_API_KEY)
- xAI API (api.x.ai) for X search
- Algolia HN Search API (hn.algolia.com) for Hacker News story and comment discovery (free, no auth)
- Polymarket Gamma API (gamma-api.polymarket.com) for prediction market discovery (free, no auth)
- yt-dlp locally for YouTube search and transcript extraction (no API key, public data)
- ScrapeCreators API (api.scrapecreators.com) for TikTok and Instagram search, transcript/caption extraction (same SCRAPECREATORS_API_KEY as Reddit, PAYG after 100 free credits)
- reddit.com for engagement metrics

What this skill does NOT do:
- --agent for non-interactive report output

Bundled scripts: scripts/last30days.py (main research engine), scripts/lib/ (search, enrichment, rendering modules), scripts/lib/vendor/bird-search/ (vendored X search client, MIT licensed)
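A minimal, hypothetical shell setup before first use (variable names come from this document; values are placeholders, and only the credentials for sources you actually want are needed):

```bash
export SCRAPECREATORS_API_KEY="..."   # Reddit enrichment + TikTok + Instagram (ScrapeCreators)
export AUTH_TOKEN="..."               # optional X/Twitter session token (see Permissions overview)
export CT0="..."                      # optional X/Twitter CSRF token, paired with AUTH_TOKEN
```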
Review scripts before first use to verify behavior.