Internal architecture spec for the v3 last30days runtime pipeline. Not user-invocable.
Use last30days when the user wants recent, cross-source evidence from the last 30 days.
The runtime is a single v3 pipeline that fans each query out into (subquery, source) pairs.

Resolve the skill root from the known install locations:

```sh
for dir in \
  "." \
  "${CLAUDE_PLUGIN_ROOT:-}" \
  "${GEMINI_EXTENSION_DIR:-}" \
  "$HOME/.openclaw/workspace/skills/last30days" \
  "$HOME/.openclaw/skills/last30days" \
  "$HOME/.claude/skills/last30days" \
  "$HOME/.agents/skills/last30days" \
  "$HOME/.codex/skills/last30days"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
  echo "ERROR: Could not find scripts/last30days.py" >&2
  exit 1
fi
```
Select a Python 3.12+ interpreter:

```sh
for py in python3.14 python3.13 python3.12 python3; do
  command -v "$py" >/dev/null 2>&1 || continue
  "$py" -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 12) else 1)' || continue
  LAST30DAYS_PYTHON="$py"
  break
done
if [ -z "${LAST30DAYS_PYTHON:-}" ]; then
  echo "ERROR: last30days v3 requires Python 3.12+. Install python3.12 or python3.13 and rerun." >&2
  exit 1
fi
```
Invocation patterns:

```sh
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --emit=compact
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --emit=json
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --quick
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --deep
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --search=reddit,x,grounding
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --store
"${LAST30DAYS_PYTHON}" "${SKILL_ROOT}/scripts/last30days.py" --diagnose
```
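A downstream consumer of the `--emit=json` report can read its top-level fields directly. The field names in this sketch are the ones this spec documents (provider_runtime, query_plan, ranked_candidates, clusters, items_by_source, errors_by_source); the nested shapes in the sample are illustrative assumptions, not the real schema:

```python
import json

# Hypothetical excerpt of a v3 JSON report. The top-level keys match this
# spec's documented report fields; the nested values are made up.
report = json.loads("""
{
  "provider_runtime": {"llm": "gemini"},
  "query_plan": ["subquery one", "subquery two"],
  "ranked_candidates": [{"title": "Example post", "score": 0.91}],
  "clusters": [{"label": "example cluster", "size": 3}],
  "items_by_source": {"reddit": 12, "x": 8},
  "errors_by_source": {"tiktok": "missing SCRAPECREATORS_API_KEY"}
}
""")

# Surface per-source failures first, then the strongest candidate.
for source, err in report["errors_by_source"].items():
    print(f"WARN {source}: {err}")
top = max(report["ranked_candidates"], key=lambda c: c["score"])
print(f"top candidate: {top['title']} ({top['score']})")
```

Checking errors_by_source before interpreting results makes a missing API key obvious rather than looking like a thin topic.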
Requires one LLM provider key: GOOGLE_API_KEY for Gemini, MINIMAX_API_KEY for MiniMax, or XAI_API_KEY for xAI.

Optional integrations:
- BRAVE_API_KEY enables Brave web search (recommended).
- SERPER_API_KEY is the web fallback.
- SCRAPECREATORS_API_KEY enables Reddit, TikTok, and Instagram.
- XAI_API_KEY enables xAI reasoning and X search.
- AUTH_TOKEN plus CT0 enables Bird-backed X search.
- yt-dlp enables YouTube.

Emit formats:
- compact and md: cluster-first markdown
- json: full v3 report
- context: short synthesis-oriented context

Important report fields: provider_runtime, query_plan, ranked_candidates, clusters, items_by_source, errors_by_source.

Flag guidance:
- --quick for fast iteration.
- --deep only when the user explicitly wants maximum recall or the topic is complex enough to justify the extra latency.
- --emit=json when downstream code or evaluation will consume the result.
- --search= only when the user explicitly wants source restrictions.

If the topic could have its own X/Twitter account (people, brands, products, companies), do a quick WebSearch for their handle:
```
WebSearch("{TOPIC} X twitter handle site:x.com")
```
If you find a verified handle, pass --x-handle={handle} (without @). This searches their posts directly, finding content they posted that doesn't mention their own name. Skip this for generic concepts ("best headphones 2026", "how to use Docker").
Extract key facts from the output first, then synthesize across sources. Lead with patterns that appear across multiple clusters. Present a unified narrative, not a source-by-source summary.
Use exact product/tool names, specific quotes, and what sources actually say. If research mentions "ClawdBot" and "@clawdbot", that is a different product than "Claude Code" -- read what the research actually says.
Anti-pattern to avoid:
When Polymarket returns relevant markets:
Domain importance ranking:
Cite the single strongest source per point in short format: "per @handle" or "per r/subreddit". Save engagement metrics for the stats section. Use the priority order from source weighting above. The tool's value is surfacing what PEOPLE are saying, not what journalists wrote.
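Picking the single strongest source becomes mechanical once a priority order exists. A sketch assuming a placeholder order; the real ranking is the domain importance section of this spec, which is not reproduced here:

```python
# Placeholder priority order (lower index = stronger). The real order comes
# from this spec's "Domain importance ranking" section.
PRIORITY = ["x", "reddit", "youtube", "web"]

def cite(sources: list[dict]) -> str:
    """Format the single strongest source as a short citation."""
    strongest = min(sources, key=lambda s: PRIORITY.index(s["kind"]))
    if strongest["kind"] == "x":
        return f"per @{strongest['name']}"
    if strongest["kind"] == "reddit":
        return f"per r/{strongest['name']}"
    return f"per {strongest['name']}"

print(cite([{"kind": "reddit", "name": "LocalLLaMA"},
            {"kind": "x", "name": "karpathy"}]))  # per @karpathy
```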
For "X vs Y" queries, structure output as:
## Quick Verdict
[1-2 sentences: which one the community prefers and why, with source counts]
## [Entity A]
**Community Sentiment:** [Positive/Mixed/Negative] (N mentions across sources)
**Strengths:** [with source attribution]
**Weaknesses:** [with source attribution]
## [Entity B]
[Same structure]
## Head-to-Head
| Dimension | Entity A | Entity B |
|-----------|----------|----------|
| [Key dim] | [position] | [position] |
## Bottom Line
Choose A if... Choose B if... (based on community data)
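The sentiment lines in the template above can be derived by tallying tagged mentions. A sketch over made-up (entity, sentiment) pairs; real synthesis should also weigh source quality, not just counts:

```python
from collections import Counter

# Hypothetical (entity, sentiment) tags collected while reading the report;
# entities and counts are made up for illustration.
mentions = [
    ("Entity A", "positive"), ("Entity A", "positive"), ("Entity A", "negative"),
    ("Entity B", "negative"), ("Entity B", "negative"), ("Entity B", "mixed"),
]

for entity in ("Entity A", "Entity B"):
    counts = Counter(s for e, s in mentions if e == entity)
    label, _ = counts.most_common(1)[0]  # simple majority label
    total = sum(counts.values())
    print(f"{entity} -- **Community Sentiment:** {label.title()} "
          f"({total} mentions across sources)")
```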
When users ask "best X" or "top X", extract SPECIFIC NAMES:
Most mentioned:
1. [Name] -- Nx mentions
   - Sources: @handle1, r/subreddit, [YouTube channel]
2. [Name] -- Nx mentions
   - Sources: @handle2, r/subreddit2

Notable mentions: [others with 1-2 mentions]
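The ranking above is a straightforward count over extracted (name, source) pairs. A sketch with hypothetical data:

```python
from collections import Counter, defaultdict

# Hypothetical (name, source) pairs extracted from the research output;
# product names and counts are made up for illustration.
hits = [
    ("Sony WH-1000XM5", "@techreviewer"),
    ("Sony WH-1000XM5", "r/headphones"),
    ("Sony WH-1000XM5", "r/audiophile"),
    ("AirPods Max", "@applefan"),
    ("AirPods Max", "r/headphones"),
]

counts = Counter(name for name, _ in hits)
sources = defaultdict(list)
for name, src in hits:
    sources[name].append(src)

for name, n in counts.most_common():
    print(f"{name} -- {n}x mentions")
    print(f"  Sources: {', '.join(sources[name])}")
```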
After research completes, treat yourself as an expert on this topic. Answer follow-ups from the research findings. Cite the specific threads, posts, and channels you found. Only run new research if the user asks about a DIFFERENT topic.
What this skill does:
What this skill does NOT do: