Use iDeer as a daily paper-reading workflow for chatbot-first users such as Codex, Gemini, or ChatGPT. Keep the original iDeer paper-digest setup (source selection, history validation, email/report/ideas workflow), but replace in-repo LLM API summarization and scoring with the current chatbot session. Suitable for daily paper digestion, reports, idea generation, and automation without configuring a separate OpenAI/SiliconFlow/Ollama API key.
Use this skill when the user wants the iDeer daily-paper workflow but does not want the repo to call its own LLM API. The chatbot should do the reading, scoring, grouping, report writing, and idea generation directly in the current conversation.
Keep as much of the original iDeer workflow as possible:
- `.env`, `profiles/description.txt`, and `profiles/researcher_profile.md` as configuration inputs
- `history/` as the artifact destination when saving outputs

But do not rely on `main.py` for any step that requires `MODEL_NAME`, `BASE_URL`, `API_KEY`, or Ollama. Instead, fetch raw items and have the chatbot perform the intelligence layer.

Replace these original in-repo LLM tasks with chatbot work in-session:
Do not call python main.py or bash scripts/run_daily.sh unless the user explicitly wants to test the original API-based pipeline. For chatbot-first runs, fetch raw data with the repo's fetchers or with web browsing and continue in the conversation.
Always check:
- `.env`
- `profiles/description.txt`

Check when needed:

- `profiles/researcher_profile.md`
- `profiles/x_accounts.txt`

If `.env` does not exist, copy it from `.env.example`. Do not invent secrets.
Map the user request to one of these modes:
- Fix `.env`, profiles, categories, or fetchers so source collection works.
- Default sources: `arxiv`, `semanticscholar`, `huggingface`.
- Add `github` only when the user wants code/repo signals.
- Add `twitter` only when the user explicitly wants social signals and credentials exist.
- Default arXiv categories: `cs.AI cs.CL cs.LG`; expand to `cs.CV cs.RO` for embodied, spatial, or robotics interests.

Read the profile and decide which sources and categories fit.
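The defaults above can be captured in one illustrative config fragment. These names are not defined by the repo; actual selection is a per-request judgment call:

```python
# Illustrative defaults mirroring the source-selection guidance; not a repo config.
DEFAULT_SOURCES = ["arxiv", "semanticscholar", "huggingface"]
OPTIONAL_SOURCES = {
    "github": "only when the user wants code/repo signals",
    "twitter": "only with explicit social-signal interest and existing credentials",
}
DEFAULT_CATEGORIES = ["cs.AI", "cs.CL", "cs.LG"]
EMBODIED_EXTRAS = ["cs.CV", "cs.RO"]  # add for embodied/spatial/robotics interests

def arxiv_categories(embodied: bool) -> list:
    """Return the category list, widening it for embodied/spatial/robotics profiles."""
    return DEFAULT_CATEGORIES + (EMBODIED_EXTRAS if embodied else [])
```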
Use references/presets.md for presets.
Prefer the repo fetchers first when the repo is available:
- `fetchers/arxiv_fetcher.py`
- `fetchers/huggingface_fetcher.py`
- `fetchers/semanticscholar_fetcher.py`
- `fetchers/github_fetcher.py`
- `fetchers/twitter_fetcher.py`

If the repo is not available or a fetcher is broken, use browsing and cite the public source pages.
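The fetch-then-fallback behavior can be sketched as follows. `get_daily_papers` is one of the repo fetchers listed above; outside the repo the import simply fails and we fall back:

```python
# Sketch: prefer the repo fetcher; on any failure, signal a browsing fallback.
def fetch_candidates(limit: int = 10) -> list:
    try:
        from fetchers.huggingface_fetcher import get_daily_papers
        return get_daily_papers(limit)
    except Exception as exc:  # repo missing, fetcher broken, network down, etc.
        print(f"fetcher unavailable ({exc!r}); fall back to web browsing")
        return []
```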
Fetch raw candidates only. Do not call the repo's LLM scoring path.
The chatbot should do the reading, scoring, grouping, report writing, and idea generation directly in the conversation.
When the user has given explicit directions such as Agent / Spatial Intelligence / World Model, preserve those headings in the final digest.
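A minimal sketch of heading-preserving grouping follows. The keyword buckets are hypothetical stand-ins; the real grouping is the chatbot's own judgment, and this only shows the output shape:

```python
# Hypothetical keyword buckets mirroring the user's explicit headings.
HEADINGS = {
    "Agent": ["agent", "tool use", "planning"],
    "Spatial Intelligence": ["spatial", "3d", "scene"],
    "World Model": ["world model", "dynamics", "simulation"],
}

def group_papers(papers: list) -> dict:
    """Bucket papers under the user's headings, with an Other catch-all."""
    groups = {heading: [] for heading in HEADINGS}
    groups["Other"] = []
    for paper in papers:
        text = (paper["title"] + " " + paper.get("abstract", "")).lower()
        for heading, keywords in HEADINGS.items():
            if any(k in text for k in keywords):
                groups[heading].append(paper)
                break
        else:
            groups["Other"].append(paper)
    return groups
```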
Prefer these output shapes:
- `history/<source>/<date>/<date>.md` for source-level markdown digests
- `history/reports/<date>/report.md` for the cross-source report
- `history/ideas/<date>/ideas.json` for structured idea output
- `history/<source>/<date>/<source>_email.html` if you render an HTML email body

It is acceptable for chatbot-first runs to write fewer files than the original pipeline, as long as you report exactly what was written.
If the user wants HTML artifacts without touching the main repo scripts, use the bundled renderer:
```bash
python skills/ideer-daily-paper-chatbot/scripts/render_chatbot_artifacts.py \
  --date YYYY-MM-DD \
  --base-dir <artifact-dir>
```
This script renders `report.html` and `digest_email.html` from chatbot-written markdown/JSON outputs inside the chosen artifact directory.
If SMTP is incomplete, do not claim that email was sent. Save the digest locally and tell the user what is missing.
If SMTP is complete and the user explicitly asked for sending, either do a dry run that renders the email body without sending, or perform a live send.
Never send email on the first validation run unless the user clearly asked for a live send.
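The "is SMTP complete" decision can be sketched as a hedged completeness check. The `SMTP_*` key names here are assumptions; match them to the repo's actual `.env.example` before relying on this:

```python
# Sketch: decide whether sending is even possible before claiming an email was sent.
import os

REQUIRED_SMTP_KEYS = ["SMTP_HOST", "SMTP_PORT", "SMTP_USER", "SMTP_PASSWORD", "EMAIL_TO"]

def smtp_missing(env=None) -> list:
    """Return the SMTP settings that are absent; an empty list means sending is possible."""
    env = os.environ if env is None else env
    return [k for k in REQUIRED_SMTP_KEYS if not env.get(k)]
```

If `smtp_missing()` is non-empty, save the digest locally and tell the user exactly which keys are missing.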
For chatbot-first automation, prefer Codex automation over cron. Use the repo root as the working directory and write the prompt so the chatbot fetches raw source items, performs summarization itself, saves artifacts, and only sends email if SMTP exists.
Use small fetch/test commands instead of the full original pipeline.
Examples:
```bash
.venv/bin/python - <<'PY'
from fetchers.huggingface_fetcher import get_daily_papers
print(len(get_daily_papers(10)))
PY
```

```bash
.venv/bin/python - <<'PY'
from fetchers.arxiv_fetcher import fetch_papers_for_categories
print(fetch_papers_for_categories(['cs.AI','cs.LG'], max_entries=25, sleep_range=(0,0)).keys())
PY
```
Use bash scripts/run_daily.sh only to debug the legacy API-based path.
After each run, report which sources were fetched, which artifacts were written and where, and whether email was sent, skipped, or blocked by missing SMTP settings.
For users who want paper digestion without API keys, start with the `arxiv` and `huggingface` sources.