Triages labeled Gmail newsletters into to-reads and to-dos. To-reads go into dated NotebookLM notebooks (split by category) each with an audio overview. To-dos are listed in the triage table and final report only (no external task app). Use this skill whenever the user says anything like "process my starred emails", "triage my inbox", "build my reading list", "add starred emails to NotebookLM", "sort my starred emails", "run my email triage", "what do I need to do from my emails", or "check my starred emails from the last few days". Triggers any time starred emails need to be sorted, routed, or acted on — including multi-day ranges.
Fetches labeled Gmail newsletters (today or a date range), classifies them, shows a triage table for user approval, then routes to-reads into categorised NotebookLM notebooks with audio overviews. To-do items are not synced to Todoist or any task system — they appear only in the triage output and the closing report so the user can handle them in Gmail. After audio is generated, fetches the notebook summary and renames each episode with a bullet-point description and sources list.
Feed config (config/feeds.json). Before any Gmail or NotebookLM step, read config/feeds.json relative to this repo (reading-with-ears/config/feeds.json when the project root is on your path). If the user has ~/.config/reading-with-ears/feeds.json, prefer that content when the skill can read it; otherwise the bundled file in the repo is the source of truth for automation.
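Based on the fields this skill references (slug, notebook_category, notebook_emoji, gmail_labels, audio_focus_prompt, enabled, notebook_order), a feeds.json entry might look like the following sketch — every value here is hypothetical:

```json
{
  "feeds": [
    {
      "slug": "news",
      "enabled": true,
      "notebook_order": 1,
      "notebook_category": "📰 News & Current Affairs",
      "notebook_emoji": "📰",
      "gmail_labels": ["newsletter/news"],
      "audio_focus_prompt": "Insight-first ~12-minute overview of today's news reads."
    }
  ]
}
```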
Only process feeds with "enabled": true; omit (treat as disabled) any feed with "enabled": false. Sort enabled feeds by notebook_order (ascending); this order defines the nn suffix in notebook titles (01, 02, …) for the target date. Each feed defines: slug, notebook_category (full title suffix), notebook_emoji, gmail_labels (array), audio_focus_prompt.
Primary Gmail search — union of all Gmail labels on enabled feeds (skip feeds with
an empty gmail_labels list for the OR query only — they still get notebooks if mail is
classified into them another way, e.g. manual triage):
label:foo OR label:bar OR … for every distinct label across enabled feeds. Example:
gmail_search_messages(q="(label:newsletter/news OR label:newsletter/think OR label:newsletter/pro OR label:newsletter/healthcare OR label:newsletter/ai-everybody) after:YYYY/MM/DD")
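The OR-query construction above can be sketched as follows; the feed records are hypothetical stand-ins for feeds.json content, and gmail_search_messages itself is the tool, not shown here:

```python
# Sketch: build the primary Gmail query from a feeds.json-style config.
feeds = [
    {"slug": "news", "enabled": True, "notebook_order": 1,
     "gmail_labels": ["newsletter/news"]},
    {"slug": "think", "enabled": True, "notebook_order": 2,
     "gmail_labels": ["newsletter/think"]},
    {"slug": "manual", "enabled": True, "notebook_order": 3,
     "gmail_labels": []},       # empty list: skipped for the OR query only
    {"slug": "archive", "enabled": False, "notebook_order": 9,
     "gmail_labels": ["newsletter/old"]},  # disabled: excluded entirely
]

def primary_query(feeds, after):
    labels = []
    for feed in sorted(feeds, key=lambda f: f.get("notebook_order", 0)):
        if not feed.get("enabled"):
            continue
        for label in feed.get("gmail_labels", []):
            if label not in labels:  # every *distinct* label, once
                labels.append(label)
    if not labels:
        return None  # misconfiguration: report to the user and stop
    ors = " OR ".join(f"label:{label}" for label in labels)
    return f"({ors}) after:{after}"

assert primary_query(feeds, "2026/04/05") == (
    "(label:newsletter/news OR label:newsletter/think) after:2026/04/05"
)
```

Returning None when no enabled feed defines a label mirrors the stop-and-report rule below.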
Classification hint: if an email carries a label that appears in a feed's gmail_labels,
default its read category to that feed's notebook_category. The content classifier
may still override when the label is clearly wrong. Starred mail without any of those
labels is a potential to-do (report only).
Sender registry: Keep reading-with-ears/data/newsletter_sender_registry.json updated;
add new senders under the right category when you adopt new labels.
If the user says "today" or gives no date, use today only. If the user gives a range ("back to 3/15", "last few days", "this week"), use the earliest day as the start.
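The date handling above reduces to formatting the earliest day into Gmail's after: syntax; a minimal helper, assuming the after:YYYY/MM/DD form used in this document's query examples:

```python
from datetime import date

def after_clause(start: date) -> str:
    # Gmail query dates use slashes and zero-padding: after:YYYY/MM/DD
    return start.strftime("after:%Y/%m/%d")

assert after_clause(date(2026, 3, 15)) == "after:2026/03/15"
```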
Primary search — use the OR-of-labels query built from enabled feeds’
gmail_labels (see above). If no enabled feed defines any label, report that misconfiguration
and stop (or fall back to starred search only if the user explicitly asks).
Fallback — unlabeled starred emails (to-do triage):
gmail_search_messages(q="is:starred after:YYYY/MM/DD")
Run both searches when applicable. Deduplicate by messageId. A starred email that does not carry any newsletter label from the feeds config should be treated as a potential to-do.
If no emails are found at all, tell the user and stop.
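Merging the two searches and deduplicating can be sketched as follows; the message dicts and label set are hypothetical stand-ins for real gmail_search_messages results:

```python
# Sketch: merge primary (labeled) and fallback (starred) results, dedupe by messageId.
primary = [{"messageId": "a1", "labels": ["newsletter/news"]},
           {"messageId": "b2", "labels": ["newsletter/think"]}]
fallback = [{"messageId": "b2", "labels": ["newsletter/think"]},  # starred AND labeled
            {"messageId": "c3", "labels": []}]                    # starred, unlabeled

seen, merged = set(), []
for msg in primary + fallback:
    if msg["messageId"] in seen:
        continue  # same message returned by both searches
    seen.add(msg["messageId"])
    merged.append(msg)

# Starred mail with no feed label is a potential to-do.
feed_labels = {"newsletter/news", "newsletter/think"}
todos = [m for m in merged if not feed_labels & set(m["labels"])]

assert [m["messageId"] for m in merged] == ["a1", "b2", "c3"]
assert [m["messageId"] for m in todos] == ["c3"]
```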
For each email returned, fetch the full message:
gmail_read_message(messageId=<id>)
Fetch ALL emails before doing anything else. This keeps tool calls front-loaded and avoids hitting the per-turn tool call limit mid-workflow.
HTML-only emails: Some emails return an empty plain-text body with a note to view the HTML version. When this happens:
use "<subject> — <sender> [body unavailable]" as the placeholder source title.
Classify every email as one of:
• Read — assign it to a feed and show that feed's notebook_category name in the triage table (e.g. 📰 News & Current Affairs, 🧠 Things to Think About, 💼 Professional Reading, 🎙️ AI is for Everybody, …); default the category from the email's matching gmail_labels.
• To-do — default to to-do if any action is implied, even loosely. Unlabeled starred emails default to to-do unless the content is unambiguously a newsletter read.
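The label-based classification default can be sketched as below, assuming a label-to-category map built from feeds.json at runtime (the mapping values here are hypothetical):

```python
# Sketch: default classification per the rules above.
LABEL_TO_CATEGORY = {  # hypothetical; built from feeds.json gmail_labels at runtime
    "newsletter/news": "📰 News & Current Affairs",
    "newsletter/think": "🧠 Things to Think About",
}

def classify(msg):
    for label in msg.get("labels", []):
        if label in LABEL_TO_CATEGORY:
            return ("read", LABEL_TO_CATEGORY[label])
    # No feed label: treat as a potential to-do (content may still override).
    return ("to-do", None)

assert classify({"labels": ["newsletter/news"]}) == ("read", "📰 News & Current Affairs")
assert classify({"labels": []}) == ("to-do", None)
```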
Build one NotebookLM section per enabled feed that has at least one email (use
notebook_category as the heading). Example shape when four feeds are enabled:
📧 Triage — [date range]
📰 News & Current Affairs (→ NotebookLM):
• …
🧠 Things to Think About (→ NotebookLM):
• …
💼 Professional Reading (→ NotebookLM):
• …
🎙️ AI is for Everybody (→ NotebookLM):
• …
📋 To-Do (→ report only — not NotebookLM, no task app):
• "Re: Resume: Dave Holmes-Kinsella" — follow up with Olivier @ Ramp
• "It's that time of year" — reply to birthday message from G
• "Your receipt from Gamma #2239-6502" — file the receipt
Proceed? (or tell me what to reclassify)
Wait for confirmation. If the user reclassifies items, re-show the updated table and confirm again before acting.
Skip empty categories silently — if there are no professional reads, don't create that notebook. Do not call any task-manager MCP for to-dos.
Create one notebook per non-empty enabled feed (non-empty = at least one email in that
feed’s read bucket). Assign nn by notebook_order among enabled feeds only:
the lowest notebook_order gets 01, the next 02, etc. (pad to two digits).
Title format:
reading-list-YYYY-MM-DD-nn <notebook_category from feeds.json for that feed>
Example with four enabled feeds:
reading-list-2026-04-05-01 📰 News & Current Affairs
reading-list-2026-04-05-02 🧠 Things to Think About
reading-list-2026-04-05-03 💼 Professional Reading
reading-list-2026-04-05-04 🎙️ AI is for Everybody
If running for a date range, use the end date (today) in the name. If a notebook with today's date already exists, increment the numeric suffix.
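The numbering rule above can be sketched as follows; the feed records are hypothetical, and a real run would count emails in each feed's read bucket:

```python
# Sketch: assign two-digit nn suffixes by notebook_order among enabled,
# non-empty feeds, and build notebook titles.
feeds = [
    {"enabled": True, "notebook_order": 2,
     "notebook_category": "🧠 Things to Think About", "emails": 3},
    {"enabled": True, "notebook_order": 1,
     "notebook_category": "📰 News & Current Affairs", "emails": 5},
    {"enabled": True, "notebook_order": 3,
     "notebook_category": "💼 Professional Reading", "emails": 0},  # empty: no notebook
]

def notebook_titles(feeds, end_date):
    live = [f for f in feeds if f["enabled"] and f["emails"] > 0]
    live.sort(key=lambda f: f["notebook_order"])
    return [f"reading-list-{end_date}-{i:02d} {feed['notebook_category']}"
            for i, feed in enumerate(live, start=1)]

assert notebook_titles(feeds, "2026-04-05") == [
    "reading-list-2026-04-05-01 📰 News & Current Affairs",
    "reading-list-2026-04-05-02 🧠 Things to Think About",
]
```

Note the suffix counts only enabled, non-empty feeds, so skipping an empty category never leaves a gap in the numbering.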
Create it (substitute the feed’s notebook_category):
notebook_create(title="reading-list-YYYY-MM-DD-nn <notebook_category>")
Add each email as a text source:
source_add(
notebook_id=<id>,
source_type="text",
title="<subject> — <sender> (<date>)",
text="From: <sender>\nDate: <date>\n\n<body>",
wait=True
)
Generate Audio Overview after all sources are loaded — use that feed’s
audio_focus_prompt from feeds.json (verbatim). If missing, fall back to a single
generic ~12-minute insight-first prompt.
studio_create(
notebook_id=<id>,
artifact_type="audio",
audio_format="deep_dive",
audio_length="long",
focus_prompt="<audio_focus_prompt from the matching feed>",
confirm=True
)
This workflow is tool-call-intensive; to stay within per-turn limits, batch each phase's tool calls together rather than interleaving phases.
After firing all audio generations, call studio_status on each notebook to confirm completion, then call notebook_describe to get the AI-generated summary. Use this to rename each artifact with a rich title, bullet-point key ideas, and sources line.
For each notebook:
studio_status(notebook_id=<id>) # get artifact_id and confirm completed
notebook_describe(notebook_id=<id>) # get AI summary to distill into bullets
Then rename:
studio_status(
notebook_id=<id>,
action="rename",
artifact_id=<artifact_id>,
new_title="<NotebookLM-generated title>\n\n• <key idea 1>\n• <key idea 2>\n• <key idea 3>\n\nSources: <Newsletter (topic)> · <Newsletter (topic)> · ..."
)
Sources line format: Newsletter Name (topic shorthand) · Newsletter Name (topic) · ...
Example renamed episode title:
How Power Profits From Manufactured Chaos
• AI companies are uniquely selling a product framed around existential risk and job displacement — yet people are buying it anyway
• The Iran war is functioning as a massive regressive global tax, hitting import-dependent nations hardest while the US gets off relatively easy
• Trump's second-term economic instability is self-generated, not inherited — a structural shift from his first term
• The White House governs through a repeating pattern: manufacture crisis → demand concessions → claim victory
Sources: Noahpinion (AI's worst sales pitch; Iran war economics) · Jonah Goldberg / The Dispatch (Two Trump economies) · The Atlantic Daily (Trump's bailout pattern)
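Assembling the multi-line new_title from the distilled summary might look like the following sketch (the helper name and inputs are hypothetical):

```python
# Sketch: build the rename payload — title, blank line, bullets, blank line, sources.
def episode_title(title, bullets, sources):
    lines = [title, ""]
    lines += [f"• {b}" for b in bullets]
    lines += ["", "Sources: " + " · ".join(sources)]
    return "\n".join(lines)

t = episode_title(
    "How Power Profits From Manufactured Chaos",
    ["key idea 1", "key idea 2"],
    ["Noahpinion (AI's worst sales pitch)", "The Atlantic Daily (bailout pattern)"],
)
assert t.startswith("How Power Profits From Manufactured Chaos\n\n• key idea 1")
assert t.endswith("Sources: Noahpinion (AI's worst sales pitch) · The Atlantic Daily (bailout pattern)")
```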
After all episodes are titled, download each completed audio file to the iCloud Personal Podcast folder so Phase 2 (Element.fm upload) can pick them up.
download_artifact(
notebook_id=<id>,
artifact_id=<artifact_id>,
output_path="~/Library/Mobile Documents/com~apple~CloudDocs/Personal Podcast/YYYY-MM-DD-<slug>.mp3"
)
Filename slugs (feeds.json). Use each feed's slug field for the filename: YYYY-MM-DD-<slug>.mp3
(e.g. news, think, professional, vital-signs, ai-everybody). Phase 2 (publish_episodes.py) uses the
same slugs for enabled feeds.
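Building the output path from the end date and slug can be sketched as below, assuming the iCloud folder shown in the download_artifact call:

```python
from pathlib import Path

def episode_path(end_date: str, slug: str) -> Path:
    # iCloud Drive "Personal Podcast" folder consumed by Phase 2 (publish_episodes.py)
    base = Path.home() / "Library/Mobile Documents/com~apple~CloudDocs/Personal Podcast"
    return base / f"{end_date}-{slug}.mp3"

p = episode_path("2026-04-05", "news")
assert p.name == "2026-04-05-news.mp3"
```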
✅ Done — [date range]
📚 NotebookLM — N notebooks created (one line per enabled feed that had mail):
• "reading-list-YYYY-MM-DD-01 …" → K sources
🎧 "<episode title>" → 📥 YYYY-MM-DD-<slug>.mp3
• (repeat for each feed)
📋 To-dos (handle in Gmail — not exported to a task app):
• "Re: Resume …" — follow up
• …
Nothing was deleted or archived in Gmail.
Audio files downloaded to iCloud Personal Podcast — ready for Phase 2 (Element.fm upload).
If a feed defines no gmail_labels in feeds.json, note the gap to the user. Empty gmail_labels means the primary search won't catch that feed's mail; the user must route content manually or add labels before relying on automation.