Aggregate activity from all data sources into an Obsidian digest document. Supports daily and weekly cadences. Use when the user says "digest", "what happened today", "what happened this week", "/digest", "/digest --daily", "/digest --weekly", "/ws", "/weekly-status", "daily digest", "weekly digest", "aggregate status", "build my status", or asks to compile cross-source activity.
Produce a comprehensive, topic-grouped activity digest from all data sources. Daily mode captures the last 24 hours; weekly mode captures the last 7 days. Both modes write a synthesized document to Obsidian.
This is Layer 1 of a two-layer architecture: the digest skill gathers and synthesizes cross-source activity; consumer skills (weekly-priorities, search, meeting-prep, daily-briefing) read the output and enrich with targeted queries. Weekly mode produces the pre-synthesized status document that /wp reads as its first input.
Style: Follow all rules in the writing-style skill. No em dashes, no emojis, concise and direct.
Parse $ARGUMENTS for cadence flags:
--daily or no flag: daily mode (default). Covers the past 24 hours.
--weekly: weekly mode. Covers the past 7 days.
/ws and /weekly-status invocations always map to weekly mode (the ws.md command file appends --weekly to arguments).
Set the following variables based on the resolved cadence:
| Variable | Daily Mode | Weekly Mode |
|---|---|---|
| [TIME_WINDOW] | past 24 hours | past 7 days |
| [SOQL_DATE_FILTER] | LAST_N_DAYS:1 | LAST_N_DAYS:7 |
| [OUTPUT_DIR_SLUG] | Today's date (YYYY-MM-DD) | Monday date (YYYY-MM-DD, computed by boundary rule below) |
| [OBSIDIAN_PATH] | Regal/Daily/Digests/MM-DD-YY - Daily Digest.md | Regal/Weekly/MM.DD-MM.DD/Weekly Status.md (Monday-Friday of the work week, computed by boundary rule below) |
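Flag resolution is simple enough to sketch (a minimal illustration; the function name is ours, not part of the skill):

```python
def resolve_cadence(arguments: str) -> str:
    """--weekly wins when present; daily is the default.
    /ws and /weekly-status arrive with --weekly already appended
    by the ws.md command file, so they resolve through the same path."""
    return "weekly" if "--weekly" in arguments.split() else "daily"

# resolve_cadence("")          -> "daily"
# resolve_cadence("--weekly")  -> "weekly"
```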
Compute the Monday and Friday dates for the work week in weekly mode:
| Current Day | Monday Rule | Friday Rule | Filename Example |
|---|---|---|---|
| Monday | Use today | Today + 4 | Mon Mar 2 → 03.02-03.06 |
| Tuesday | This week's Monday | This week's Friday | Tue Mar 3 → 03.02-03.06 |
| Wednesday | This week's Monday | This week's Friday | Wed Mar 4 → 03.02-03.06 |
| Thursday | This week's Monday | This week's Friday | Thu Mar 5 → 03.02-03.06 |
| Friday | This week's Monday | Today | Fri Mar 6 → 03.02-03.06 |
| Saturday | Next Monday | Next Friday | Sat Mar 7 → 03.09-03.13 |
| Sunday | Next Monday | Next Friday | Sun Mar 8 → 03.09-03.13 |
The full weekly Obsidian path is: Regal/Weekly/MM.DD-MM.DD/Weekly Status.md
Example: if today is any day Mon-Fri during the week of March 2-6, the file is Regal/Weekly/03.02-03.06/Weekly Status.md.
[OUTPUT_DIR_SLUG] still uses the Monday date (YYYY-MM-DD) for the /tmp/digest/ output directory.
Compute the dates first, then use them as the filename. Do not list the directory to find an existing file and reuse its name. The computed date range is the single source of truth for the output path.
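The boundary rule can be sketched as follows (hypothetical helpers, shown only to make the table concrete):

```python
from datetime import date, timedelta

def week_bounds(today: date) -> tuple[date, date]:
    """Return (Monday, Friday) per the boundary rule: Mon-Fri map to
    the current work week; Sat and Sun roll forward to next week."""
    dow = today.weekday()                       # Mon=0 ... Sun=6
    if dow >= 5:                                # Saturday or Sunday
        monday = today + timedelta(days=7 - dow)
    else:                                       # Monday through Friday
        monday = today - timedelta(days=dow)
    return monday, monday + timedelta(days=4)

def weekly_slug(monday: date, friday: date) -> str:
    """Format the MM.DD-MM.DD folder name, e.g. 03.02-03.06."""
    return f"{monday:%m.%d}-{friday:%m.%d}"
```

Using the table's examples: a Wednesday of Mar 4 yields Mar 2 through Mar 6, while Saturday Mar 7 rolls forward to Mar 9 through Mar 13.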
All agents dispatch in both daily and weekly modes. The table below lists eight sub-agents; each locator-analyzer pair covers a single source, so together they span six data sources.
| Agent | Role | max_turns |
|---|---|---|
| session-log-researcher | Claude Code session activity | 5 |
| salesforce-researcher | Opportunity updates, Tasks, stage changes | 8 |
| transcript-locator | Discover external meeting transcript files from Regal/Granola/ | 7 |
| transcript-analyzer | Extract decisions, action items, and outcomes from located transcripts | 8 |
| slack-researcher | Deal-related Slack discussions and Sahil's messages | 8 |
| obsidian-locator | Discover which account folders have recent files in Regal/Accounts/ | 5 |
| obsidian-analyzer | Deep-read recently modified account files. Receives obsidian-locator file paths. | 8 |
| google-workspace-researcher | Gmail threads, Calendar events, Drive docs | 8 |
| Aspect | Daily Mode | Weekly Mode |
|---|---|---|
| Time window | Last 24 hours | Last 7 days |
| Agent count | 6 (all agents) | 6 (all agents) |
| Synthesis depth | Sections 1-6, 10, and 11 of synthesis-protocol.md (Section 11 replaces Section 5) | Sections 1-8 and 10 of synthesis-protocol.md (weekly-sections.md classification replaces Sections 5 and 11) |
| Supplementary input | None | Read daily digests from past week in Regal/Daily/Digests/ |
| Output location | Regal/Daily/Digests/MM-DD-YY - Daily Digest.md | Regal/Weekly/MM.DD-MM.DD/Weekly Status.md (Mon-Fri per boundary rule) |
| Write behavior | Append if same-date file exists | Always replace (clean snapshot) |
Run a single Salesforce query to get Nick's current active opportunity account names before dispatching sub-agents:
Task(subagent_type="salesforce-researcher", prompt="""
Run this SOQL query and return the results:
SELECT Account.Name, Name, Amount, StageName, CloseDate
FROM Opportunity
WHERE OwnerId IN (SELECT Id FROM User WHERE Name = 'Nick Yebra')
AND IsClosed = false
ORDER BY Amount DESC
""")
Store the returned account names and amounts. Pass them to all sub-agents in Phase 1 so search terms stay current as the book changes. This query takes ~1 second and prevents hardcoded account names from going stale.
Always dispatch obsidian-analyzer, transcript-locator, and transcript-analyzer normally in Phase 1. All three agents handle their own Obsidian MCP availability detection and fall back to filesystem reads automatically when MCP is unavailable. No orchestrator-level fallback logic is needed.
Read tools/references/output-persistence.md for the pattern and fallback protocol.
Before dispatching any agents, create the output directory:
mkdir -p /tmp/digest/[OUTPUT_DIR_SLUG]
Agent-to-file mapping:
| Agent | Output File |
|---|---|
| session-log-researcher | session-log-researcher.md |
| salesforce-researcher | salesforce-researcher.md |
| transcript-locator | transcript-locator.md |
| transcript-analyzer | transcript-analyzer.md |
| slack-researcher | slack-researcher.md |
| obsidian-locator | obsidian-locator.md |
| obsidian-analyzer | obsidian-analyzer.md |
| google-workspace-researcher | google-workspace-researcher.md |
Dispatch these 4 agents in a SINGLE message for parallel execution. Each prompt includes the active account names from Phase 0 and uses the cadence-appropriate time window. obsidian-locator replaces obsidian-analyzer in this wave (see tools/references/locator-analyzer-guidelines.md, Pattern B).
Before dispatching: Replace [ACTIVE_ACCOUNT_NAMES] with the comma-separated account list from Phase 0. Replace [TIME_WINDOW], [SOQL_DATE_FILTER], and [OUTPUT_DIR_SLUG] per the Flag Parsing variables table.
Task(subagent_type="salesforce-researcher", max_turns=8, prompt="""
Find all Salesforce activity from the [TIME_WINDOW] for Nick Yebra. Search for:
1. Opportunities updated recently (any stage change, amount change, or close date change):
SELECT Name, StageName, Amount, CloseDate, LastModifiedDate, Account.Name
FROM Opportunity
WHERE OwnerId IN (SELECT Id FROM User WHERE Name = 'Nick Yebra')
AND LastModifiedDate = [SOQL_DATE_FILTER]
2. Tasks completed or created recently:
SELECT Subject, Status, WhatId, What.Name, ActivityDate
FROM Task
WHERE OwnerId IN (SELECT Id FROM User WHERE Name = 'Nick Yebra')
AND CreatedDate = [SOQL_DATE_FILTER]
Active accounts to focus on: [ACTIVE_ACCOUNT_NAMES]
Return the TOP 3 most significant changes per account. Skip routine system updates.
Also note any upcoming close dates within 30 days as forward signals.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/salesforce-researcher.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
""")
Task(subagent_type="transcript-locator", max_turns=7, prompt="""
Find external customer and prospect meeting transcript files from the [TIME_WINDOW] in the Obsidian vault.
Search in Regal/Granola/ for recent files (created or modified in the [TIME_WINDOW]).
CRITICAL: Filter to EXTERNAL meetings only. Skip all of the following:
- Internal 1:1s (Nick/Sahil, Nick/Sean, sales team syncs)
- Team meetings, all-hands, training sessions, onboarding calls
- Any meeting where ALL attendees have @regalvoice.com or @regal.ai email domains
Active accounts to focus on: [ACTIVE_ACCOUNT_NAMES]
Return a list of file paths, dates, account names, and attendees for each qualifying transcript. Do NOT read full transcript contents.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/transcript-locator.md
Format as markdown with all file paths, dates, and account names.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: By your 3rd tool call, IMMEDIATELY write your current findings to the output file.
Do NOT wait until all research is complete. A partial file is infinitely better than no file.
""")
Task(subagent_type="obsidian-locator", max_turns=5, prompt="""
Search Regal/Accounts/ for files modified in the [TIME_WINDOW]. Also check Regal/TASKS.md existence.
Active accounts to focus on: [ACTIVE_ACCOUNT_NAMES]
For each active account, list files in Regal/Accounts/[ACCOUNT_NAME]/ and note which have been recently modified. Return file paths and modification dates only. Do NOT read file contents.
Also check if Regal/TASKS.md exists and note its path.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/obsidian-locator.md
Format as markdown with all file paths, modification dates, and account names.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: By your 3rd tool call, IMMEDIATELY write your current findings to the output file.
Do NOT wait until all research is complete. A partial file is infinitely better than no file.
""")
Task(subagent_type="google-workspace-researcher", max_turns=8, prompt="""
Search Google Workspace for Nick Yebra's activity from the [TIME_WINDOW].
Search Gmail for:
- Email threads sent or received about active deals and accounts: [ACTIVE_ACCOUNT_NAMES]
- Notable correspondence with prospects or partners
- Internal emails about deal progress or blockers
Search Calendar for:
- Meetings attended in the [TIME_WINDOW] (external and internal)
- Upcoming meetings scheduled for the next 7 days
Search Drive for:
- Documents created or modified in the [TIME_WINDOW] related to deals or accounts
Return the top 3 most significant email threads per account.
For calendar: list all upcoming external meetings as forward signals.
Skip routine calendar events (daily standups, recurring 1:1s with internal team).
IMPORTANT: Every workspace-mcp tool call requires user_google_email: [email protected]
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/google-workspace-researcher.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
""")
Wait for all 4 Phase 1a agents to return before dispatching Phase 1b. This reduces concurrent Task dispatches from 7 to 4, mitigating the Task tool parallel dispatch race condition (#14055). Before dispatching Phase 1b, read obsidian-locator output from /tmp/digest/[OUTPUT_DIR_SLUG]/obsidian-locator.md and transcript-locator output from /tmp/digest/[OUTPUT_DIR_SLUG]/transcript-locator.md. Extract discovered file paths from each.
If obsidian-locator found files, pass those paths to obsidian-analyzer's dispatch prompt.
If obsidian-locator found no files, skip obsidian-analyzer and note [NO OBSIDIAN DATA].
If obsidian-locator failed entirely, dispatch obsidian-analyzer without path pre-filtering (graceful degradation).
If transcript-locator found transcript files, pass those paths to transcript-analyzer's dispatch prompt.
If transcript-locator found no transcripts, skip transcript-analyzer and note [NO TRANSCRIPT DATA].
If transcript-locator failed entirely (crash, timeout, or tool_uses: 0), skip transcript-analyzer and note [TRANSCRIPT LOCATOR FAILED].
Dispatch these agents in a SINGLE message for parallel execution (session-log-researcher, slack-researcher, obsidian-analyzer, and transcript-analyzer if gated in).
Task(subagent_type="session-log-researcher", max_turns=5, prompt="""
Extract a summary of all Claude Code activity from the [TIME_WINDOW].
Focus on: accounts researched, deals worked, emails drafted, demo requests created,
briefs updated, plans implemented. Return the structured summary format from your instructions.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/session-log-researcher.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
""")
Task(subagent_type="slack-researcher", max_turns=8, prompt="""
Search Slack for the [TIME_WINDOW]. Prioritize in this order:
1. HIGHEST PRIORITY - Sahil Mehta's messages to or about Nick Yebra:
- Search for messages from Sahil mentioning Nick, accounts, deals, or priorities
- Search Nick's DM thread with Sahil for any questions, requests, or feedback
- Any account Sahil asked about or flagged MUST appear in findings
2. Nick Yebra's deal-related messages:
- Search for messages from Nick about specific accounts: [ACTIVE_ACCOUNT_NAMES]
- Threads where Nick discussed deal progress, blockers, or next steps
3. Only if the above are sparse - broaden to deal channels Nick is in.
For each finding, extract: account name, what was discussed, commitments made, blockers mentioned.
Return the top 3 most significant discussions per account, with Sahil's requests/questions flagged prominently.
Also note any forward-looking commitments ("I'll send X by Friday", "let's schedule Y").
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/slack-researcher.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
""")
Only dispatch if transcript-locator found transcript files. If transcript-locator returned no files, skip and note [NO TRANSCRIPT DATA]. If transcript-locator crashed, skip and note [TRANSCRIPT LOCATOR FAILED].
Task(subagent_type="transcript-analyzer", max_turns=8, prompt="""
Read the following external meeting transcripts and extract key details from each.
Transcript files to read: [TRANSCRIPT_LOCATOR_PATHS]
Active accounts to focus on: [ACTIVE_ACCOUNT_NAMES]
If more than 15 transcripts were found, read only the 15 most recent ones.
For each qualifying external meeting, extract:
- Date and time
- Account/company name
- Attendees (with company/role)
- Key decisions or outcomes
- Action items and commitments made
- Next steps agreed upon
- Any scheduled follow-ups
Return the top 3 most significant meetings with full details, then brief one-liners for the rest.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/digest/[OUTPUT_DIR_SLUG]/transcript-analyzer.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
""")
After all agents return, collect their findings using the three-tier fallback:
1. Run ls /tmp/digest/[OUTPUT_DIR_SLUG]/ to check which files were written.
2. For any agent whose file is missing or empty, fall back to its returned text summary per tools/references/output-persistence.md.
3. If both the file and the returned summary are missing, mark [DATA INCOMPLETE: <agent-name>] and proceed.
Weekly mode only: Before proceeding to Phase 2, read all daily digest files from the past 7 days in Regal/Daily/Digests/ via Obsidian MCP (mcp__obsidian__obsidian_get_file_contents) or filesystem fallback at /Users/nick.yebra/Library/Mobile Documents/iCloud~md~obsidian/Documents/Core Vault/Regal/Daily/Digests/. Files use the naming convention MM-DD-YY - Daily Digest.md. These provide pre-correlated findings as supplementary context for synthesis. If no daily digests exist, proceed normally with the 6 agents' findings alone.
STOP and assess before doing any additional research. Check:
If all boxes are checked: PROCEED to Phase 2 synthesis immediately. Do NOT launch additional research.
Hard guardrail: If you have completed Phase 1 and are considering dispatching more agents, you are in research rabbit-hole territory. Produce the digest now with what you have. Use [DATA INCOMPLETE: <source>] for any gaps.
After collecting all agent findings (and daily digests for weekly mode), read tools/references/synthesis-protocol.md and apply its rules per the cadence's consumption rules in Section 9 (weekly mode additionally reads references/weekly-sections.md). The output format depends on cadence; see the Document Structure section for the appropriate template.
Topic Inference: Apply Section 11 of synthesis-protocol.md. Scan all agent findings and cluster activity into topics/projects using shared context, shared participants, shared timeline, and non-account work signals. Topics should contain 2-10 bullets each. Account-specific activity that doesn't connect to a broader theme falls back to an account-named topic (e.g., "Camping World: Discovery Phase").
Action Items: Scan all agent findings for commitments, follow-ups, and deliverables. Each gets: checkbox, description, account name in parentheses, source tag, date/deadline if known. Deduplicate across sources.
Decisions Made: Apply Section 10 of synthesis-protocol.md (flat-list format). Extract conclusions reached, approvals/rejections, and direction changes. Each gets: account name, what was decided, source tag. Cap at 5-7 decisions.
Replace the daily Topic Inference, Action Items, and Decisions Made steps with the activity-type classification workflow:
Read section template: Read references/weekly-sections.md for section definitions and classification rules.
Activity-type classification: Classify each activity bullet from agent findings into exactly one primary section using the ordered classification algorithm (9 rules, first match wins).
Temporal classification: Within each pipeline section, classify bullets into Completed / Priority To-Do / Overdue using the temporal classification rules from the reference file.
Cross-cutting aggregation: Build This Week and Last Week by aggregating ALL items from pipeline sections. This Week gets all Priority To-Do / Next Steps items; Last Week gets all Completed items. Items appear in both the cross-cutting section and their pipeline section (dual listing). Order by deal size (largest first), then recency. No filtering or cap on count.
Decisions Made: Apply Section 10 extraction rules, structured with account/project sub-headings using #### headers. This intentionally overrides Section 10's flat-list format for weekly scannability; Section 10's flat-list format still applies to daily mode. Cap at 10-15 decisions.
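The ordering rule in the aggregation step (largest deal first, then recency) can be expressed with two stable sorts; the item fields here are assumptions:

```python
def order_items(items: list[dict]) -> list[dict]:
    """Sort by deal size descending, breaking ties by recency.
    Python's sort is stable, so the later sort dominates and the
    earlier one decides ties."""
    items = sorted(items, key=lambda i: i["date"], reverse=True)   # recency
    items.sort(key=lambda i: i["amount"], reverse=True)            # deal size
    return items
```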
After completing the mode-specific steps above, apply these steps to all bullets regardless of cadence:
Cross-cutting deduplication (daily mode only): When an action item or decision also appears in the topic-grouped body, the topic body bullet should reference upward: "(see Action Items above)" or "(see Decisions Made above)" instead of repeating the full detail. In weekly mode, dual listing between cross-cutting and pipeline sections is intentional; skip this step.
Correlation and dedup: Apply Sections 1-2. Same fact from multiple sources becomes one bullet with multiple tags. Positive finding wins over silence.
Source citations: Apply Section 6. Every bullet gets inline source tags: [SF], [Slack], [transcript], [email], [Obsidian], [Drive], [session], [calendar].
Forward signals: Apply Section 4. Calendar events, close dates within 30 days, commitments with deadlines, scheduled follow-ups. These appear as bullets within the relevant topic or activity-type section.
Contradiction handling (weekly mode only): Apply Sections 7-8. Flag conflicts with both values and source tags. Show confidence tags only on [CONFLICT] markers.
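A minimal sketch of the correlation-and-dedup step, assuming each finding has already been normalized to a (fact, source-tag) pair:

```python
def merge_sources(findings: list[tuple[str, str]]) -> list[tuple[str, list[str]]]:
    """Collapse the same fact reported by multiple sources into one
    bullet carrying every source tag, preserving first-seen order."""
    merged: dict[str, list[str]] = {}
    for fact, tag in findings:
        tags = merged.setdefault(fact, [])
        if tag not in tags:
            tags.append(tag)
    return list(merged.items())
```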
Compute the output path. Use the [OUTPUT_DIR_SLUG] date from the Flag Parsing table (for weekly mode, apply the Weekly Date Boundary Rule). Build the full path:
Daily: Regal/Daily/Digests/[OUTPUT_DIR_SLUG] - Daily Digest.md (where [OUTPUT_DIR_SLUG] is formatted as MM-DD-YY)
Weekly: Regal/Weekly/[WEEKLY_FILENAME_RANGE]/Weekly Status.md (where [WEEKLY_FILENAME_RANGE] is the MM.DD-MM.DD Monday-Friday range from the Weekly Date Boundary Rule)
Check if the target directory exists via obsidian_list_files_in_dir. For daily mode, Regal/Daily/Digests/ may not exist yet; create it if needed.
Daily mode (append-safe): Check if a file at the exact computed path already exists. If it does, read it via obsidian_get_file_contents, then append new content below a --- divider with a timestamp header: ## Updated [HH:MM]. If it does not exist, create a new file using obsidian_append_content with the full document.
Weekly mode (always replace): Delete any existing file at the exact computed path (via obsidian_delete_file), then create a fresh file using obsidian_append_content with the full document. Do NOT append to existing weekly files. A weekly re-run produces a clean snapshot, not stacked content. Do NOT list the directory to find an existing file; use only the computed path from step 1.
Filesystem fallback: If Obsidian MCP is unavailable (connection refused), write directly to:
Daily: /Users/nick.yebra/Library/Mobile Documents/iCloud~md~obsidian/Documents/Core Vault/Regal/Daily/Digests/
Weekly: /Users/nick.yebra/Library/Mobile Documents/iCloud~md~obsidian/Documents/Core Vault/Regal/Weekly/[WEEKLY_FILENAME_RANGE]/
Apply the same daily-append / weekly-replace semantics using filesystem Read and Write tools.
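Under the filesystem fallback, the daily-append / weekly-replace semantics might look like this (a sketch; the function name is ours):

```python
import os

def write_digest(path: str, content: str, cadence: str, hhmm: str) -> str:
    """Daily mode appends below a divider when the file already exists;
    weekly mode always replaces, producing a clean snapshot."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if cadence == "daily" and os.path.exists(path):
        with open(path, "a") as f:
            f.write(f"\n---\n## Updated {hhmm}\n\n{content}")
        return "appended"
    with open(path, "w") as f:      # create new, or weekly clean replace
        f.write(content)
    return "replaced"
```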
Write tool constraint: The Write tool requires a prior Read of any existing file before it will write. For weekly-replace: use Glob to check if the file exists, then Read it (even though you will overwrite it entirely), then Write the full new content. Skipping the Read will produce a "File has not been read yet" error.
The output format depends on cadence. Daily mode uses the topic-grouped template below. Weekly mode uses the fixed-section template defined in references/weekly-sections.md. Read the appropriate template before writing.
# Daily Digest - [DATE]
## Action Items
- [ ] [Description] ([account]) [source tag]
- [ ] [Description] ([account]) [source tag]
## Decisions Made
- [Account]: [What was decided] [source tag]
- [Account]: [What was decided] [source tag]
---
## [Topic/Project Name]
- [source tag] ([Account]) [Activity bullet] ([date])
- [source tag] ([Account]) [Activity bullet] ([date])
- [Forward signal or commitment]
## [Next Topic/Project Name]
- [source tag] ([Account]) [Activity bullet] ([date])
...
## Toolkit & Internal
- [session] [Non-account-specific activity]
---
[X] action items · [Y] decisions · [Z] topics · [W] source conflicts
Across [N] sources · Covering [time range]
*Sources: SF, Slack, Obsidian, Gmail, Calendar, Drive, Session Logs, Transcripts*
Topic section ordering: Most active topic first (by activity volume), reverse-chronological within each topic. Account names appear in parentheses on every bullet.
Cross-cutting references: When a bullet corresponds to an Action Item or Decision from the sections above, reference upward: "(see Action Items above)" or "(see Decisions Made above)" instead of repeating the detail.
Summary stats footer: Counts computed during synthesis. "Source conflicts" only shown if >0; otherwise omit that count.
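The footer rule can be sketched as (field names assumed):

```python
def stats_footer(actions: int, decisions: int, topics: int, conflicts: int = 0) -> str:
    """Build the counts line; the conflicts count is shown only
    when it is greater than zero."""
    parts = [f"{actions} action items", f"{decisions} decisions", f"{topics} topics"]
    if conflicts > 0:
        parts.append(f"{conflicts} source conflicts")
    return " · ".join(parts)
```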
Read references/weekly-sections.md for the full template and classification rules. The template defines 11 fixed sections organized by activity type, with Completed/Priority To-Do/Overdue subsections for pipeline sections. Sources appear in the footer, not the header.
The summary stats footer uses [Z] sections with activity (count of non-empty sections out of 11) instead of [Z] topics.
Do NOT set run_in_background: true for any sub-agent dispatch. MCP tools are unavailable in background subagents.
Mark any missing source with [DATA INCOMPLETE].
Read tools/references/synthesis-protocol.md and apply every applicable rule.
Every workspace-mcp tool call requires user_google_email: [email protected].
The fixed sections in references/weekly-sections.md are the primary grouping axis in weekly mode. Account names appear as #### sub-headings within cross-cutting sections (This Week, Last Week, Decisions Made) and as inline (Account) tags within pipeline section bullets.
After writing to Obsidian, report the output path and suggest the next step:
Weekly mode: "Run /wp to draft priorities from this status doc."
Daily mode: "Run /briefing to get your morning priorities."