Generate customer-facing proposal decks with calculated pricing, ROI analysis, and order form checklists. Use when the user says "create a proposal", "/proposal", "/create-proposal", "proposal for [prospect]", "pricing deck for [prospect]", "generate a proposal".
Generate customer-facing proposal decks by copying and populating the Pricing & Proposal Template in Google Slides. Produces three deliverables: a branded proposal deck (Google Slides URL), an ROI calculator (Google Sheet URL), and an order form input checklist (text).
Four phases executed in sequence:
Main context handles:
- Phase 1 intake (interactive user input collection)
- Phase 3 content synthesis, pricing calculation, and user confirmation
- Phase 4 template copy, batch update execution, and final delivery
- ROI engine pipeline (direct Python execution via Bash)
- org-context file reads (Step 2b, 5 small files)
- Error handling and graceful degradation
Delegate to sub-agents: All prospect research (5 agents dispatched in parallel in Phase 2, Step 2a). Each agent writes findings to /tmp/create-proposal/<account-slug>/<agent>.md.
NEVER in main context:
- Batch-read Salesforce, Obsidian account files, or transcripts directly (delegate to agents).
- Call 3+ MCP research tools directly (spawn sub-agents instead).
- Dispatch agents with run_in_background: true (MCP tools unavailable in background).
| Agent | Role | max_turns |
|---|---|---|
| salesforce-researcher | Account, Opportunity, Contact, Task data | 8 |
| obsidian-locator | Discover account files in Obsidian | 5 |
| obsidian-analyzer | Account files, prior briefs, org-context | 8 |
| transcript-locator | Find call transcripts mentioning the prospect | 7 |
| transcript-analyzer | Volume data, pricing discussions from calls | 8 |
| company-researcher | Company background, tech stack, news, logo URL | 12 |
| edgar-researcher | SEC filings for public companies | 8 |
Read tools/references/output-persistence.md for the pattern and fallback protocol.
Pre-dispatch directory creation:
rm -rf /tmp/create-proposal/<account-slug> && mkdir -p /tmp/create-proposal/<account-slug>
Agent-to-file mapping:
| Agent | Output File |
|---|---|
| salesforce-researcher | salesforce-researcher.md |
| obsidian-locator | obsidian-locator.md |
| obsidian-analyzer | obsidian-analyzer.md |
| transcript-locator | transcript-locator.md |
| transcript-analyzer | transcript-analyzer.md |
| company-researcher | company-researcher.md |
| edgar-researcher | edgar-researcher.md |
Persistence block (include verbatim in every dispatch prompt, replacing placeholders):
Output Persistence:
Write your complete, detailed findings to: /tmp/create-proposal/<account-slug>/<agent-name>.md
Format as markdown with all data, evidence, URLs, and citations.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write
your current findings to the output file. Do NOT wait until all research is
complete. Continue researching and UPDATE the file with additional findings.
The initial write is your insurance against output loss. A partial file is
infinitely better than no file.
Three-tier fallback after dispatch:
| Tier | Condition | Action |
|---|---|---|
| 1 | File exists at /tmp/create-proposal/<slug>/<agent>.md | Read it (primary source) |
| 2 | File missing, Task result >500 chars | Use Task result as fallback |
| 3 | Both missing or truncated | Note [DATA INCOMPLETE: <agent>], proceed with available data |
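The tier checks above can be sketched in Python. The helper name and return shape are illustrative, not part of the skill's tooling; the >500-char threshold comes from the table:

```python
from pathlib import Path


def resolve_agent_output(slug: str, agent: str, task_result: str) -> tuple[str, str]:
    """Three-tier fallback: output file, then Task result, then incomplete marker."""
    path = Path(f"/tmp/create-proposal/{slug}/{agent}.md")
    if path.exists():                           # Tier 1: file is the primary source
        return path.read_text(), "file"
    if len(task_result) > 500:                  # Tier 2: substantial Task result
        return task_result, "task-result"
    # Tier 3: note the gap and proceed with whatever data is available
    return f"[DATA INCOMPLETE: {agent}]", "incomplete"
```

The second tuple element records which tier supplied the data, so downstream synthesis can cite its provenance.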
Pre-populate all intake fields from existing CRM and vault data before asking the user to confirm. NEVER present blank defaults when real data is available.
Extract the prospect name from $ARGUMENTS. If no prospect name provided, ask:
"Which prospect should I create the proposal for?"
Run these in parallel (single message) to pre-fill the intake table:
Salesforce query via mcp__Salesforce-Search__salesforce_query_records:
SELECT Id, Name, Industry,
(SELECT Id, Name, StageName, Amount, AI_Agent_Use_Case__c, CloseDate
FROM Opportunities WHERE IsClosed = false ORDER BY CreatedDate DESC LIMIT 1),
(SELECT Id, FirstName, LastName, Title, Email
FROM Contacts WHERE Primary_Contact__c = true LIMIT 1)
FROM Account
WHERE Name LIKE '%<prospect>%'
LIMIT 1
If the nested query fails (some orgs restrict subqueries), run two separate queries:
SELECT Id, Name, StageName, Amount, AI_Agent_Use_Case__c FROM Opportunity WHERE Account.Name LIKE '%<prospect>%' AND IsClosed = false ORDER BY CreatedDate DESC LIMIT 1

Obsidian account folder via mcp__obsidian__obsidian_list_files_in_dir for Regal/Accounts/<prospect>/. If the folder exists, read ALL files in recency-priority order:
Priority 1: Meeting Transcripts (latest to oldest). List Meeting Transcripts/ subfolder, sort by date (filename or modified date), read newest first. These are the freshest signal and the primary source for: volumes discussed, AI minute estimates, call duration benchmarks, telephony mentions (SIP/VoIP), contact names and titles, use case details, deal-specific data. Stop reading older transcripts once all intake fields are populated.
Priority 2: Other account docs (latest to oldest). Read remaining files in the account folder (deck outlines, meeting prep docs, demo agent prompts, status files, etc.) sorted newest-first. These fill gaps the transcripts didn't cover: industry context, competitive landscape, pain points, internal Regal notes.
Priority 3: Context.md (base layer). Read last. Context.md is a synthesized brief that may contain outdated information. It serves as fallback for any field still missing after Priorities 1-2. When a transcript contradicts Context.md, the transcript wins.
Key principle: Later information supersedes earlier information. A volume number from last week's transcript overrides a volume number from Context.md written two months ago.
If Obsidian MCP is unavailable, fall back to filesystem at $VAULT_ROOT/Regal/Accounts/<prospect>/.
Map discovered data to intake fields. Priority: latest transcript > older transcripts > other account docs > Context.md > SF fields > smart defaults. First non-null wins at each priority level.
| Field | P1: Transcripts (latest first) | P2: Account docs | P3: Context.md | P4: SF | Default |
|---|---|---|---|---|---|
| Use case | Use case discussed in calls | Deck outline, demo agent | Use case section | AI_Agent_Use_Case__c | Inbound |
| Key contact | Participants, named contacts | Meeting prep contacts | Contact list | Primary Contact | (unknown) |
| Monthly AI minutes | Volume/minute numbers discussed | Deck outline estimates | Volume section | (rarely) | 20,000 |
| Telephony type | SIP/VoIP mentions | (rare) | Telephony section | (rarely) | VoIP |
| Monthly call volume | Call volume numbers | Deck outline estimates | Volume section | (rarely) | 10,000 |
| Avg call duration | Duration mentions | (rare) | (rare) | (never) | 5 min (inbound), 3 min (outbound) |
Fixed values (never vary, omit from editable table): Deal structure = POC, Initial term = 90 days, Minimum spend = $25,000/quarter, Support tier = Base.
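The "first non-null wins" merge can be sketched as follows. The function name and source labels are illustrative; candidates are passed in the priority order defined above:

```python
def resolve_field(candidates: list[tuple[str, object]], default: object) -> tuple[object, str]:
    """Return the first non-null candidate value plus its source label.

    `candidates` is ordered by priority tier: latest transcript > older
    transcripts > account docs > Context.md > SF.
    """
    for source, value in candidates:
        if value not in (None, ""):             # 0 is a legitimate value; only null/empty skip
            return value, source
    return default, "Default (no data found)"


# Monthly AI minutes: a transcript number beats Context.md and the smart default.
minutes, source = resolve_field(
    [("Transcript", 50_000), ("Context.md", None), ("SF", None)],
    default=20_000,
)
```

Carrying the source label through makes the intake table's Source column trivial to populate.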
Present the pre-filled table with source citations so the user knows where each value came from:
Proposal Intake for [Prospect] (pre-filled from SF + Obsidian + Transcripts):
| Field | Value | Source |
|------------------------|--------------------|---------------------------------|
| Industry | Healthcare | SF: Account.Industry |
| Use case | Inbound | SF: AI_Agent_Use_Case__c |
| Key contact | Jane Doe, VP CX | SF: Primary Contact |
| Monthly AI minutes | 50,000 | Transcript: "50K minutes/month" |
| Telephony type | VoIP | Default (no data found) |
| Monthly call volume | 25,000 | Transcript: volume discussion |
| Avg call duration | 5 min | Default (inbound use case) |
Fixed values (standard for all proposals):
- Deal structure: POC
- Initial term: 90 days
- Minimum spend: $25,000/quarter
- Support tier: Base
Confirm these values or specify overrides (e.g., "change monthly AI minutes to 30,000"):
Wait for user confirmation before proceeding to Phase 2. Accept overrides to any field.
Run rm -rf /tmp/create-proposal/<account-slug> && mkdir -p /tmp/create-proposal/<account-slug> via Bash before dispatching.
Dispatch in TWO waves. Wave 1 runs 4 agents (obsidian-locator, salesforce-researcher, transcript-locator, company-researcher). Wave 2 runs 2-3 agents (obsidian-analyzer with locator paths, transcript-analyzer gated on locator results, edgar-researcher).
After Wave 1 returns, read obsidian-locator output. Pass discovered file paths to obsidian-analyzer's dispatch prompt. If obsidian-locator found no files in the target directories, skip obsidian-analyzer for this account. Note [NO OBSIDIAN DATA: <account>] and proceed with remaining agents. If obsidian-locator fails entirely (crash, timeout, or tool_uses: 0), dispatch obsidian-analyzer without path pre-filtering (graceful degradation).
Wave 1 (dispatch 4 agents in a SINGLE message). Each agent receives the account name, context from intake, the output persistence block, and agent-specific extraction targets:
obsidian-locator: "List all files in Regal/Accounts/[PROSPECT]/. Identify Context.md, research docs, prior proposals, meeting transcripts in the account folder, pricing discussions. Return file paths, types, and modification dates."
salesforce-researcher: "Find Account, Opportunities, Contacts, and recent Activities for [prospect]. Extract: deal stage, ACV, contract terms discussed, estimated volumes (monthly AI minutes, call volume), any pricing or discount mentions in Activity notes. Also query AI_Agent_Use_Case__c on the Opportunity."
transcript-locator: "Find transcripts mentioning [prospect]. Return transcript file paths, dates, and brief descriptions. Write results to /tmp/create-proposal/<account-slug>/transcript-locator.md."
company-researcher: "Research [prospect] company background. Find: industry vertical, business model, contact center scale (locations, agent count, call volume if public), technology stack, recent news, strategic priorities. ALSO: find a transparent PNG or SVG of the company logo suitable for a presentation cover slide. Search the company website, press kit, media/brand assets page, or Wikipedia. Return the direct image URL."
After Wave 1 returns: ls /tmp/create-proposal/<account-slug>/ then read each file using the three-tier fallback protocol.
Wave 2 (dispatch 2-3 agents in a SINGLE message, after Wave 1 returns):
obsidian-analyzer: "Read these specific files discovered by obsidian-locator: [LOCATOR_OUTPUT_PATHS]. Extract: contacts with titles, volume data discussed, pricing history, use case details, competitive mentions, prior proposal terms. Prioritize Context.md and research docs over raw transcripts. WRITE findings after reading the first 2-3 files."
transcript-analyzer: Gated on transcript-locator results. Apply three-state logic:
- Transcripts found: dispatch transcript-analyzer with the discovered transcript paths.
- No transcripts found: skip transcript-analyzer, note [NO TRANSCRIPT DATA] and proceed.
- Locator failed (crash, timeout, tool_uses: 0): skip transcript-analyzer, note [TRANSCRIPT LOCATOR FAILED] and proceed.

edgar-researcher: "Research [prospect]. Return structured SEC filing data: revenue, growth rate, margins, business segments, strategic priorities from 10-K/earnings calls. If private company, return 'Private company' immediately."
After Wave 2 returns: read output files using the three-tier fallback protocol.
Read these Obsidian org-context files for positioning content. Try Obsidian MCP obsidian_get_file_contents first. If Obsidian MCP is unavailable (connection refused), fall back to reading directly from the filesystem at /Users/nick.yebra/Library/Mobile Documents/iCloud~md~obsidian/Documents/Core Vault/Regal/org-context/.
- Regal/org-context/use-cases.md
- Regal/org-context/case-studies.md
- Regal/org-context/differentiators.md
- Regal/org-context/product.md
- Regal/org-context/sales-process.md (Stage 3 exit criteria for Close slide)

NEVER use obsidian_batch_get_file_contents for these files. These 5 files total 100K+ characters combined and will overflow the batch call's token limit, triggering the context-trap "read in sequential chunks" error. Call obsidian_get_file_contents for each file individually in a single parallel message (5 calls, 1 per file).
Run the ROI pipeline inline (not via nested /business-case dispatch):
Detect industry and use case from agent outputs. Check Salesforce AI_Agent_Use_Case__c first, then transcript mentions, then company context. Options: General, Healthcare, Education, Retail. Use case options: Inbound Triage, Inbound Lead Verification, Outbound, Operational.
Load matching benchmark YAML: tools/skills/business-case/references/benchmarks/{use_case}/{industry}.yaml
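Resolving the benchmark path can be sketched as below. The lowercase, underscore slug convention is an assumption about the benchmarks directory layout; adjust to the actual file names:

```python
from pathlib import Path


def benchmark_path(use_case: str, industry: str) -> Path:
    """Map display names (e.g. 'Inbound Triage', 'Healthcare') to the YAML path."""
    def _slug(s: str) -> str:
        # Assumed convention: lowercase, spaces to underscores
        return s.strip().lower().replace(" ", "_")

    return (Path("tools/skills/business-case/references/benchmarks")
            / _slug(use_case) / f"{_slug(industry)}.yaml")
```

If the file does not exist for a given industry, falling back to the General benchmark is the natural degradation path.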
Prepare input JSON mapping intake fields to engine keys:
| Intake Field | Engine Key |
|---|---|
| Estimated monthly AI minutes | monthly_inbound_volume (inbound triage), monthly_leads (lead verification), monthly_outbound_attempts (outbound), monthly_transactions (operational) |
| Estimated monthly call volume | monthly_call_volume |
| Use case | use_case |
| Industry | industry |
Run roi_engine.py (Bash tool, from repo root):
source .venv/bin/activate && python tools/skills/business-case/roi_engine.py --json '{
"prospect": {"name": "<name>", "use_case": "<use_case>", "industry": "<industry>"},
"inputs": { ... confirmed volume/cost inputs ... },
"assumptions": { ... merged assumptions from benchmark YAML ... }
}'
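The --json payload can be assembled programmatically rather than hand-quoted in the shell; serializing once with json.dumps avoids quoting errors. The helper name is illustrative:

```python
import json


def roi_engine_command(prospect: dict, inputs: dict, assumptions: dict) -> list[str]:
    """Build the roi_engine.py argv with a single serialized --json document."""
    payload = {
        "prospect": prospect,        # name, use_case, industry
        "inputs": inputs,            # confirmed volume/cost inputs
        "assumptions": assumptions,  # merged assumptions from the benchmark YAML
    }
    return ["python", "tools/skills/business-case/roi_engine.py",
            "--json", json.dumps(payload)]
```

Pass the resulting argv to subprocess.run (with the .venv activated) and json.loads the stdout.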
Parse the JSON output containing results, formatted, interpretations, traces, and scenarios.
Run validate_model.py on the output. If any "fail" status findings, warn the user before proceeding.
Run sheets_builder.py to create .xlsx:
source .venv/bin/activate && python tools/skills/business-case/sheets_builder.py --json '{
"calc_output": { ... full roi_engine output with inputs_used and assumptions_used ... },
"prospect_name": "<name>",
"use_case": "<use_case>",
"industry": "<industry>",
"benchmarks": { ... parsed contents of the benchmark YAML (read the file first, then inline the dict) ... },
"findings": [ ... validation findings ... ]
}'
calc_output is a required wrapper key. Do NOT pass the roi_engine JSON at the top level. The script does data["calc_output"] and raises KeyError: 'calc_output' if the key is missing. Always nest the full roi_engine output dict under "calc_output": { ... }.
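A small builder makes the required nesting hard to get wrong (the helper name is illustrative; the key names match the JSON shape above):

```python
import json


def sheets_builder_payload(roi_output: dict, prospect: str, use_case: str,
                           industry: str, benchmarks: dict, findings: list) -> str:
    """Serialize the sheets_builder.py --json payload with calc_output nested."""
    return json.dumps({
        "calc_output": roi_output,   # full roi_engine output -- never at the top level
        "prospect_name": prospect,
        "use_case": use_case,
        "industry": industry,
        "benchmarks": benchmarks,    # parsed benchmark YAML dict
        "findings": findings,        # validate_model.py findings
    })
```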
Upload .xlsx to Drive via workspace-mcp create_drive_file:
- file_name: "ROI Calculator - <Prospect>.xlsx"
- mime_type: "application/vnd.google-apps.spreadsheet"
- fileUrl: "file:///Users/nick.yebra/tmp/business-case/<prospect-slug>/roi-calculator.xlsx"
- user_google_email: "[email protected]"

Get shareable link via get_drive_shareable_link.
Store ROI headline metrics and Google Sheet URL for Phase 3.
Read these reference files for content generation:
- references/slide-content-formulas.md (per-slide data sources and content requirements)
- references/pricing-rate-card.md (pricing calculation formulas and SKU tables)
- tools/skills/create-presentation/references/scr-framework.md (SCR methodology for Slide 2)

Generate content for each of the 5 automated slides:
Slide 1 (Cover): Company name in title. Format: "TRANSFORM {COMPANY}'S CUSTOMER EXPERIENCE WITH AI AGENTS". Logo URL presented to user for manual insertion.
Slide 2 (Engagement Overview): Single slide with 3 paragraph subsections following compressed SCR framework:
Text length constraints: 2-3 sentences per subsection, 120-180 words total across all three sections.
Slide 5 (Pricing Overview): Calculate estimated costs using intake inputs + rate card formulas. Present as labeled key-value pairs, one per line:
Slide 6 (ROI): Headline metrics from roi_engine.py output + Google Sheet URL. Show 3-5 headline metrics (e.g., "Annual savings: $X", "ROI: X%", "Payback period: X months"). Metrics vary by use case per references/slide-content-formulas.md.
Slide 11 (Close): Prospect-specific next steps. 3-4 numbered steps with timeframes, key contact name, action-oriented tone. Reference sales-process.md Stage 3 exit criteria for structure.
Generate the order form input checklist from references/order-form-checklist-template.md with pre-filled values from research and intake (Section 1 values populated, Sections 2-3 as template).
Present ALL content for user review:
Wait for explicit user approval. Accept revisions to any slide content, pricing adjustments, or scope changes before proceeding to Phase 4.
Read references/proposal-template-schema.md for objectIds, page IDs, and population strategy.
Copy the proposal template via workspace-mcp copy_drive_file:
- Template file ID: 1El0FEUrqw5pzjtZguVcWoFOruqEEnq2lZHfA3w4TFDU
- New file name: Regal Proposal - {Company Name}
- user_google_email: "[email protected]"

NEVER edit the template in place. Always copy first.
Build the batch_update_presentation request. All content must be plain text (strip markdown bold/italic markers, convert bullets to newlines, preserve numbers/currency).
Slide 2 text box resize (must precede replaceAllText): The BACKGROUND, COMPLICATION, and SOLUTION text boxes are header-sized (~0.4" tall). Adding paragraph content via replaceAllText will overflow. Before any replaceAllText calls, include updatePageElementTransform requests for the 3 SCR text boxes (g3cd20496ae9_0_901, g3cd20496ae9_0_904, g3cd20496ae9_0_905) to expand their height. Use RELATIVE transform mode with scaleY: 10.0 to safely multiply the current height without needing exact absolute coordinates. See EMU Dimensions section of references/proposal-template-schema.md for estimated dimensions and resize targets.
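A sketch of those resize requests follows (helper name is illustrative; the objectIds come from the template schema, and applyMode/transform fields follow the Slides batchUpdate request shape):

```python
# SCR text box objectIds from references/proposal-template-schema.md
SCR_BOX_IDS = ["g3cd20496ae9_0_901", "g3cd20496ae9_0_904", "g3cd20496ae9_0_905"]


def scr_resize_requests(scale_y: float = 10.0) -> list[dict]:
    """updatePageElementTransform requests that must precede all replaceAllText.

    RELATIVE applyMode multiplies the element's current transform, so no
    absolute EMU coordinates are needed.
    """
    return [
        {"updatePageElementTransform": {
            "objectId": box_id,
            "applyMode": "RELATIVE",
            "transform": {"scaleX": 1.0, "scaleY": scale_y, "unit": "EMU"},
        }}
        for box_id in SCR_BOX_IDS
    ]
```

Prepend these three requests to the batch_update_presentation request list, ahead of every replaceAllText call.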
Target strings per slide (from references/proposal-template-schema.md), using replaceAllText with pageObjectIds scoping:
| Slide | Page ID | Target String | Replacement |
|---|---|---|---|
| 1 | g3cd20496ae9_0_1251 | TRANSFORM YOUR CUSTOMER EXPERIENCE WITH AI AGENTS | Prospect-specific title |
| 2 | g3cd20496ae9_0_899 | BACKGROUND | BACKGROUND\n\n[Situation paragraph] |
| 2 | g3cd20496ae9_0_899 | COMPLICATION | COMPLICATION\n\n[Conflict paragraph] |
| 2 | g3cd20496ae9_0_899 | SOLUTION | SOLUTION\n\n[Resolution paragraph] |
| 5 | g3cd20496ae9_0_1135 | Per-cell values in 13x4 table | Calculated pricing values |
| 6 | g39223e448a0_0_1613 | RETURN-ON-INVESTMENT IS EASY AS AI AGENTS | Headline ROI metric |
| 6 | g39223e448a0_0_1613 | COST LESS AND PERFORM BETTER | Supporting metrics |
| 6 | g39223e448a0_0_1613 | Assumptions: Onshore human agents... (full string) | Prospect-specific assumptions + Sheet URL |
| 11 | g3c910a3971c_0_739 | TREAT YOUR CUSTOMERS \u000bLIKE ROYALTY | Prospect-specific next steps |
Slide 5 pricing table: The 13x4 table requires per-cell replaceAllText calls. Use the hardcoded cell values from the Known Cell Values section of references/proposal-template-schema.md as containsText targets. Do NOT attempt to read cell text at runtime; the get_presentation API does not expose table cell contents. Each cell's exact current text (Slavic 401k sample data) is documented in the schema. Replace: Col 2 (Unit Price) with prospect rates, Col 3 (Projected Volume) with intake volumes, Col 4 (Projected Spend) with calculated values. Also replace the 1x1 header table text 12-Month, $100k Commitment (Post POC) with prospect-specific terms.
Slide 11 special character: The template uses \u000b (vertical tab / soft return), not \n. The containsText must include \u000b to match.
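A replaceAllText request scoped with pageObjectIds can be sketched as below (helper name and the placeholder replacement text are illustrative; the Slide 11 page ID and \u000b target come from the table above):

```python
def replace_on_slide(page_id: str, target: str, replacement: str) -> dict:
    """Build a replaceAllText request scoped to a single slide."""
    return {"replaceAllText": {
        "pageObjectIds": [page_id],                       # limit the match to one slide
        "containsText": {"text": target, "matchCase": True},
        "replaceText": replacement,
    }}


# Slide 11: the template target embeds a vertical tab (\u000b), not \n.
close_req = replace_on_slide(
    "g3c910a3971c_0_739",
    "TREAT YOUR CUSTOMERS \u000bLIKE ROYALTY",
    "[Prospect-specific next steps]",
)
```

matchCase keeps the all-caps template targets from accidentally matching other text on the slide.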
Do NOT touch slides 3, 4, 7, 8, 9, 10, 12, 13, 14 (static or V2).
Execute batch_update_presentation with user_google_email: "[email protected]", then fetch the deck URL via get_drive_shareable_link. Deliver:

Proposal deck: [Google Slides URL]
ROI calculator: [Google Sheet URL]
Logo URL for manual insertion into slide 1: [URL]
Order Form Input Checklist:
[checklist text]
Next steps:
- Insert the company logo into slide 1 (drag the image from the URL above)
- Review slides 8 (GANTT) and 10 (System Map) for V2 content
- Move an order form template and fill the Volume Worksheet tab
If copy_drive_file fails, provide the template ID 1El0FEUrqw5pzjtZguVcWoFOruqEEnq2lZHfA3w4TFDU for manual copy and continue with the remaining deliverables (ROI sheet, checklist). If Google Slides delivery fails entirely, write the full content to Obsidian via obsidian_append_content at Regal/Proposals/{Company Name} - Proposal Content.md, including all slide content, pricing, and checklist in structured markdown.

Key reminders:
- Always pass user_google_email: "[email protected]" to workspace-mcp calls (copy_drive_file, batch_update_presentation, create_drive_file); use obsidian_append_content for the Obsidian fallback.
- NEVER dispatch agents with run_in_background: true (MCP tools unavailable in background).
- Read tools/references/output-persistence.md for the persistence pattern and fallback protocol.
- Run the ROI pipeline inline, never via nested /business-case skill dispatch.
- Sanitize replaceAllText payloads (strip markdown, bullets to newlines, preserve numbers/currency).
- NEVER use obsidian_batch_get_file_contents for org-context reads (Step 2b): the 5 files total 100K+ chars combined and will overflow the batch token limit. Use 5 individual obsidian_get_file_contents calls instead.
- Always nest roi_engine output under the calc_output key for sheets_builder.py: the script requires data["calc_output"] and raises KeyError if you pass the roi_engine output at the top level.