Draft an AI Agent Demo Request with deep multi-source research. Use when the user says "/demo-request [company]", "demo request for [company]", "draft a demo request", "fill out the demo form for [company]", or provides a transcript and asks for a demo request to be created. Researches Obsidian account files, meeting transcripts, Salesforce, Slack, Gmail/Drive, company website, and web sources to identify the best use case, find existing scripts, gather supporting materials, and produce a bulletproof demo brief.
Research an account across 7 data sources in three progressive tiers, then produce a Demo Brief with a form-ready draft (Part 1) and demo strategy (Part 2) for the builder.
Accept one or more of:
- A company name in $ARGUMENTS or the user's message
- A transcript, pasted in chat or referenced by name/date

Read Regal/org-context/org-context.md from Obsidian (obsidian_get_file_contents) for the Regal product overview, terminology, ICP summary, and key metrics.
For deeper context, also read these sub-files from Obsidian:
- Regal/org-context/product.md — platform capabilities, use cases by industry, buyer personas
- Regal/org-context/differentiators.md — positioning, proof points, competitive landscape
- Regal/org-context/sales-process.md — deal stages, pricing, engagement model
- Regal/org-context/use-cases.md — use cases by function (inbound/outbound/operational) with customer proof points
- Regal/org-context/case-studies.md — full case studies for Healthcare, Education, E-commerce/Retail verticals

Reference these to inform product types, use case mapping, and competitive context when drafting the demo request.
Read EXAMPLES.md for concrete input/output pairs showing correct and incorrect demo request drafts. Study these patterns before researching.
Determine the company name from $ARGUMENTS or the user's message. Read organizational context files. If a transcript is referenced by name or date, locate it:
!ls -t ~/Library/Mobile\ Documents/iCloud~md~obsidian/Documents/Core\ Vault/Regal/Granola/ 2>/dev/null | head -15 || echo "Obsidian path not found"
If the transcript is pasted directly in chat (not a file reference), analyze it inline during synthesis. Do not spawn a sub-agent for pasted text.
Run in the main context. No sub-agents. Establish the baseline for targeted dispatch.
Salesforce-Search (salesforce_query_records):

SELECT Id, Name, StageName, Amount, CloseDate, Account.Name, Account.Id, Account.Type, Account.Industry, Account.Website, Owner.Name
FROM Opportunity WHERE Account.Name LIKE '%<company>%'
ORDER BY CloseDate DESC LIMIT 5

SELECT Id, Name, Title, Email
FROM Contact WHERE Account.Name LIKE '%<company>%'
ORDER BY Name

Also run obsidian_list_files_in_dir in the main context.

Capture from Quick Discovery: [COMPANY], [CONTACT_NAMES], [CONTACT_EMAILS], [ACCOUNT_FOLDER_PATH], [SF_OPPORTUNITY_ID], [COMPANY_WEBSITE], [INDUSTRY].
Read tools/references/output-persistence.md for the pattern and fallback protocol.
Before dispatching Tier 1 agents, create the output directory:
rm -rf /tmp/ai-demo-request/<company-slug> && mkdir -p /tmp/ai-demo-request/<company-slug>
All tiers write to the same directory. Agent-to-file mapping:
| Tier | Agent | Output File |
|---|---|---|
| 1 | obsidian-locator | obsidian-locator.md |
| 1 | obsidian-analyzer | obsidian-analyzer.md |
| 1 | transcript-locator | transcript-locator.md |
| 1 | transcript-analyzer | transcript-analyzer.md |
| 1 | salesforce-researcher | salesforce-researcher.md |
| 2 | slack-researcher | slack-researcher.md |
| 2 | google-workspace-researcher | google-workspace-researcher.md |
| 3 | company-researcher | company-researcher.md |
| 3 | web-search-researcher | web-search-researcher.md |
| 3 | deep-research-agent | deep-research-agent.md |
Where <company-slug> is the kebab-cased company name (e.g., oscar-health, camping-world).
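A minimal shell sketch of the kebab-casing, assuming POSIX tr/sed (the exact normalization is this skill's convention, not a fixed spec):

```shell
# Derive <company-slug> from the company name (sketch)
company="Oscar Health"
slug=$(printf '%s' "$company" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')   # trim any leading/trailing dashes
echo "$slug"   # oscar-health
```

Any run of non-alphanumeric characters (spaces, punctuation) collapses to a single dash, so "Camping World" becomes camping-world.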
Append this block to every agent dispatch prompt, replacing <agent-name> with the agent's subagent_type:
CRITICAL -- Output Persistence: Write your complete, detailed findings to: /tmp/ai-demo-request/<company-slug>/<agent-name>.md Format as markdown with all data, evidence, URLs, and citations. After writing the file, return a 2-3 paragraph summary of key findings only. The file is the primary deliverable. The returned text is a backup summary. If the Write tool fails, return your full findings as text instead.
WRITE EARLY: After your first 2-3 successful research calls, IMMEDIATELY write your current findings to the output file. Do NOT wait until all research is complete. Continue researching and UPDATE the file with additional findings. The initial write is your insurance against output loss. A partial file is infinitely better than no file.
Split into two sub-waves: Tier 1a runs obsidian-locator alongside non-Obsidian agents; Tier 1b dispatches obsidian-analyzer with locator paths.
| Agent | Type | max_turns | Role |
|---|---|---|---|
| obsidian-locator | Tier 1a | 5 | Discover files in Regal/Accounts/[COMPANY]/ and Regal/Granola/ matching the company |
| transcript-locator | Tier 1a | 7 | Find ALL transcripts mentioning the company and key contacts |
| salesforce-researcher | Tier 1a | 6 | Full Account, Contacts, Opportunity, Tasks with SF URLs |
obsidian-locator prompt (max_turns=5):
List all files in Regal/Accounts/[COMPANY]/ and search Regal/Granola/ for transcripts
mentioning [COMPANY]. Return file paths, types (brief, transcript, status, script,
call-flow), and modification dates. If account folder does not exist, report that
and return empty.
CRITICAL -- Output Persistence:
Write your complete, detailed findings to: /tmp/ai-demo-request/<company-slug>/obsidian-locator.md
Format as markdown with all file paths and metadata.
After writing the file, return a 2-3 paragraph summary of key findings only.
The file is the primary deliverable. The returned text is a backup summary.
If the Write tool fails, return your full findings as text instead.
Use the prompt templates in references/research-dispatch.md § Tier 1 for transcript-locator and salesforce-researcher. Populate [COMPANY], [CONTACT_NAMES], and [ACCOUNT_FOLDER_PATH] from Quick Discovery.
Include the Output Persistence block in each agent's dispatch prompt (see Sub-Agent Output Persistence above).
After Tier 1a returns, read obsidian-locator and transcript-locator output. Extract discovered file paths.
Obsidian locator gating: If obsidian-locator found no account folder and no matching files, skip obsidian-analyzer in Tier 1b, note [NO OBSIDIAN DATA: <company>], and proceed to Tier 1 Synthesis.

Transcript locator gating: Read transcript-locator output from /tmp/ai-demo-request/<company-slug>/transcript-locator.md. Apply three-state handling:
- Transcripts found: Dispatch transcript-analyzer in Tier 1b with the discovered transcript paths. Give it max_turns: 8.
- No transcripts found: Skip transcript-analyzer. Note [NO TRANSCRIPT DATA] in synthesis.
- Locator failed (no output file, tool_uses: 0): Skip transcript-analyzer. Note [TRANSCRIPT LOCATOR FAILED] in synthesis.

| Agent | Type | max_turns | Role |
|---|---|---|---|
| obsidian-analyzer | Tier 1b | 8 | Deep-read account folder for Context.md, call flows, SOWs, brand materials. Receives file paths from obsidian-locator |
| transcript-analyzer | Tier 1b | 8 | Extract use cases ranked by emphasis from located transcripts. Receives transcript paths from transcript-locator |
Pass the obsidian-locator discovered file paths to obsidian-analyzer's dispatch prompt so it reads only confirmed files instead of searching blind. Pass the transcript-locator discovered paths to transcript-analyzer's dispatch prompt.
Wait for all Tier 1 agents to return. Read Tier 1 output files from /tmp/ai-demo-request/<company-slug>/ one at a time, NOT in parallel. A missing file from one agent will cancel all sibling parallel reads (cascade failure). Read them sequentially:
- obsidian-analyzer.md
- transcript-locator.md
- transcript-analyzer.md
- salesforce-researcher.md

If a file does not exist, note it as "agent produced no output" and continue with the available files. Do NOT let a missing file stop synthesis.
Apply three-tier fallback per tools/references/output-persistence.md.
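The sequential, missing-tolerant read can be sketched as follows (the oscar-health slug is a hypothetical example):

```shell
# Read each Tier 1 output one at a time; a missing file is noted, never fatal.
dir="/tmp/ai-demo-request/oscar-health"   # hypothetical slug
missing=""
for f in obsidian-analyzer.md transcript-locator.md transcript-analyzer.md salesforce-researcher.md; do
  if [ -f "$dir/$f" ]; then
    cat "$dir/$f"              # findings feed Tier 1 synthesis
  else
    missing="$missing $f"      # note "agent produced no output" and keep going
  fi
done
[ -n "$missing" ] && echo "NOTE: no output from:$missing"
true
```

The same pattern applies to the Tier 2 and Tier 3 reads: one file at a time, never a parallel batch that fails as a group.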
Synthesize findings into targeting context. Populate: [PRIMARY_USE_CASE], [CURRENT_STATE_SUMMARY].
Dispatch 2 agents in a SINGLE message. Prompts shaped by Tier 1 findings.
| Agent | Type | max_turns | Role |
|---|---|---|---|
| slack-researcher | Tier 2 | 8 | Demo prep discussions, builder notes, script drafts |
| google-workspace-researcher | Tier 2 | 8 | Email threads with contacts, Drive proposals, scripts |
Use the prompt templates in references/research-dispatch.md § Tier 2. Populate [PRIMARY_USE_CASE], [CONTACT_NAMES], [CONTACT_EMAILS], and [CURRENT_STATE_SUMMARY] from Tier 1 synthesis.
Include the Output Persistence block in each Tier 2 agent's dispatch prompt (see Sub-Agent Output Persistence above).
Wait for Tier 2 agents to return. Read Tier 2 output files from /tmp/ai-demo-request/<company-slug>/ one at a time, NOT in parallel:
- slack-researcher.md
- google-workspace-researcher.md

If a file does not exist, note it as "agent produced no output" and continue. Do NOT let a missing file stop synthesis.
Apply three-tier fallback per tools/references/output-persistence.md.
Update targeting context. Populate: [TARGETED_QUESTIONS].
Dispatch 2 agents in a SINGLE message. Targeted by Tier 1+2 findings.
| Agent | Type | max_turns | Role |
|---|---|---|---|
| company-researcher | Tier 3 | 10 | Company website, FAQ/KB URLs, brand terminology, phone numbers |
| web-search-researcher | Tier 3 | 10 | Similar AI demos in industry, objection patterns, edge cases |
Use the prompt templates in references/research-dispatch.md § Tier 3. Populate [PRIMARY_USE_CASE], [CURRENT_STATE_SUMMARY], [TARGETED_QUESTIONS], [COMPANY_WEBSITE], and [INDUSTRY] from Tier 1+2 synthesis.
Include the Output Persistence block in each Tier 3 agent's dispatch prompt (see Sub-Agent Output Persistence above).
deep-research-agent -- Conditional. Dispatch when: (a) the prospect is in a complex industry (healthcare, financial services, insurance) where regulatory/operational nuance affects the demo request, or (b) Tier 1-2 findings reveal thin account context (no prior transcripts, no existing account brief). Read tools/references/deep-research-dispatch.md for query construction. Give it max_turns: 8.

Before drafting, classify every capability found across all tiers. This is the critical translation step: converting raw prospect requirements into demo-ready tasks.
For each capability the prospect mentioned (from transcripts, Slack, emails, or call notes), classify it:
| Classification | Criteria | Action |
|---|---|---|
| Directly demoable | Uses only pre-built actions (End call, Internal Transfer, External Transfer, Gather Date/Time, Schedule Callback) and/or dynamic variables | Include as a core task in Part 1 |
| Demoable with reframing | Requires CRM data, API lookups, or backend state, but can be simulated with {{contact.customProperties.*}} pre-staging or verbal confirmation | Include as a core task using the demo alternative. See Reframing Patterns in demo-scripting.md § Scope Classification |
| Journey-level | Trigger conditions, retry cadence, TCPA timing, multi-channel sequences, time-of-day routing | Mention in the Use Case context sentence (1 sentence max). NOT a numbered core task |
| Production-only | Requires custom actions, action sequences, or integrations not configured for the demo | Note in builder notes as "Production: [capability]". Exclude from core tasks |
Output of this step: A filtered, prioritized list of core tasks ready for Part 1, with reframing notes for any "demoable with reframing" items. Journey-level and production-only items are separated and placed appropriately.
Consult references/demo-scripting.md § Dynamic Variables & Platform Capabilities for the full variable syntax and action inventory.
Wait for all Tier 3 agents to return. Read Tier 3 output files from /tmp/ai-demo-request/<company-slug>/ one at a time, NOT in parallel:
- company-researcher.md
- web-search-researcher.md
- deep-research-agent.md (only if deep-research-agent was dispatched in Step 6)

If a file does not exist, note it and continue. Do NOT let a missing file stop synthesis.
Apply three-tier fallback per tools/references/output-persistence.md.
Draft the Demo Brief in two parts using the filtered task list from Step 6.5.
Consult references/form-fields.md for field definitions and best practices. Consult references/demo-scripting.md for demo strategy best practices. Consult references/examples.md for quality benchmarks.
Synthesis quality checks (verify before presenting the draft):
- Dynamic variables use {{contact.customProperties.*}} or {{task.*}} syntax in the Script field.
- Variables are concrete (e.g., {{contact.customProperties.medication}}), not abstract placeholders like "[patient data]".

Present the form draft in this exact format:
## AI Agent Demo Request Draft — [Company Name]
**Brand:** [Company name]
**Customer Industry:** [From SF Account.Industry or infer from company context]
**Company Website Link:** [https://www.example.com — from SF Account.Website or research]
**New or Existing Customer:** [New / Existing]
**Inbound or Outbound:** [Inbound / Outbound]
**Type of Agent:** [Comma-separated from valid options]
**Use Case/Role of Agent:**
[Company] is a [what they do, where]. The AI agent handles [inbound/outbound] [calls] for [purpose/when it activates].
Core tasks:
1. [Demoable capability] — [technical detail: routing examples, API mechanisms, dialogue snippets]
2. [Demoable capability] — [detail]
3. [Demoable capability] — [detail]
...
N. Transfer to human — [specific conditions]
[1-3 lines max: only voice/tone preferences, prior demo references ("similar to hear.com"), or build-critical requirements (voice clone, bilingual). Omit if none.]
**Script or Recordings in Regal:**
Flow:
- [Intro path]
- [Key path 1: brief description]
- [Key path 2: brief description]
- [Transfer/escalation conditions]
Knowledge base: [1-2 parent URLs — builder will explore subpages]
**Measuring Success:** [Primary metric — keep to a few words]
**# Answered Calls per Month:** [Number or blank]
**Requested by Date:** [YYYY-MM-DD — default to 3 business days from today]
**Urgent Justification:** [Only if < 3 business days]
**Salesforce Opportunity Link:** [Full URL from SF research]
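The default Requested by Date (3 business days from today, skipping weekends; holidays are not considered) can be computed as a sketch, assuming GNU date:

```shell
# Default "Requested by Date": today + 3 business days (sketch; assumes GNU date -d)
d=$(date +%Y-%m-%d)
n=0
while [ "$n" -lt 3 ]; do
  d=$(date -d "$d + 1 day" +%Y-%m-%d)
  dow=$(date -d "$d" +%u)            # ISO weekday: 1=Mon ... 7=Sun
  if [ "$dow" -le 5 ]; then
    n=$((n+1))                       # count weekdays only
  fi
done
echo "$d"
```

If the prospect asked for a tighter turnaround, override this default and fill in the Urgent Justification field instead.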
Append the Demo Strategy after Part 1. This gives the builder deeper context for configuring the agent.
Reference references/demo-scripting.md for best practices on each section.
Describe how the company handles this use case today. Source from Tier 1 transcripts, Obsidian notes, and SF data. Include: current process, pain points, call volume, technology in use.
Recommend the demo call flow structure. Map to the 6-stage framework in demo-scripting.md (Intro, Verification, Core Task, Objection Handling, Escalation, Close). Specify what the builder needs to configure at each stage.
Select 3-5 challenge scenarios relevant to the use case type from demo-scripting.md. Adapt each scenario to the company's specific context using Tier 1-3 findings. Present as a table with Scenario, Challenge, and Response Approach columns.
Synthesize brand voice from all research tiers: website copy style, email tone, transcript dialogue patterns, product naming. Also reference brand-guidelines.md for Regal's own brand standard (confident, enterprise-ready, intentional) to ensure the demo request framing aligns with how Regal presents itself. Provide an example greeting line the builder can use. Reference demo-scripting.md § Brand Alignment.
List all materials discovered during research that the builder should reference:
After drafting the Demo Brief, save the full output (Part 1 + Part 2) to the account's Obsidian folder:
Path: Regal/Accounts/[COMPANY]/AI-Demo-Request - [YYYY-MM-DD] - [COMPANY].md
Use obsidian_append_content to create the file. If the account folder does not exist, create it by writing to the path (Obsidian MCP will create intermediate directories).
The saved file should contain both Part 1 (the form) and Part 2 (demo strategy) as a single markdown document. Add a YAML frontmatter block:
---