Research and document MEDDPICC qualification answers. Two modes: (1) Single-account deep research across Salesforce, Obsidian, transcripts, Gmail, and Slack producing a structured Obsidian document with per-element findings, confidence levels, evidence, gaps, and discovery questions. (2) Batch mode across all open opportunities, producing a single pipeline-wide MEDDPICC document with SF-ready entries. Use when the user says "meddpicc [account]", "qualify [account]", "deal qualification", "/meddpicc", "what do we know about [account] MEDDPICC", "MEDDPICC for [account]", "meddpicc all", "meddpicc batch", "meddpicc pipeline", "fill out meddpicc for my deals", or asks for MEDDPICC analysis on a specific account or across the pipeline. Do NOT use for pipeline-wide scoring (use pipeline-review) or general account research (use research-account).
Research, analyze, and document what is known (and unknown) about a deal's MEDDPICC elements. The output is a structured research document in Obsidian, not a scorecard. Each element captures the best current answer, supporting evidence, confidence level, gaps, and discovery questions to ask next.
Detect mode from user input: "all", "batch", or "pipeline" triggers batch mode; otherwise expect an account name for single-account mode. Resolve the account name to the Salesforce Account and primary Opportunity before any dispatch.
Each MEDDPICC element maps to a specific Salesforce field on the Opportunity object:
| Element | SF Field | Type |
|---|---|---|
| Metrics | AI_Agent_target_Metric_s__c | textarea (255) |
| Economic Buyer | Economic_Buyer__c | Contact lookup |
| Decision Criteria | SE_Technical_Resources_Summary__c | textarea |
| Decision Process | Timeline_for_Success__c | textarea (500) |
| Paper Process | Purchasing_Process__c | textarea (255) |
| Identified Pain | Business_Challenge_Pain_Point__c | textarea |
| Champion | Champion_Contact__c | Contact lookup |
| Competition | Primary_Competitors__c | picklist |
During Setup, read existing values for all 8 fields. During Output, propose write-back for empty or improved fields. See references/queries.md for SOQL queries, DML patterns, and picklist values.
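The Setup read of all 8 fields can be sketched as a SOQL SELECT built from the mapping table above. This is an illustrative helper, not the actual MCP tool call; the field API names come from the table, but `build_meddpicc_query` itself is a hypothetical name.

```python
# Field API names from the MEDDPICC-to-Salesforce mapping table above.
MEDDPICC_FIELDS = [
    "AI_Agent_target_Metric_s__c",        # Metrics
    "Economic_Buyer__c",                  # Economic Buyer (Contact lookup)
    "SE_Technical_Resources_Summary__c",  # Decision Criteria
    "Timeline_for_Success__c",            # Decision Process
    "Purchasing_Process__c",              # Paper Process
    "Business_Challenge_Pain_Point__c",   # Identified Pain
    "Champion_Contact__c",                # Champion (Contact lookup)
    "Primary_Competitors__c",             # Competition (picklist)
]

def build_meddpicc_query(opportunity_id: str) -> str:
    """Return a SOQL SELECT reading the 8 MEDDPICC fields on one Opportunity."""
    fields = ", ".join(["Id", "Name", "StageName"] + MEDDPICC_FIELDS)
    return f"SELECT {fields} FROM Opportunity WHERE Id = '{opportunity_id}'"
```

The actual queries, including contact-role and activity lookups, live in references/queries.md; this sketch only shows how the field mapping drives the read.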
Run the initial SF lookup. Read references/queries.md for SOQL queries. Extract: Account ID, primary Opportunity (highest Amount or most recent), stage, Amount, contacts with roles, and custom fields (Consequences_For_Not_Solving_Now__c, Timeline_for_Success__c, Purchasing_Process__c, Agent_Seats_input__c, AI_Agent_Use_Case__c, Phone_System__c, CRM__c, ICP_fit__c).
Prepare temp directory:
mkdir -p /tmp/meddpicc
mcp__obsidian__obsidian_get_file_contents("Regal/org-context/org-context.md")
NEVER combine org-context reads with agent dispatches in the same message.
Use these specialized agents by name with the Task tool (subagent_type):
| Agent | Role in MEDDPICC Research | max_turns | Wave |
|---|---|---|---|
| salesforce-researcher | Query Opportunity, Contacts, ContactRoles, Tasks, recent Activities. Extract deal sizing, contact roles (Champion, Economic Decision Maker), stage history | 8 | 1 |
| obsidian-locator | Discover account files in Regal/Accounts/<account>/ | 5 | 1 |
| transcript-locator | Find meeting transcripts mentioning the account | 7 | 1 |
| obsidian-analyzer | Deep-read Research docs, Context Brief, Contact Audits. Extract qualification data, pain statements, evaluation process, contacts | 8 | 1 |
| transcript-analyzer | Extract MEDDPICC signals from transcripts: pain statements, champion behavior, EB mentions, buying process language, competitive references, metrics discussed | 8 | 1 |
| google-workspace-researcher | Search Gmail for EB communication, champion email frequency, procurement/legal threads, budget discussions | 8 | 1 |
| slack-researcher | Search internal discussions for champion advocacy signals, competitive intel, deal risk mentions | 10 | 2 (gap-fill) |
| company-researcher | Research competitive landscape, company financials, industry context. ONLY dispatch if obsidian-analyzer found NO existing Research doc for the account | 12 | 2 (gap-fill) |
Each agent knows its tools. When prompting:
- Write each agent's output to /tmp/meddpicc/<agent-name>.md
- obsidian-locator: use obsidian_list_files_in_dir to check if the account folder exists
- Never set run_in_background: true (MCP tools are unavailable in background subagents)

Dispatch in a single message (max 4 agents, then overflow):
Message 1 (4 agents):
- salesforce-researcher: Full deal data extraction. Prompt with Account name and Opportunity ID.
- obsidian-locator: Discover files in Regal/Accounts/<account>/
- transcript-locator: Find transcripts mentioning the account
- google-workspace-researcher: Search Gmail for threads with key contacts. Focus on: EB communication and budget language, champion email frequency and forwarding behavior, procurement/legal threads, approval chain discussions. Pass account name and contact names/emails from SF lookup.

Message 2 (gated on locator results, 2 agents):
obsidian-analyzer: Read files discovered by obsidian-locator: Research docs, Context Brief, Contact Audit. Extract per-element MEDDPICC evidence. Pass the account name and instruct it to look for: pain statements, metrics discussed, evaluation process, decision makers, purchasing process, competitive mentions, champion indicators. Flag whether a Research doc exists (this gates company-researcher in Wave 2).
transcript-analyzer: Read transcripts discovered by transcript-locator. Extract per-element MEDDPICC signals. Use three-state gating:
- Transcripts found: extract signals from them
- Locator found no transcripts: report [NO TRANSCRIPT DATA]
- Locator produced no output: report [TRANSCRIPT LOCATOR FAILED]

After Wave 1 completes, read each agent's output:
- If /tmp/meddpicc/<agent>.md exists: read it (primary)
- Otherwise: fall back to the agent's returned response

Read references/framework.md for the complete element definitions, evidence signals, and confidence criteria.
For each of the 8 MEDDPICC elements, synthesize Wave 1 findings into:
H: <answer>, M: <answer>, or L. Low confidence shows only the letter. For Economic Buyer and Champion, the answer is a single contact (Name, Title). See the table rules in template.md.

Mark elements at Low confidence as gap-fill candidates.
Dispatch ONLY for elements at Low confidence. Match agents to gaps:
| Gap Element | Agent to Dispatch | What to Find |
|---|---|---|
| Metrics (M) | company-researcher (conditional, see below) | Industry benchmarks, financial data, published KPIs |
| Economic Buyer (E) | slack-researcher | Internal discussions about EB, budget ownership mentions |
| Decision Criteria (D) | slack-researcher | Internal discussions about evaluation criteria, RFP language |
| Decision Process (D) | slack-researcher | Approval chain discussions, procurement timeline mentions |
| Paper Process (P) | slack-researcher | Legal/procurement threads, MSA/BAA mentions |
| Identified Pain (I) | transcript-analyzer (re-dispatch with pain-focused prompt) | Personal pain layer, cost-of-inaction statements |
| Champion (C) | slack-researcher | Champion advocacy signals, internal selling mentions, forwarded content |
| Competition (C) | company-researcher (conditional, see below) | Competitive landscape, vendor evaluation signals |
When multiple elements map to the same agent, consolidate into a single dispatch with all element prompts combined.
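The consolidation rule can be sketched as a simple grouping, assuming the gap-to-agent mapping from the table above (function and dict names are illustrative; the company-researcher condition is checked separately):

```python
from collections import defaultdict

# Gap-to-agent mapping from the table above; company-researcher rows are
# conditional on no existing Research doc for the account.
GAP_AGENT = {
    "Metrics": "company-researcher",
    "Economic Buyer": "slack-researcher",
    "Decision Criteria": "slack-researcher",
    "Decision Process": "slack-researcher",
    "Paper Process": "slack-researcher",
    "Identified Pain": "transcript-analyzer",
    "Champion": "slack-researcher",
    "Competition": "company-researcher",
}

def consolidate_dispatches(low_confidence_elements: list[str]) -> dict[str, list[str]]:
    """Group Low-confidence elements by target agent so each agent gets
    exactly one dispatch covering all of its gap elements."""
    dispatches: dict[str, list[str]] = defaultdict(list)
    for element in low_confidence_elements:
        dispatches[GAP_AGENT[element]].append(element)
    return dict(dispatches)
```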
ONLY dispatch company-researcher if obsidian-analyzer reported that no Research doc exists for the account. If a Research doc was found in Regal/Accounts/<account>/, the competitive landscape and company context are already captured there. Web search is expensive (~77K tokens, ~3 min); skip it when Obsidian already has the data.
Skip Wave 2 entirely if no elements are at Low confidence.
Dispatch gap-fill agents in a single message (max 4). Include in each prompt:
- Write output to /tmp/meddpicc/<agent>-gap-fill.md

After Wave 2 (or after Wave 1 if no gap-fill needed), compile the research document.
Save to Obsidian using the filesystem path:
/Users/nick.yebra/Library/Mobile Documents/iCloud~md~obsidian/Documents/Core Vault/Regal/Accounts/<Account Name>/MEDDPICC - <YYYY-MM-DD>.md
Use the template in references/template.md.
The document includes:
- A | Element | Answer | table. Each answer is H: <20-word answer>, M: <20-word answer>, or L. EB and Champion answers are a single contact name and title. No cross-section repetition (Metrics excludes pricing/seats/deal size).

After saving the document to Obsidian, propose SF field updates in the main context:
Apply approved updates with mcp__Salesforce-Search__salesforce_dml_records. See references/queries.md for field-type-specific DML patterns (text, Contact lookup, picklist).
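The "empty or improved" write-back rule can be sketched as a record builder. This is a rough illustration under stated assumptions: the function name is hypothetical, and "improved" is approximated here by a crude length comparison, which the real proposal step replaces with human review.

```python
def build_update_record(opportunity_id: str,
                        proposed: dict[str, str],
                        existing: dict[str, str]) -> dict[str, str]:
    """Build the field map for a DML update: include only fields that are
    empty in Salesforce or where the proposed value adds information
    (length heuristic here; the user approves each field before the write)."""
    record = {"Id": opportunity_id}
    for field, value in proposed.items():
        current = (existing.get(field) or "").strip()
        if not current or len(value) > len(current):
            record[field] = value
    return record
```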