**MEDDPICC Coaching Guide Generator** for Diamanti sales PODs. Creates a Word document AI Coaching Report after a prospect meeting — role-specific feedback, MEDDPICC scoring, critical questions, and next steps. MANDATORY TRIGGERS: "coaching guide", "coaching report", "MEDDPICC coaching", "AI coaching report", "sales coaching", "rep coaching", "call coaching", "coach the reps" Use when: Generating a coaching report after processing a prospect meeting transcript. Usually called automatically by the prospect skill, but can be run standalone.
Use this skill to create a Word document coaching guide after processing a Prospect meeting. Called from ../prospect/SKILL.md. Before creating, read the existing YAML files and any prior coaching guides for this account — continuity and regression tracking are the whole point.
| Situation | Template to use |
|---|---|
| First call on this opportunity (no prior MEDDPICC YAML data) | TEMPLATE_MEDDPICC_Coaching_Guide_PLACEHOLDER.docx |
| Follow-up call (prior MEDDPICC entries exist in the YAML) | TEMPLATE_MEDDPICC_Coaching_Guide_FOLLOWUP_PLACEHOLDER.docx |
Both templates live in: Claude trackers/Templates/
How to determine which to use: Check meddpicc_analysis.yaml before scoring the current call. If the file has prior call_date entries, use the follow-up template. If the file is new or empty, use the first-call template.
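The selection rule above can be sketched in Python. Note the list-of-call-entries shape assumed for meddpicc_analysis.yaml is illustrative; adapt it to the file's real structure.

```python
FIRST_CALL = "TEMPLATE_MEDDPICC_Coaching_Guide_PLACEHOLDER.docx"
FOLLOW_UP = "TEMPLATE_MEDDPICC_Coaching_Guide_FOLLOWUP_PLACEHOLDER.docx"

def pick_template(calls):
    """Select the template from parsed meddpicc_analysis.yaml content.

    calls: the YAML's list of call entries, or None if the file does
    not exist yet. The list-of-entries shape is an assumption.
    """
    if not calls:
        return FIRST_CALL  # new or empty file: first-call template
    has_prior = any("call_date" in entry for entry in calls)
    return FOLLOW_UP if has_prior else FIRST_CALL
```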
Before writing a single word of content, read:
- meddpicc_analysis.yaml — check whether prior call entries exist; if so, extract each element's previous score (→ {{X_PREV_SCORE}} values) and the date of that call (→ {{PREV_CALL_DATE}})
- opportunity_data.yaml — deal timeline, contacts, technical details already captured

Critical for follow-up calls: The trend comparison table is only meaningful if the previous scores are captured from the YAML before you append the current call's data. Always read the YAML first, extract the prev scores, then score the current call, then update the YAML.
# 1. Find current session mount
SESSION_MOUNT=$(ls -d /sessions/*/mnt 2>/dev/null | head -1)
# 2. Copy the appropriate template to working directory
# First call:
cp "${SESSION_MOUNT}/Claude trackers/Templates/TEMPLATE_MEDDPICC_Coaching_Guide_PLACEHOLDER.docx" \
"/tmp/coaching_guide_temp.docx"
# Follow-up call:
cp "${SESSION_MOUNT}/Claude trackers/Templates/TEMPLATE_MEDDPICC_Coaching_Guide_FOLLOWUP_PLACEHOLDER.docx" \
"/tmp/coaching_guide_temp.docx"
# 3. Unpack the template
python "${SESSION_MOUNT}/.skills/skills/docx/scripts/office/unpack.py" \
/tmp/coaching_guide_temp.docx /tmp/unpacked_coaching/
# 4. Edit /tmp/unpacked_coaching/word/document.xml (see placeholder maps below)
# 5. Pack the updated document
# First call — use original first-call template as the --original reference:
python "${SESSION_MOUNT}/.skills/skills/docx/scripts/office/pack.py" \
/tmp/unpacked_coaching/ /tmp/coaching_guide_temp.docx \
--original "${SESSION_MOUNT}/Claude trackers/Templates/TEMPLATE_MEDDPICC_Coaching_Guide_PLACEHOLDER.docx"
# Follow-up call — use follow-up template as the --original reference:
python "${SESSION_MOUNT}/.skills/skills/docx/scripts/office/pack.py" \
/tmp/unpacked_coaching/ /tmp/coaching_guide_temp.docx \
--original "${SESSION_MOUNT}/Claude trackers/Templates/TEMPLATE_MEDDPICC_Coaching_Guide_FOLLOWUP_PLACEHOLDER.docx"
# 6. Move to final location
mv /tmp/coaching_guide_temp.docx \
"${SESSION_MOUNT}/Claude trackers/POD{n}/Accounts/{AccountName}/Sales/Meetings/{YYYY-MM-DD Meeting Name}/AI Coaching Report - {YYYY-MM-DD} {Meeting Name}.docx"
After step 6, share the computer:// link to the saved file in chat so the user can open and verify it.
The template uses {{PLACEHOLDER}} tokens throughout the XML. Replace every one — leaving any unfilled will produce a broken-looking document. Use the Edit tool to do exact string replacements in the unpacked document.xml.
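The replacement pass can be sketched as below. `fill_placeholders` is a hypothetical helper, not one of the skill's scripts, and it assumes token names are uppercase ASCII with underscores. Replacement values containing `&`, `<`, or `>` must be XML-escaped (e.g. with `xml.sax.saxutils.escape`) before being passed in.

```python
import re

def fill_placeholders(xml_text, values):
    """Replace every {{TOKEN}} in the unpacked document.xml text.

    values maps token name -> replacement string. Raises KeyError on
    a token with no value, since unfilled tokens break the document.
    Caller is responsible for XML-escaping the replacement strings.
    """
    def sub(match):
        token = match.group(1)
        if token not in values:
            raise KeyError("no value for " + match.group(0))
        return values[token]

    return re.sub(r"\{\{([A-Z0-9_]+)\}\}", sub, xml_text)
```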
| Placeholder | What to put there |
|---|---|
| {{ACCOUNT_NAME}} | Account name (e.g. "Acme Corp") |
| {{POD}} | POD number (e.g. "POD 3") |
| {{CE_NAME}} | Client Executive first + last name |
| {{CALL_DATE}} | Full date (e.g. "February 5, 2026") |
| {{CALL_TYPE}} | Meeting type (e.g. "Intro / Discovery", "POC Scoping") |
| {{MEDDPICC_SCORE}} | Overall score as % (e.g. "52%") |
| {{AE_NAME}} | Account Executive first + last name |
| {{VP_TECH_NAME}} | VP Tech / SE first + last name |
| {{CALL_RATING}} | Poor / Fair / Good / Excellent |
| {{CALL_RATING_SUBTITLE}} | Short descriptor (e.g. "Strong Discovery, Demo Scheduled") |
| {{OVERALL_ASSESSMENT}} | 1–2 paragraph narrative — what the call accomplished, key signals, most important next step |
| {{METRICS_SCORE}} | Score % |
| {{METRICS_NOTES}} | One-line observation |
| {{EB_SCORE}} | Score % |
| {{EB_NOTES}} | One-line observation |
| {{DC_SCORE}} | Score % |
| {{DC_NOTES}} | One-line observation |
| {{DP_SCORE}} | Score % |
| {{DP_NOTES}} | One-line observation |
| {{PP_SCORE}} | Score % |
| {{PP_NOTES}} | One-line observation |
| {{IP_SCORE}} | Score % |
| {{IP_NOTES}} | One-line observation |
| {{CHAMPION_SCORE}} | Score % |
| {{CHAMPION_NOTES}} | One-line observation |
| {{COMPETITION_SCORE}} | Score % |
| {{COMPETITION_NOTES}} | One-line observation |
| {{HIGHLIGHT_1}} through {{HIGHLIGHT_8}} | Specific bullet observations from the call |
| {{CE_ROLE_HEADER}} | "Client Executive (Full Name)" |
| {{CE_COACHING_P1}} | What CE did well |
| {{CE_COACHING_P2}} | Key gap / opportunity |
| {{CE_COACHING_P3}} | Concrete action items for next interaction |
| {{AE_ROLE_HEADER}} | "Account Executive (Full Name)" |
| {{AE_COACHING_P1}} | What AE did well |
| {{AE_COACHING_P2}} | Key gap / opportunity |
| {{AE_COACHING_P3}} | Concrete action items |
| {{VP_ROLE_HEADER}} | "VP of Technology (Full Name)" |
| {{VP_COACHING_P1}} | What VP Tech did well |
| {{VP_COACHING_P2}} | Key gap / opportunity |
| {{VP_COACHING_P3}} | Concrete action items |
| {{QUESTION_1}} through {{QUESTION_10}} | Critical questions for next interaction (with MEDDPICC tag already in place) |
| {{CE_NEXTSTEP_1}} through {{CE_NEXTSTEP_4}} | CE action items |
| {{AE_NEXTSTEP_1}} through {{AE_NEXTSTEP_5}} | AE action items |
| {{VP_NEXTSTEP_1}} through {{VP_NEXTSTEP_4}} | VP Tech action items |
| {{QUOTE_1_SPEAKER}} through {{QUOTE_5_SPEAKER}} | Speaker name for each quote box |
| {{QUOTE_1_TEXT}} through {{QUOTE_5_TEXT}} | Exact or near-exact quote (will be rendered in italics) |
| {{QUOTE_1_INSIGHT}} through {{QUOTE_5_INSIGHT}} | 1–2 sentence insight analysis |
Each row in the Service Attach Review table has two tokens: a Status and a Coaching Note.
| Placeholder | Service | Status values | Coaching Note guidance |
|---|---|---|---|
| {{SVC_1_STATUS}} / {{SVC_1_COACHING}} | Fusion X — VMware Exit | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was the VMware licensing cost / migration path raised? Was urgency established? |
| {{SVC_2_STATUS}} / {{SVC_2_COACHING}} | Fusion X — Database / PostgreSQL | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was Oracle/SQL Server licensing surfaced as a pain? Was PostgreSQL migration positioned? |
| {{SVC_3_STATUS}} / {{SVC_3_COACHING}} | Fusion X — Multi-Cloud | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was multi-cloud sprawl or hybrid workload complexity in scope? |
| {{SVC_4_STATUS}} / {{SVC_4_COACHING}} | Fusion X — Upgrade Services | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was platform version currency / upgrade anxiety raised? Blue/green deployment story told? |
| {{SVC_5_STATUS}} / {{SVC_5_COACHING}} | Managed Services | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was managed services surfaced? Did the rep link it to team size / ops burden? |
| {{SVC_6_STATUS}} / {{SVC_6_COACHING}} | GroundWork Monitoring | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was environment visibility / SLA reporting surfaced? "Flying blind" framing used? |
| {{SVC_7_STATUS}} / {{SVC_7_COACHING}} | TurboAI | ✓ Mentioned / ⚠ Weak / ✗ Missed / — N/A | Was GPU utilization or cost-per-token raised? TurboAI ROI story told? |
Use — N/A for any service that is clearly out of scope for this account (e.g., no AI/ML workloads → TurboAI is N/A).
Important: If a call has fewer highlights, questions, next steps, or quotes than the template allows, replace the unused placeholders with a single space " " rather than leaving them empty — empty <w:t> elements can cause rendering issues.
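A final sweep over the XML can enforce the single-space rule after every known value has been filled. `blank_unused` is a hypothetical helper, shown only to illustrate the rule.

```python
import re

def blank_unused(xml_text):
    """Swap any still-unfilled {{TOKEN}} for a single space so that
    no <w:t> element ends up empty after packing."""
    return re.sub(r"\{\{[A-Z0-9_]+\}\}", " ", xml_text)
```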
The follow-up template has all the same placeholders as the first-call template except the MEDDPICC table. Instead of {{X_SCORE}} and {{X_NOTES}} in Section 2, the follow-up template uses a 4-column trend table with the following additional placeholders:
| Placeholder | What to put there |
|---|---|
| {{PREV_CALL_DATE}} | Date of the previous call — used as the column header (e.g. "Jan 22, 2026") |
| {{CURR_CALL_DATE}} | Date of the current call — used as the column header (e.g. "Feb 4, 2026") |
| {{METRICS_PREV_SCORE}} | Metrics score from the previous call (read from YAML before updating) |
| {{METRICS_CURR_SCORE}} | Metrics score from the current call |
| {{METRICS_TREND}} | Trend indicator: ↑ +10%, ↓ -5%, or → No change |
| {{EB_PREV_SCORE}} | EB score from previous call |
| {{EB_CURR_SCORE}} | EB score from current call |
| {{EB_TREND}} | Trend indicator |
| {{DC_PREV_SCORE}} | DC score from previous call |
| {{DC_CURR_SCORE}} | DC score from current call |
| {{DC_TREND}} | Trend indicator |
| {{DP_PREV_SCORE}} | DP score from previous call |
| {{DP_CURR_SCORE}} | DP score from current call |
| {{DP_TREND}} | Trend indicator |
| {{PP_PREV_SCORE}} | PP score from previous call |
| {{PP_CURR_SCORE}} | PP score from current call |
| {{PP_TREND}} | Trend indicator |
| {{IP_PREV_SCORE}} | IP score from previous call |
| {{IP_CURR_SCORE}} | IP score from current call |
| {{IP_TREND}} | Trend indicator |
| {{CHAMPION_PREV_SCORE}} | Champion score from previous call |
| {{CHAMPION_CURR_SCORE}} | Champion score from current call |
| {{CHAMPION_TREND}} | Trend indicator |
| {{COMPETITION_PREV_SCORE}} | Competition score from previous call |
| {{COMPETITION_CURR_SCORE}} | Competition score from current call |
| {{COMPETITION_TREND}} | Trend indicator |
Trend format rules:
- ↑ +{delta}% (e.g. ↑ +10%)
- ↓ -{delta}% (e.g. ↓ -5%)
- → No change

Also note: The follow-up template retains {{EB_SCORE}}, {{DP_SCORE}}, and {{PP_SCORE}} in the AE coaching section body text (not the table). Fill these with the current call's scores — same values as {{EB_CURR_SCORE}}, {{DP_CURR_SCORE}}, {{PP_CURR_SCORE}}.
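The trend cell can be derived mechanically from the two score strings. This is a hypothetical helper assuming whole-percent scores like "45%".

```python
def trend(prev, curr):
    """Format a trend cell from score strings like '45%'."""
    delta = int(curr.rstrip("%")) - int(prev.rstrip("%"))
    if delta > 0:
        return f"↑ +{delta}%"
    if delta < 0:
        return f"↓ {delta}%"  # delta already carries the minus sign
    return "→ No change"
```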
Read meddpicc_analysis.yaml before updating it and extract the score field from the most recent call_date entry for each element. Example:
import yaml

with open('meddpicc_analysis.yaml', 'r') as f:
    data = yaml.safe_load(f)

# The YAML has a list of call entries; get the last one
last_call = data[-1]  # or however your YAML is structured
prev_metrics = last_call['meddpicc']['metrics']['score']  # e.g. "45%"
prev_call_date = last_call['call_date']  # e.g. "2026-01-22"
After extracting the previous scores, proceed to score the current call, then update the YAML as normal.
The follow-up template also includes the Service Attach Review section. Use the same {{SVC_1_STATUS}} through {{SVC_7_COACHING}} placeholders documented in the first-call section above.
The output is called "AI MEDDPICC Coaching Report" — not "Coaching Guide". Subtitle: Role-Specific Call Analysis & Coaching Guide.
It contains exactly 7 sections in this order:
A 4-row, 4-column table at the top. No background fill. Replace all placeholder values:
| Account: | {AccountName} | POD: | POD {n} |
| Rep: | {Name} ({CE}) | Call Date: | {Date} |
| Call Type: | {Type} | MEDDPICC Score: | {Score%} |
| AE: | {Name} | VP Tech: | {Name} |
Call Rating: {Poor/Fair/Good/Excellent} (bold, on its own line)
Followed by 1–2 narrative paragraphs covering:
For follow-up calls: explicitly call out which MEDDPICC elements improved, regressed, or stalled since the previous call. Reference the trend data.
Be direct and specific — not generic. This is a coaching document, not a summary.
First call template — A 3-column table: Element | Score | Notes
Follow-up template — A 4-column table: Element | {Prev Date} | {Curr Date} | Trend
Header row: dark navy fill (1B2A4A), white bold text.
Data rows: alternating white (FFFFFF) / light gray (F0F2F6).
Notes (first-call template only) should be specific observations — include what was identified AND what's missing.
A bullet list of 6–10 specific, factual observations from the call — things that happened, were said, or were surfaced. Not generic praise.
Open with an italicized intro line: "Individualized guidance tailored to each attendee's role and area of accountability."
Then one table per role. Each table is a single-column box with:
Role box colors (cell background fill):
| Role | Fill Color |
|---|---|
| Client Executive (CE) | EBF5FB (light blue) |
| Account Executive (AE) | F0FAF0 (light green) |
| VP of Technology (SE/VP Tech) | FEF9E7 (light yellow) |
Address each person by name. Reference specific things they said or did on the call.
Role focus areas by default:
A numbered list of 8–10 questions, each prefixed with its MEDDPICC tag in bold:
Questions should be specific to this account — reference actual details from the transcript. Each question should be phrased as something the rep could say verbatim on the next call.
Per-role section with a bold header and bulleted action items:
Client Executive ({Name})
Account Executive ({Name})
VP of Technology ({Name})
Keep to 3–5 bullets per role. Tied to the MEDDPICC gaps identified earlier.
One table per notable quote. Background fill: F5F6F8 (light gray). Format:
{Speaker Name} (bold)
"{Exact or near-exact quote from transcript}" (italic, smart quotes)
Insight: {1–2 sentence analysis of why this quote matters for the deal} (normal weight)
Include 3–6 quotes. Choose quotes that reveal pain, competitive signals, decision-making dynamics, buying signals, or relationship context.
SESSION_MOUNT=$(ls -d /sessions/*/mnt 2>/dev/null | head -1)
# Then read:
"${SESSION_MOUNT}/Claude trackers/knowledge-base/objections.yaml"
Go through every OBJ entry in the file. For each one, check whether any signal matching its description appeared in the transcript. List every single OBJ — do not skip any.
OBJECTION SCAN:
- OBJ-001 (Budget Pressure): MATCH / NO MATCH — "[quote if match]"
- OBJ-002 (Value Not Demonstrated): MATCH / NO MATCH
- OBJ-003 (Executive Change): MATCH / NO MATCH
- OBJ-004 (Competitive Pressure at Renewal): MATCH / NO MATCH
- OBJ-005 (Open Ticket Frustration): MATCH / NO MATCH
- OBJ-006 (Escalation Threat): MATCH / NO MATCH
- OBJ-007 (Tech Debt Concern): MATCH / NO MATCH
- OBJ-008 (Endpoint Security Conflict): MATCH / NO MATCH
- OBJ-009 (Knowledge Concentration Risk): MATCH / NO MATCH
- OBJ-010 (Missing Feature): MATCH / NO MATCH
- OBJ-011 (Roadmap Uncertainty): MATCH / NO MATCH
- OBJ-012 (Competitor Feature Comparison): MATCH / NO MATCH
- OBJ-013 (International Deployment): MATCH / NO MATCH
- OBJ-014 (Terraform / IaC Support Gap): MATCH / NO MATCH
- OBJ-015 (Compliance / Certification Gate): MATCH / NO MATCH
- OBJ-016 (Not Ready to Expand): MATCH / NO MATCH
- OBJ-017 (Budget Not Available for Expansion): MATCH / NO MATCH
- OBJ-018 (Use Case Unclear): MATCH / NO MATCH
- OBJ-019 (Price Too High): MATCH / NO MATCH
- OBJ-020 (Contract Terms): MATCH / NO MATCH
- OBJ-021 (Competitive Price Comparison): MATCH / NO MATCH
- OBJ-022 (Hardware Cost Pressure / Supply Chain): MATCH / NO MATCH
- OBJ-023 (Deal Registration Dispute): MATCH / NO MATCH
- OBJ-024 (Partner Enablement Gap): MATCH / NO MATCH
- OBJ-025 (Channel Conflict): MATCH / NO MATCH
- OBJ-026 (Rep Activation Without Closed Deal): MATCH / NO MATCH
- OBJ-027 (Executive Commitment to Incumbent): MATCH / NO MATCH
- OBJ-028 (VMware Migration Risk): MATCH / NO MATCH
- OBJ-029 (Migration Timeline): MATCH / NO MATCH
- OBJ-030 (VMware Tools Dependency): MATCH / NO MATCH
- OBJ-031 (Broadcom Price Shock): MATCH / NO MATCH — "[quote if match]"
- OBJ-032 (OpenShift as Default Alternative): MATCH / NO MATCH
- OBJ-033 (Oracle Replatforming Risk): MATCH / NO MATCH
- OBJ-034 (Oracle Performance Parity): MATCH / NO MATCH
- OBJ-035 (Oracle License Audit Fear): MATCH / NO MATCH
- OBJ-036 (Kubernetes Expertise Gap): MATCH / NO MATCH — "[quote if match]"
- OBJ-037 (Post-Purchase Operationalization): MATCH / NO MATCH
- OBJ-038 (Staffing / Headcount Concern): MATCH / NO MATCH
- OBJ-039 (Change Management / Learning Curve): MATCH / NO MATCH
- OBJ-040 (Vendor Onboarding Timeline): MATCH / NO MATCH
- OBJ-041 (Acquisition / Corporate IT Takeover): MATCH / NO MATCH
- OBJ-042 (No Platform Visibility / Monitoring Gap): MATCH / NO MATCH — "[quote if match]"
- OBJ-043 (AI Cost Overrun / GPU Underutilization): MATCH / NO MATCH — "[quote if match]"
- OBJ-044 (Upgrade Disruption / Version Sprawl): MATCH / NO MATCH — "[quote if match]"
For each MATCH: look up the OBJ's addressed_by field → read the full REB entry → apply coaching state (Step 3).
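The scan lines above can be formatted consistently with a small helper. `scan_line` is hypothetical, and the `id`/`name` field names are assumptions about the structure of objections.yaml entries.

```python
def scan_line(obj, matched, quote=None):
    """Format one OBJECTION SCAN line.

    obj: one entry from objections.yaml; the 'id' and 'name'
    field names are assumptions about that file's structure.
    """
    verdict = "MATCH" if matched else "NO MATCH"
    line = f"- {obj['id']} ({obj['name']}): {verdict}"
    if matched and quote:
        line += f' — "{quote}"'
    return line
```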
This is separate from objections. Check whether the rep proactively told any of the known win stories, AND whether there was an opening for stories that were NOT told. Run this for all 8 rebuttals on every call.
WIN STORY SCAN:
REB-001 (FPL Resident Engineer / Managed Services):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO — "[rep quote if yes]"
- State: [1-Handled well / 3-Landed weak / 4-Opening missed / N/A]
REB-002 (Third Option — Not Broadcom, Not OpenShift):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
REB-003 (Server Consolidation ROI — 10→5 Servers, $1.1M):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
REB-004 (Diamanti as VMware Complement):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
REB-005 (First Deal as Partner Activation Catalyst):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
REB-006 (GroundWork Monitoring Deploy):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO — "[rep quote if yes]"
- State: [1-Handled well / 3-Landed weak / 4-Opening missed / N/A]
REB-007 (TurboAI — AI Cost / GPU Efficiency):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
REB-008 (Managed Upgrade & Lifecycle Services):
- Opening present? YES / NO — "[signal from transcript if yes]"
- Story told by rep? YES / NO
- State: [1 / 3 / 4 / N/A]
When REB-001 (FPL Resident Engineer / Managed Services) is relevant (any state other than N/A), use the specific language below in the coaching sections. Do not rely on the generic framing field alone — adapt these scripts to the customer's actual words from the transcript.
State 1 — Story told, customer engaged: Validate in CE Coaching P1, then coach forward on commitment extraction:
"Good instinct to bring up the FPL managed services model. To make it land harder on the next call, push past any 'that's interesting' response — ask directly: 'Does the model of embedding an engineer rather than hiring headcount make structural sense for how [customer] is organized? Is this a CapEx conversation we should be setting up with your finance team?' Get them to a yes or no. Don't let it sit as a nice anecdote."
State 3 — Story told, no next step generated: Use in CE Coaching P3 as the primary action item:
"The FPL story came up but didn't produce a commitment. On the follow-up, don't retell it — ask directly: 'When I mentioned that FPL embedded a Diamanti engineer instead of hiring five people — does that model make sense for how [customer] is structured? Is this a CapEx conversation we should be having with your finance team?' Force a yes or no. If yes, put it on the action plan with a finance stakeholder involved. If no, ask: 'What is your current plan for managing this after go-live?' That answer will tell you what the real blocker is."
State 4 — Opening present, story never told: Identify the specific signal from the transcript. Use in CE Coaching P2 as a flagged missed opportunity. Write the coaching like this:
"[Contact name] said '[exact signal from transcript — e.g., we don't have that in-depth Kubernetes knowledge].' That was a direct opening for the FPL Resident Engineer story. On the next call, lead with it conversationally — not in slides: 'Let me tell you how Florida Power & Light handled that exact situation. They were planning to hire five engineers to manage their Kubernetes environment. Instead, they embedded two Diamanti resident engineers — capitalized into the hardware purchase over five years as CapEx. That's how they justified it internally. What does your current plan look like for managing this after it's deployed?' Then stop and let them answer."
If the account is a utility, government, or regulated industry (any state): Add this to the coaching regardless of which state applied:
"[Account] is a [utility/government/regulated] account. Lean into the CapEx structure specifically — 'embedded in the hardware purchase, depreciated over five years' is the framing finance departments in this sector respond to. If there is any interest at all, the next step should include a finance/budget conversation, not just a technical follow-up."
When REB-006 (GroundWork Monitoring) is relevant, adapt these scripts:
State 1 — Story told, customer engaged: Validate in CE Coaching P1, then coach on making it a proposal line item:
"Good move surfacing the GroundWork monitoring story. Don't let it stay conceptual — on the next call, make it concrete: 'Let's scope what a monitoring deployment looks like for your environment. I want to show you what the SLA dashboard looks like for an account your size.' If it's not in the proposal as a line item, it won't close."
State 3 — Story told, no next step generated: Use in CE Coaching P3:
"GroundWork came up but didn't produce a commitment. Next call, ask directly: 'What are you currently using to monitor the environment, and does it cover AI workload observability and uptime SLA reporting?' If they can't say yes to both, follow with: 'What does a missed SLA cost you internally?' If there's a number, GroundWork pays for itself in one incident — make them do that math."
State 4 — Opening present, story never told: Use in CE Coaching P2 as a flagged missed opportunity:
"[Contact] said '[exact signal — e.g., we find out about problems when users call us].' That's the opening for GroundWork. Next call, lead with: 'Right now you're flying blind — you'll find out something is wrong when your users call you. GroundWork is how you get in front of that: SLA dashboards, uptime reporting, audit-ready exports. It's also how you prove to your CFO that the platform is working.' Then ask: 'How are you currently handling that today?' Stop and let them answer."
When REB-007 (TurboAI) is relevant, adapt these scripts:
State 1 — Story told, customer engaged: Validate and coach on getting a baseline number on record:
"Good instinct to bring up TurboAI. Cement it by getting a baseline number before the next call: 'Can you pull your current GPU utilization rate and your approximate cost-per-token?' Once you have those numbers, the ROI case writes itself — and you'll have a benchmark to measure improvement against."
State 3 — Story told, no next step generated: Use in CE Coaching P3:
"TurboAI landed flat. Don't pitch it again — ask for the number instead: 'What is your current GPU utilization rate, and what are you paying per token?' If they can't answer both, that's the problem statement. If they can, note the numbers and revisit at the next QBR when they've drifted — and they will."
State 4 — Opening present, story never told: Use in CE Coaching P2:
"[Contact] mentioned [exact signal — e.g., GPU costs are rising, AI cloud spend is out of control]. That's the TurboAI opening. Next call, ask two questions before pitching anything: 'What is your current GPU utilization rate?' and 'What is your cost-per-token?' If they can't answer both, you just found the problem. Then: 'TurboAI is how we fix that — compression to lower cost-per-token, plus monitoring to prove what you're getting.' Every GPU conversation must include TurboAI."
When REB-008 (Managed Upgrade & Lifecycle Services) is relevant, adapt these scripts:
State 1 — Story told, customer engaged: Validate and coach on converting to a recurring line item:
"Good move surfacing managed upgrade services. The next step is making it recurring — not a one-time engagement. On the follow-up: 'Let's talk about what a lifecycle management agreement looks like so you're never in a position of betting the platform on a change window again.' Get it scoped as an ongoing service, not a project."
State 3 — Story told, no next step generated: Use in CE Coaching P3:
"Upgrade services came up but didn't convert. Ask them to quantify the cost of the problem: 'How many unplanned change windows did you have last year, and what was the business impact?' Most teams don't track this. Help them estimate it — one significant downtime event typically costs more than a full year of managed upgrade services. Make them do that math."
State 4 — Opening present, story never told: Use in CE Coaching P2:
"[Contact] mentioned [exact signal — e.g., they've been putting off upgrades, last upgrade caused an outage, they're two versions behind]. That was the managed upgrade services opening. Next call, ask directly: 'When was the last time you upgraded, and how did it go?' Whatever they say, follow with: 'We have a managed service for exactly this — blue/green deployment with automatic rollback, so you're never betting the platform on a change window. Let me scope that as a line item in the next proposal.'"
- REB-001 (Managed Services) must be evaluated on every prospect and check-in call — even if no explicit objection was raised. Customers rarely volunteer that they're worried about headcount; the rep must surface it proactively.
- REB-006 (GroundWork Monitoring) must be evaluated on every prospect call where deployment is in scope or near complete — monitoring should be in the proposal before the deal closes, not retrofitted afterward.
- REB-007 (TurboAI) must be evaluated on every call where GPU, AI/ML, or cloud cost is mentioned — no GPU conversation ends without TurboAI on the table.
- REB-008 (Managed Upgrade Services) must be evaluated on every check-in and follow-up call — version currency and upgrade anxiety surface late in the relationship; the rep must ask proactively.
Do NOT skip this step. Do NOT proceed to coaching sections without completing both 2A and 2B.
| State | Condition | Action |
|---|---|---|
| 1 — Handled well | Objection raised + rep responded effectively, OR win story told + customer engaged | Note in CE coaching P1. If it matches a known rebuttal, flag for KB update. |
| 2 — Handled poorly | Objection raised + rep's response was weak or absent | Surface the matching REB framing in CE coaching P2/P3. Recommend using it on the next call. |
| 3 — Story landed, no next step | Rep told a win story but customer reaction was neutral or no commitment generated | Use the REB coaching_if_landed_weak in CE coaching P3. Coach on converting story to a concrete next step or question. |
| 4 — Opening missed | A signal appeared (OBJ description matched OR REB trigger present) but story was never told | Use REB coaching_if_missed in CE coaching P2. Flag explicitly as a missed opportunity. |
For states 2, 3, and 4: quote the exact REB framing field in the coaching section so the rep has the exact language to use.
Each call is scored based on what was covered in that specific call, not cumulatively. Score per element reflects how well that element was qualified in this interaction.
| Call Type | Typical Overall Range |
|---|---|
| Discovery / Demo | 60–85% |
| Technical scoping | 45–65% |
| Business review | 70–90% |
Scores naturally vary by call type — lower technical call scores don't mean a bad call. The coaching text should explain what happened and why.
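If a single number is needed for {{MEDDPICC_SCORE}}, one option is a simple mean of the eight element scores. The guide does not specify the aggregation formula, so treat this sketch as an assumption.

```python
def overall_score(element_scores):
    """Overall MEDDPICC score from the eight element score strings.

    Uses a simple mean; the aggregation formula is an assumption,
    not something the guide specifies.
    """
    nums = [int(s.rstrip("%")) for s in element_scores]
    return f"{round(sum(nums) / len(nums))}%"
```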
For 2nd+ calls on an opportunity, draw on the YAML files to identify:
Call this out specifically in Section 1 (Overall Assessment) and the role coaching boxes. Don't just describe the current call in isolation.
When editing XML directly:
Bullet lists — never use unicode bullets:
new Paragraph({
  numbering: { reference: "bullets", level: 0 },
  children: [new TextRun("Bullet text")]
})
Table fills — use ShadingType.CLEAR:
new TableCell({
  // illustrative shape per the docx library's shading option
  shading: { type: ShadingType.CLEAR, fill: "F0F2F6" },
  children: [new Paragraph("Cell text")]
})