Guide a sales engagement through the marcov.BEAM evidence-gated sales lifecycle. Use when the user wants to develop, progress, or review a sales opportunity using the BEAM framework (Bayesian Evidence-Advancing Markov). Integrates with b2b-research-agent output to form scope, build engagement strategy, and advance deals stage by stage. Supports saving and resuming progress across sessions.
Guide sales engagements through a structured, evidence-gated lifecycle. This skill implements the marcov.BEAM (Bayesian Evidence-Advancing Markov) framework — a 6-stage sales pipeline where advancement requires earning the right through demonstrated evidence at each gate.
This skill helps you apply each element of the BEAM acronym:
| Letter | Meaning | Application |
|---|---|---|
| B | Bayesian | Update your probability of winning as new evidence arrives |
| E | Evidence | Every stage advancement requires concrete evidence, not gut feel |
| A | Advancing | Forward momentum only when evidence earns it |
| M | Markov | Current state determines next actions — history informs but doesn't constrain |
This skill accepts a company name or deal name as its primary input. It builds on the output from the b2b-research-agent skill, but can also start from scratch.
/beam-selling Acme Corp
/beam-selling Resume Acme Corp engagement
/beam-selling Advance BHP to Stage 3
/beam-selling Review gate criteria for GeelongPort
/beam-selling Show all active engagements
If a b2b-research-agent dossier exists for the target company, automatically ingest it as the foundation for Stage 1 (Qualify). Look for:
- `.beam/` directory in the current working directory

[1. Qualify] → [2. Diagnose] → [3. Align] → [4. Propose] → [5. Commit] → [6. Deliver]
Purpose: Establish whether a conversation is worth having.
SPIN Focus: Situation
Activities:
Gate Criteria (all must be met — assessed by the skill, not self-certified):
Right to Advance: Prospect has agreed to a discovery conversation.
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Problem domain identified | User has articulated a specific problem area (not a vague category), explained why their offering addresses it, and provided a plausible ICP fit rationale | User gives a one-line generic statement like "they need digital transformation" with no specifics |
| Access to authority | User can name a specific person (name + title/role) who has authority or influence, and explain the relationship or how access was obtained | User says "I'll find someone" or names a role without a person |
| Willing to diagnose | User provides evidence of explicit agreement to a discovery conversation — a meeting booked, an email accepting, a verbal yes with date. Not an assumption of willingness | User assumes willingness ("they'll probably agree") or conflates a marketing interaction with a sales conversation |
Key Questions to Ask the User:
Evidence to Document:
Purpose: Uncover whether real pain exists and what's driving it.
SPIN Focus: Problem → Implication
Activities:
Gate Criteria (all must be met — assessed by the skill, not self-certified):
Right to Advance: The prospect says "This is a problem we need to address."
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Problems articulated | User provides at least 2 distinct problems stated by the prospect, ideally with direct quotes or close paraphrases. The problems must be specific and operational, not generic buzzwords | User lists problems they assume the prospect has, or provides only one vague issue |
| Implications quantified | User provides at least one concrete metric — a dollar figure, time cost, risk exposure percentage, headcount impact, or similar. The metric must be tied to a specific problem, not a general industry stat | User says "it's costing them a lot" or only provides qualitative impact without numbers |
| Cost of inaction acknowledged | User provides evidence the prospect themselves acknowledged what happens if they do nothing — a quote, a described reaction, or an email excerpt. This cannot be the seller's assertion about the prospect's situation | User says "they know it's a problem" without evidence of the prospect actually saying so |
SPIN Questions to Prepare:
Problem Questions (uncover difficulties):
Implication Questions (develop the severity):
Evidence to Document:
Purpose: Confirm shared understanding of the problem and what success looks like.
SPIN Focus: Implication → Need-payoff
Activities:
Gate Criteria (all must be met — assessed by the skill, not self-certified):
Right to Advance: The prospect asks "What would you recommend?"
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Problem definition agreed | User provides a written problem statement that the prospect has seen and confirmed — shared in a meeting, sent via email, or documented in meeting notes. Both parties must have explicitly agreed, not just the seller | User wrote a problem statement but hasn't confirmed the prospect agrees with it |
| Success criteria defined | User provides at least 2 measurable success criteria with specific targets (numbers, dates, percentages). "Improve efficiency" is not measurable; "Reduce unplanned downtime from 40% to <10% within 6 months" is | User lists vague outcomes ("better performance", "happier team") without measurable targets |
| Stakeholder landscape mapped | User has identified at least 3 stakeholders by name with their buying role (economic buyer, technical evaluator, champion, or gatekeeper) and current attitude (supporter/neutral/sceptic/blocker) | User names only one contact or lists roles without names or attitudes |
| Decision process understood | User can describe the specific steps to get to a decision — who approves, what the timeline is, whether procurement is involved, and what the budget situation looks like. Must include at least a target decision date | User says "they'll decide soon" or can only describe the process in vague terms |
SPIN Questions to Prepare:
Implication Questions (reinforce urgency):
Need-payoff Questions (let them sell themselves):
Evidence to Document:
Purpose: Present a tailored recommendation built from diagnostic findings.
SPIN Focus: Need-payoff
Activities:
Gate Criteria (all must be met — assessed by the skill, not self-certified):
Right to Advance: Prospect confirms intent to decide (not necessarily "yes" — but committed to making a decision).
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Proposal addresses needs | The skill cross-references the proposal content against every problem and success criterion documented in Stages 2–3. Every diagnosed problem must have a corresponding recommendation. The skill will flag gaps | Proposal was sent but the skill cannot verify it maps back to documented problems (user must provide the proposal content or summary) |
| Commercial terms clear | User has documented specific pricing (dollar amount or rate), scope boundaries (what's in, what's out), and timeline (start date, duration, milestones). The prospect must have received these terms | User says "we discussed pricing" without providing the actual figures |
| Decision process confirmed | User provides an updated and specific decision timeline — who decides, by when, what steps remain. This must reflect the current state, not what was discussed in Stage 3 | Timeline is recycled from Stage 3 without confirmation it still holds, or user says "they're thinking about it" |
| Objections addressed | User has documented at least the top objections raised by the prospect (or explicitly confirmed none were raised) and described how each was handled. The skill will assess whether the responses are substantive | User says "no objections" without evidence of having actually asked, or lists objections without responses |
Proposal Structure (use the proposal-template.md reference):
Evidence to Document:
Purpose: Secure formal agreement and transition to delivery.
SPIN Focus: Validation
Activities:
Gate Criteria (all must be met — assessed by the skill, not self-certified):
Right to Advance: Contract signed.
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Commitment received | User provides a specific commitment artefact — a signed LOI, a verbal "yes" with the name/date/context of who said it, an email confirmation, or a PO number. Must include who committed and when | User says "they're going ahead" without identifying who confirmed and how |
| Commercial terms accepted | User provides the final agreed terms — price, scope, timeline, payment terms. These must be the accepted terms, not the proposed terms. Any negotiated changes from the proposal must be documented | User recycles the proposal terms without confirming the prospect formally accepted them |
| Delivery handover initiated | User confirms the delivery team has been introduced to the prospect — names who was introduced, when, and the prospect's response. The delivery team must know the engagement context | User says "we'll introduce the team soon" — the introduction must have happened, not be planned |
Evidence to Document:
Purpose: Execute, demonstrate value, and earn the right to the next engagement.
SPIN Focus: Continuous Discovery
Activities:
Gate Criteria (ongoing — assessed by the skill, not self-certified):
Right to Advance: New opportunity surfaces → cycle back to Stage 1.
Minimum Evidence Bar (what the skill needs to see before it will accept each gate):
| Gate | The skill will accept this when... | The skill will reject this if... |
|---|---|---|
| Outcomes delivered | User maps actual results against the measurable success criteria defined in Stage 3 — with specific numbers, dates, or metrics showing targets were met or exceeded | User says "they're happy" without demonstrating outcomes against the agreed criteria |
| Relationship maintained | User provides evidence of regular contact — meeting dates, check-in summaries, satisfaction indicators. Must show a pattern, not a single data point | User last spoke to the prospect months ago or has no documented check-ins |
| Future needs surfaced | User has identified at least one specific new need or opportunity through ongoing SPIN discovery, with enough detail to evaluate whether it warrants a new Stage 1 qualification | User says "there might be more work" without a specific need articulated |
| Case study potential | User has evidence the prospect is open to being a reference or has agreed to participate in a case study — or has explicitly documented why not. Must be based on a conversation, not an assumption | User assumes the prospect will be a reference without having asked |
Evidence to Document:
Sales engagements take weeks or months. This skill persists state locally so you can clear context and resume later.
All engagement state is saved to a .beam/ hidden directory in the current working directory:
.beam/
├── engagements/
│ ├── acme-corp.json # One file per engagement
│ ├── acme-corp-kanban.html # Auto-generated kanban dashboard (open in browser)
│ ├── bhp-mining.json
│ ├── bhp-mining-kanban.html
│ ├── geelong-port.json
│ └── geelong-port-kanban.html
├── sessions/
│ ├── acme-corp-2026-02-22.md # Session log per interaction
│ └── acme-corp-2026-02-25.md
└── config.json # User's company info (cached)
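A minimal sketch of bootstrapping this layout (the helper name `ensure_beam_layout` is hypothetical — the skill prescribes the structure, not an implementation):

```python
from pathlib import Path

def ensure_beam_layout(root: str) -> Path:
    """Create the .beam/ state directory structure if it does not exist."""
    beam = Path(root) / ".beam"
    (beam / "engagements").mkdir(parents=True, exist_ok=True)
    (beam / "sessions").mkdir(parents=True, exist_ok=True)
    config = beam / "config.json"
    if not config.exists():
        # Placeholder until the user's company info is cached
        config.write_text("{}")
    return beam
```

Using `exist_ok=True` makes the call idempotent, so it can run safely at the start of every session.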
At the end of EVERY interaction — no exceptions — automatically dump the following to .beam/:
Update the engagement JSON (.beam/engagements/<company>.json) with:
- `activity_log` with a timestamped entry for this session
- `kanban` object with any new, changed, or completed cards

Regenerate the kanban dashboard (`.beam/engagements/<company>-kanban.html`):

- Use the `references/kanban-board.html` template
- Inject the `BEAM_DATA` JavaScript variable into the HTML
- Write the result to `.beam/engagements/<company>-kanban.html`

Write a session log (`.beam/sessions/<company>-<date>.md`) containing:
# Session Log — [Company Name] — [Date]
## What We Covered
- [Summary of what was discussed/worked on this session]
## Key Learnings
- [Learning 1 — what was discovered or confirmed]
- [Learning 2]
- [Learning 3]
## Evidence Collected
- [Evidence item with source and date]
## Gate Progress
- Stage [N]: [Gate criterion] — [MET/UNMET] — [evidence summary]
## Decisions Made
- [Decision 1]
- [Decision 2]
## Next Steps (Pick Up Here)
1. [Specific action — what to do first in the next session]
2. [Second priority action]
3. [Third priority action]
## Open Questions
- [Question that needs answering before progressing]
## Current State
- Stage: [N] — [Name]
- Win Probability: [X%]
- Gates Met: [X/Y]
- Days in Stage: [N]
--- Session saved to .beam/ ---
Kanban board updated: .beam/engagements/<company>-kanban.html
To resume next time, run: /beam-selling [Company Name]
This auto-dump happens regardless of whether the user asks to save. It is the skill's responsibility to ensure no work is lost between sessions.
Automatically save state after every meaningful interaction:
To save manually: The user says "save" or "save progress" at any time.
Each engagement is saved as a JSON file (see references/beam-state-template.json). The state includes:
When the user invokes the skill:
- Check `.beam/engagements/` for a matching state file
- Load session history from `.beam/sessions/`

After displaying the resume summary, you MUST ask the user for updates before proceeding. Sales engagements happen in the real world between sessions — meetings occur, emails are exchanged, decisions are made. The skill needs to capture what happened offline.
Ask the following:
--- Since Last Session ([Date]) ---
Before we continue, let me check in on what's happened since we last worked on this:
1. Have you had any meetings or calls with [Company] since [last session date]?
- If yes: Who did you meet? What was discussed? Any key takeaways?
2. Have any of the next steps from last session been completed?
[List the next steps from the last session]
3. Has anything changed?
- New contacts or stakeholders?
- Timeline shifts?
- Budget changes?
- Competitor activity?
- Internal changes at the prospect?
4. Any new evidence or insights to capture?
- Quotes from the prospect
- Documents shared (proposals, RFPs, specs)
- Decisions made
Take your time — the more I know, the better I can help you advance this deal.
After the user responds, update the engagement state with any new information before continuing with the planned work. This ensures the .beam/ state file always reflects reality, not just what happened in Claude sessions.
When resuming, display this summary:
=== marcov.BEAM — Resuming Engagement ===
Company: [Name]
Deal: [Deal name]
Current Stage: [Stage number and name]
Win Probability: [X%]
Last Session: [Date — N days ago]
Days in Stage: [N days]
Days Active: [N days total]
--- Stage Progress ---
[1] Qualify ████████████ COMPLETE
[2] Diagnose ████████░░░░ IN PROGRESS (2/3 gates met)
[3] Align ░░░░░░░░░░░░ NOT STARTED
[4] Propose ░░░░░░░░░░░░ NOT STARTED
[5] Commit ░░░░░░░░░░░░ NOT STARTED
[6] Deliver ░░░░░░░░░░░░ NOT STARTED
--- Outstanding Gates (Stage 2: Diagnose) ---
[x] Specific problems articulated
[x] Implications quantified
[ ] Cost of inaction acknowledged
--- Last Session Learnings ---
- [Key learning from previous session]
- [Key learning from previous session]
--- Next Steps (from last session) ---
1. [Action item carried forward]
2. [Suggested next action based on current stage]
--- Open Questions ---
- [Unresolved question from last session]
--- Since Last Session Check-In ---
[Asks the user about updates — see Resume Check-In above]
This is the core interactive workflow. The skill guides the user through each stage sequentially, never skipping ahead.
- Check `.beam/engagements/` for existing state

If `.beam/config.json` doesn't exist, ask:
Save to .beam/config.json so this is only asked once.
For each stage, the skill:
Before advancing to the next stage, the skill independently assesses whether the evidence meets the bar. The user does not self-certify.
The skill will challenge the user. If the user says "just advance me" or "I know it's fine", the skill responds:
I can't advance this engagement without sufficient evidence — that's the BEAM commitment.
Here's specifically what I need to see before I can pass this gate:
[specific gap description]
This isn't bureaucracy — it's protecting you from advancing a deal that isn't ready.
Let's work on closing these gaps. What can you tell me about [specific question]?
Gate Assessment Display Format:
=== Gate Review — Stage [N]: [Name] ===
[PASS] Problem domain identified
✓ Identified: [specific problem domain]
✓ ICP fit rationale provided
✓ Offering-to-problem link clear
[INSUFFICIENT] Access to authority
✗ Named contact: Jane Smith (CTO) — BUT
✗ No evidence of actual access or relationship
→ What is your relationship with Jane? Have you spoken directly?
[MISSING] Willing to have diagnostic conversation
✗ No evidence provided
→ Has the prospect agreed to a discovery call? When? How?
--- Verdict: BLOCKED — 1/3 gates passed ---
I need more information on 2 gates before I can advance this engagement.
Let's start with the most critical gap: [specific question]
When advancing:
At key milestones, generate deliverables:
| Stage | Deliverable |
|---|---|
| After Stage 1 | Qualification summary and discovery call agenda |
| After Stage 2 | Diagnostic findings report |
| After Stage 3 | Alignment memo (shared problem definition and success criteria) |
| After Stage 4 | Formal proposal (from proposal-template.md) |
| After Stage 5 | Engagement kickoff brief |
| After Stage 6 | Outcomes report and case study draft |
Update the win probability at each stage transition. This is a rough Bayesian prior, not a precise calculation.
| Stage | Base Win Probability | Reasoning |
|---|---|---|
| Stage 1: Qualify | 10% | Most qualified leads don't convert |
| Stage 2: Diagnose | 25% | Real pain confirmed increases odds |
| Stage 3: Align | 45% | Shared understanding is a strong signal |
| Stage 4: Propose | 60% | Proposal requested = serious interest |
| Stage 5: Commit | 80% | Intent to decide = high confidence |
| Stage 6: Deliver | 95% | Contract signed, execution underway |
Adjust the base rate up or down based on evidence:
| Signal | Modifier |
|---|---|
| Champion identified and active | +10% |
| Economic buyer engaged directly | +10% |
| Competitor incumbent with long contract | -15% |
| Budget confirmed and allocated | +15% |
| Multiple stakeholders aligned | +10% |
| No clear decision timeline | -10% |
| Prospect initiated contact (inbound) | +10% |
| Procurement process required | -5% |
| Strong prior relationship | +10% |
| Political or organisational change underway | -10% |
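The base rates and modifiers above compose into a simple adjusted estimate. A sketch, assuming modifiers sum linearly and the result is clamped so it never reaches 0% or 100% (the tables define no combination rule, so the clamp bounds and the signal keys are assumptions; the −5%-per-weak-gate penalty is described later under evidence-weighted probability):

```python
BASE_RATES = {1: 0.10, 2: 0.25, 3: 0.45, 4: 0.60, 5: 0.80, 6: 0.95}

MODIFIERS = {
    "champion_active": +0.10,
    "economic_buyer_engaged": +0.10,
    "incumbent_competitor": -0.15,
    "budget_confirmed": +0.15,
    "stakeholders_aligned": +0.10,
    "no_decision_timeline": -0.10,
    "inbound_contact": +0.10,
    "procurement_required": -0.05,
    "prior_relationship": +0.10,
    "org_change_underway": -0.10,
}

def win_probability(stage: int, signals: list, weak_gates: int = 0) -> float:
    """Stage base rate, plus applicable modifiers, minus 5% per weak gate."""
    p = BASE_RATES[stage] + sum(MODIFIERS[s] for s in signals) - 0.05 * weak_gates
    # Clamp: a live deal is never a certainty in either direction
    return round(min(max(p, 0.01), 0.99), 2)
```

For example, Stage 4 with a confirmed budget yields 0.60 + 0.15 = 75%, while Stage 1 against an entrenched incumbent clamps at the 1% floor.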
SPIN (Situation, Problem, Implication, Need-payoff) is woven into every stage.
| Stage | Primary SPIN | Secondary SPIN | Intensity |
|---|---|---|---|
| 1. Qualify | Situation | — | Low |
| 2. Diagnose | Problem | Implication | High |
| 3. Align | Implication | Need-payoff | High |
| 4. Propose | Need-payoff | — | Medium |
| 5. Commit | Validation | — | Low |
| 6. Deliver | Continuous Discovery | All types | Medium |
See references/spin-question-bank.md for a comprehensive bank of SPIN questions organised by stage and topic area.
The skill is the gatekeeper. It does not rely on the user to self-certify readiness. Instead, it independently evaluates the quality, specificity, and completeness of the evidence before allowing progression.
Every stage transition in BEAM must be earned, not claimed. The user earns the right to advance by demonstrating — through evidence documented in the engagement — that they have done the work required at that stage. The skill's job is to:
When the user requests advancement (via advance command or reaching the end of a stage), the skill evaluates each gate criterion using three tests:
Test 1: Specificity — Is the evidence specific or generic?
Test 2: Source — Does the evidence come from the right place?
Test 3: Recency & Relevance — Is the evidence current and connected?
Each gate criterion must pass all three tests to be marked as met.
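The three-test rule can be modelled as a small sketch (type and function names are hypothetical, not part of the skill's state schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    summary: str
    specific: bool          # Test 1: specific rather than generic
    prospect_sourced: bool  # Test 2: comes from the right place
    current: bool           # Test 3: recent and connected to this gate

def assess_gate(evidence: Optional[Evidence]) -> str:
    """Return PASS, INSUFFICIENT, or MISSING per the three-test rule."""
    if evidence is None:
        return "MISSING"
    if evidence.specific and evidence.prospect_sourced and evidence.current:
        return "PASS"
    return "INSUFFICIENT"
```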
The skill behaves differently depending on the evidence quality:
Strong Evidence (all tests pass):
✓ PASS — [Gate criterion]
Evidence: [Summary of what was provided]
Assessed: Specific, prospect-sourced, current
The skill marks the gate as met and moves on.
Partial Evidence (1-2 tests fail):
⚠ INSUFFICIENT — [Gate criterion]
Evidence provided: [What was given]
Gap: [Which test(s) failed and why]
→ [Specific question to close the gap]
The skill does NOT mark the gate as met. It asks a targeted follow-up question to help the user strengthen the evidence. The user must provide better evidence before the skill will re-assess.
No Evidence:
✗ MISSING — [Gate criterion]
No evidence documented for this criterion.
→ [Suggested action to obtain the evidence]
The skill flags the gap and suggests what the user needs to do (in the real world, not in the tool) to get the evidence.
Users may push back when the skill blocks advancement. The skill handles this firmly but constructively:
"Just advance me, I know it's ready" → "I understand you feel confident, but BEAM's value is in the discipline. Specifically, I need [X] before I can pass [gate]. Can you help me with that?"
"This is just a formality" → "If the evidence exists, it should be quick to document. What I'm missing is [specific gap]. Can you fill that in?"
"I don't have that level of detail" → "That's actually an important signal — if we don't have [specific evidence], the deal might not be as far along as we think. Let's talk about how to get it."
"The prospect told me verbally" → "Great — I'll take verbal evidence. Who said it, when, and what were the exact words (or close to)? I just need it documented so we can build on it."
"Override this" → "There is no override. That's the BEAM commitment — evidence earns advancement. But I can help you close this gap quickly. Here's what I suggest: [specific action]"
The skill never allows a gate to be bypassed, overridden, or marked as met without evidence that passes all three quality tests. This is non-negotiable.
After assessing all gate criteria for a stage, the skill issues one of three verdicts:
| Verdict | Condition | Action |
|---|---|---|
| ADVANCE | All gates PASS | Proceed to next stage, update state, celebrate |
| BLOCKED | 1+ gates INSUFFICIENT or MISSING | Refuse to advance, present gaps, guide the user to close them |
| EXIT | Evidence suggests the deal is not viable | Recommend qualifying out — closing as lost or disqualified is a legitimate and respected outcome |
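The verdict table reduces to a fold over gate statuses — a sketch, with the EXIT condition (a judgment call the skill makes from the evidence) represented as an explicit flag:

```python
def stage_verdict(gate_statuses: list, not_viable: bool = False) -> str:
    """ADVANCE only when every gate passes; EXIT when evidence says qualify out."""
    if not_viable:
        return "EXIT"  # Qualifying out is a legitimate, respected outcome
    if gate_statuses and all(s == "PASS" for s in gate_statuses):
        return "ADVANCE"
    return "BLOCKED"  # One INSUFFICIENT or MISSING gate blocks the stage
```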
The quality of evidence at each gate directly influences the win probability modifier:
| Evidence Quality | Win Probability Effect |
|---|---|
| All gates passed with strong evidence (quotes, metrics, dates) | Base rate + all applicable positive modifiers |
| Gates passed but evidence is at minimum bar | Base rate only — no positive modifiers applied |
| Gates have lingering weaknesses (passed but barely) | Base rate − 5% per weak gate |
This means the skill doesn't just check a box — it distinguishes between "technically passed" and "strongly passed", and the win probability reflects that distinction.
Gates are not just assessed at the advance command. The skill also:
Track the buying committee throughout the engagement:
| Role | Description | Typical Titles |
|---|---|---|
| Economic Buyer | Controls the budget; makes the final financial decision | CEO, CFO, VP, GM |
| Technical Evaluator | Assesses solution fit and technical feasibility | CTO, Head of IT, Engineering Director |
| Champion / Sponsor | Internally advocates for your solution; feels the pain most | Director, Senior Manager, Program Lead |
| Gatekeeper | Controls access to decision-makers and process | EA, Procurement Manager, PMO |
| End User | Will use the solution day-to-day; influences adoption | Team leads, operators, analysts |
For each stakeholder, track:
Because engagements run over weeks or months with many sessions in between, the skill maintains a comprehensive timeline in the engagement state file.
Every session and every significant event is timestamped and logged:
When the user asks for status or timeline, display:
=== marcov.BEAM — Engagement Timeline ===
Company: [Name]
Engagement Start: [Date]
Target Close: [Date]
Days Active: [N days]
Deal Value: [Value]
--- Timeline ---
[Date] Stage 1 — Qualify
├── [Date] Created engagement from b2b-research dossier
├── [Date] Session: Confirmed ICP alignment, identified problem domain
├── [Date] Discovery call scheduled with [Name]
└── [Date] GATE PASSED — Agreed to diagnostic conversation
[Date] Stage 2 — Diagnose (current — [N days])
├── [Date] Session: Ran SPIN discovery, uncovered 3 problems
├── [Date] Session: Quantified implications — $X annual impact
└── [Date] Session: [Most recent activity]
--- Upcoming ---
[ ] Cost of inaction acknowledged (Stage 2 gate)
[ ] Problem definition agreed (Stage 3 gate)
[ ] Target close: [Date]
In addition to the JSON state, the skill maintains a human-readable timeline file at .beam/sessions/<company>-timeline.md:
# [Company Name] — Engagement Timeline
## Overview
| Field | Detail |
|-------|--------|
| **Started** | [Date] |
| **Current Stage** | [Stage N — Name] |
| **Days Active** | [N] |
| **Win Probability** | [X%] |
| **Target Close** | [Date] |
| **Deal Value** | [Value] |
## Stage History
### Stage 1: Qualify ([Start Date] → [End Date] — [N days])
- [Date]: [What happened]
- [Date]: [What happened]
- **Gate passed**: [Date] — [Evidence summary]
### Stage 2: Diagnose ([Start Date] → present — [N days])
- [Date]: [What happened]
- [Date]: [What happened]
## Key Milestones
| Date | Milestone | Notes |
|------|-----------|-------|
| [Date] | [Milestone] | [Notes] |
## Session Log
| # | Date | Stage | Summary | Next Steps |
|---|------|-------|---------|------------|
| 1 | [Date] | Qualify | [Summary] | [Next steps] |
| 2 | [Date] | Qualify | [Summary] | [Next steps] |
| 3 | [Date] | Diagnose | [Summary] | [Next steps] |
This timeline file is updated at the end of every session alongside the JSON state and individual session logs. It serves as the "master narrative" of the engagement.
The skill responds to these commands within a session:
| Command | Action |
|---|---|
| `save` | Save current engagement state to `.beam/` |
| `status` | Display current stage, gate progress, and win probability |
| `board` | Regenerate and open the kanban dashboard (`.beam/engagements/<company>-kanban.html`) |
| `timeline` | Show the full engagement timeline with dates and milestones |
| `gates` | Show gate criteria for the current stage |
| `evidence` | Show all evidence collected for the current stage |
| `stakeholders` | Show the stakeholder map |
| `history` | Show the activity log |
| `probability` | Show and recalculate win probability |
| `next` | Suggest what to do next based on current state |
| `advance` | Attempt to advance to the next stage (triggers gate review) |
| `back` | Review a previous stage (does not reverse progress) |
| `close` | Close the engagement (won, lost, or disqualified) |
| `list` | List all active engagements in `.beam/` |
| `export` | Export engagement as Markdown summary |
The skill maintains an interactive kanban dashboard that visualises activities and progress across all 6 BEAM stages. The board is regenerated as a self-contained HTML file at the end of every session.
The kanban HTML file is the primary user-facing deliverable for engagement tracking. It MUST:
If the user says "I don't see my changes" — the kanban HTML was not properly updated. Regenerate it immediately.
- The engagement JSON holds a `kanban` property with the card data
- The skill reads `references/kanban-board.html`, injects the engagement JSON as `BEAM_DATA`, and writes a standalone HTML file to `.beam/engagements/<company>-kanban.html`

When generating the kanban HTML (at session end or on the `board` command):
1. Read the engagement JSON from .beam/engagements/<company>.json
2. Read the template from references/kanban-board.html
3. Insert a <script> tag BEFORE the main board script (not after):
<script>
// BEAM engagement data - must be defined before renderBoard()
// Using var to make it globally accessible to subsequent script tags
var BEAM_DATA = { ...engagement JSON... };
</script>
4. Write the combined file to .beam/engagements/<company>-kanban.html
5. Report the file path to the user
IMPORTANT: Use var BEAM_DATA NOT const BEAM_DATA. The const keyword creates a script-scoped variable that won't be visible to other script tags. Using var ensures the data is globally accessible.
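The injection step can be sketched as a pure string transform — assuming the template's main board script is the first `<script>` tag in the file (the helper name is hypothetical; the real template's structure governs where the anchor actually sits):

```python
import json

def inject_beam_data(template_html: str, engagement: dict) -> str:
    """Insert a global `var BEAM_DATA` script tag before the first <script>."""
    data_tag = (
        "<script>\n"
        "// BEAM engagement data - injected before the board script\n"
        f"var BEAM_DATA = {json.dumps(engagement)};\n"
        "</script>\n"
    )
    # `var`, not `const`: the data must be visible to subsequent script tags
    return template_html.replace("<script>", data_tag + "<script>", 1)
```

Because the transform takes and returns strings, it can be unit-tested without touching the filesystem; the caller then writes the result to `.beam/engagements/<company>-kanban.html`.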
Cards represent different activities within each stage column:
| Icon | Type | Use When |
|---|---|---|
| [R] | Research | Desk research, web search, dossier review |
| [M] | Meeting | Call, meeting, or conversation with prospect |
| [E] | Evidence | Evidence captured or documented (quotes, metrics, agreements) |
| [D] | Document | Deliverable produced (proposal, memo, agenda, report) |
| [S] | Stakeholder | New stakeholder identified or engaged |
| [!] | Blocker | Something preventing progress |
| [?] | Question | Open question needing an answer |
| Status | Meaning |
|---|---|
| `todo` | Not yet started — planned activity |
| `doing` | In progress — currently being worked on |
| `done` | Completed — activity finished |
| `blocked` | Waiting on external input or action |
Add kanban cards automatically when these events occur during a session:
- Next steps are agreed → create planning cards (status `todo`)
- Research is performed → `[R]` card summarising what was found
- A meeting is logged → `[M]` card with attendees and outcomes
- Evidence is captured → `[E]` card linking to the gate criterion it supports
- A stakeholder is identified → `[S]` card with name, role, and attitude
- A blocker surfaces → `[!]` card describing what's stuck and why
- A question is raised → `[?]` card with the open question
- A deliverable is produced → `[D]` card with the document type and path
- An activity completes → mark the card `done` with completion date
- A gate passes → update the supporting `[E]` card and set its `gate_ref`

Each card in the kanban state follows this structure:
{
"id": "2-003",
"type": "evidence",
"icon": "E",
"title": "Quantified $2.1M annual impact",
"status": "done",
"created_at": "2026-02-15",
"completed_at": "2026-02-15",
"due_date": null,
"owner": "seller",
"notes": "CTO confirmed reactive maintenance costs $2.1M/year in unplanned downtime",
"evidence_ref": "implications_quantified",
"gate_ref": "implications_quantified"
}
| Field | Type | Description |
|---|---|---|
| `id` | string | Stage number prefix + sequential number (e.g., `1-001`, `2-003`) |
| `type` | enum | `research`, `meeting`, `evidence`, `document`, `stakeholder`, `blocker`, `question` |
| `icon` | string | Display icon: R, M, E, D, S, !, ? |
| `title` | string | Short card title (max 40 chars) |
| `status` | enum | `todo`, `doing`, `done`, `blocked` |
| `created_at` | date | When the card was created |
| `completed_at` | date | When completed (null if not done) |
| `due_date` | date | Optional due date for todo items |
| `owner` | string | Who owns this: `seller`, `prospect`, or a person's name |
| `notes` | string | Detailed notes, context, or findings |
| `evidence_ref` | string | Links to an evidence item in the stage's evidence array |
| `gate_ref` | string | Which gate criterion this card supports (if any) |
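Generating the next id for a stage follows directly from the stage-prefix + zero-padded sequence convention — a sketch (hypothetical helper, not part of the skill's contract):

```python
def next_card_id(stage: int, existing_ids: list) -> str:
    """E.g. existing ['2-001', '2-002'] in stage 2 yields '2-003'."""
    prefix = f"{stage}-"
    # Only ids belonging to this stage count toward the sequence
    seqs = [int(i.split("-")[1]) for i in existing_ids if i.startswith(prefix)]
    return f"{prefix}{max(seqs, default=0) + 1:03d}"
```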
The generated HTML kanban board includes:
Before saving or advancing, verify:
- The engagement JSON has been updated in `.beam/engagements/`
- The kanban dashboard has been regenerated at `.beam/engagements/<company>-kanban.html`