Score a targeted list of companies using the full RemidiWorks v4 methodology and produce outreach-ready summaries with scores vs. database averages. Use this skill when the user wants to score a specific list of companies for prospecting, lead gen, or outreach — phrases like 'quick score these companies', 'score this list for outreach', 'score for LinkedIn', 'run a quick score batch', or 'score these for lead gen'. Also trigger when the user provides a short list of company URLs or names and wants scores plus outreach copy. This is DIFFERENT from score-day (which pulls from the Target_List_500 and runs 20-30 companies as a daily batch) and rapid-scan (which produces a full 3-page HTML report for one company). Quick-score takes a USER-PROVIDED list, runs the same v4 scoring methodology as score-day, updates the master database, and adds an outreach summary layer (comparison table, LinkedIn DM, email draft) on top.
Pre-Flight: Project Instructions. Before starting any work, read the project-level instructions at
RemidiWorks/CLAUDE.md (in the repo root). It contains critical rules about skill modification workflows, the schema definition file location, scoring methodology guardrails, and file structure conventions that override defaults. If there is a conflict between CLAUDE.md and this skill file, CLAUDE.md wins.
Shared Library: This skill depends on files in
~/.claude/skills/references/: scoring_canon.md (methodology source of truth), consistency_check.md (audit checklist), and db_averages.json (database benchmarks). Always read the canon before scoring.
Methodology Reference: This skill implements rules from
references/scoring_canon.md. If any instruction in this file contradicts the canon, the canon is authoritative. Run references/consistency_check.md after any edit.

This skill scores a user-provided list of companies using the full RemidiWorks v4 methodology (Binary Fact Sheet, 28 sub-dimensions, constraint rules, cross-checks) and produces outreach-ready summaries comparing each company's scores to the database averages.
| | Quick Score | Score Day | Rapid Scan |
|---|---|---|---|
| Input | User provides specific companies | Pulls next batch from Target_List_500 | Single company |
| Scoring | Full v4 methodology | Full v4 methodology | Full v4 methodology |
| Output | Outreach summaries + DB update | DB update + batch summary | 3-page HTML report |
| Typical size | 3–10 companies | 20–30 companies | 1 company |
| Use case | Prospecting / lead gen | Database building | Client deliverable |
The scoring methodology is identical to score-day. The difference is the input source (user-provided list vs. Target_List_500) and the output format (outreach summaries on top of the standard DB update).
- Phase 0 → Read rubric & config
- Phase 1 → Receive and validate the target list
- Phase 2 → Research (parallelized, same as score-day)
- Phase 3 → Score using full v4 framework
- Phase 4 → Write evidence files
- Phase 5 → Update master database
- Phase 6 → Generate outreach summaries (the extra layer)
Phases 0–5 follow the score-day skill exactly. Phase 6 is unique to quick-score.
Before doing anything else, read the canonical scoring rubric:
RemidiWorks/01_Methodology/Config/batch_scoring_agent_prompt_v3.md
This contains the v4 category architecture, Binary Fact Sheet (F1–F30), constraint rules, scoring rubrics for all 28 sub-dimensions, anti-uniformity checks, and cross-check verification. Do NOT proceed to research until this file has been read.
The user will provide companies as names, URLs, or both. For each company, you need at minimum: company name and URL. If the user provides only names, look up the URLs. If they provide only URLs, determine the company names from the websites.
Validate before proceeding:
- Check whether each company already appears in the master database (Rapid_Scan_Master_Database.xlsx). If found, flag it: "FYI, [company] was already scored on [date] with overall=[X]. Want me to rescore or skip?" A minimal sketch of this check appears below.
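A minimal sketch of that duplicate check, assuming the database can be read with pandas and has Company, URL, Scan Date, and Overall columns (the real headers come from the canon column map, and the database path may differ):

```python
import pandas as pd

DB_PATH = "Rapid_Scan_Master_Database.xlsx"  # adjust to the actual database location

def find_existing(name: str, url: str):
    """Return the existing row for a company if it was already scored, else None."""
    db = pd.read_excel(DB_PATH)
    # Match on the URL's domain first (more reliable), then fall back to an exact name match.
    domain = url.lower().removeprefix("https://").removeprefix("http://").strip("/")
    hits = db[db["URL"].str.lower().str.contains(domain, na=False, regex=False)]
    if hits.empty:
        hits = db[db["Company"].str.lower() == name.lower()]
    return hits.iloc[0] if not hits.empty else None

row = find_existing("Acme Analytics", "https://acmeanalytics.com")  # hypothetical company
if row is not None:
    print(f"FYI, {row['Company']} was already scored on {row['Scan Date']} "
          f"with overall={row['Overall']}. Want me to rescore or skip?")
```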
Follow the score-day skill for Phases 2–5. The process is identical:
- Research: per batch_scoring_agent_prompt_v3.md. Search the buyer-relevant platform first, then G2 for the master database.
- Evidence files: assessment_scores.json, scoring_verification.md, research_notes.md per company
- Evidence location: RemidiWorks/companies/{slug}/assessments/{date}_rapid_scan/
- Database update: Rapid_Scan_Master_Database.xlsx (all columns per the canon column map, currently 49 columns)

Phase 6, the outreach summary layer, is the unique deliverable. After scoring is complete, produce a one-page outreach summary for each company plus a batch overview.
The benchmarks below are a snapshot from references/db_averages.json (last updated 2026-04-11, n=523 comparable companies). Always recompute from the live database after adding new companies; these numbers go stale with every batch.
| Dimension | DB Average | Median | Top Quartile |
|---|---|---|---|
| Overall | 55 | 55 | 69+ |
| Messaging & Positioning (MP) | 69 | 72 | 84+ |
| Pricing & Packaging (PP) | 58 | 62 | 80+ |
| Buyer Experience (BE) | 55 | 60 | 73+ |
| Trust & Credibility (TC) | 54 | 57 | 70+ |
| Competitive Position (CP) | 53 | 56 | 68+ |
Note: PP average is among the ~72% of companies with scored PP (n=374). CP average based on n=402 (192 legacy records missing CP). For companies with PP=N/A, show "N/A" in the table and note it.
Important: After adding the new companies to the database, recompute the averages from the updated database for accuracy. The pre-computed numbers above are a starting reference.
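A quick sketch of that recompute step, assuming the dimension columns are named Overall, MP, PP, BE, TC, and CP (check the canon column map for the real headers):

```python
import pandas as pd

def recompute_averages(db_path: str = "Rapid_Scan_Master_Database.xlsx") -> dict:
    """Recompute per-dimension averages from the live master database."""
    db = pd.read_excel(db_path)
    dims = ["Overall", "MP", "PP", "BE", "TC", "CP"]
    # to_numeric(..., errors="coerce") turns "N/A" cells into NaN, which mean() ignores,
    # so PP is averaged only over companies that actually have a scored PP.
    return {d: round(pd.to_numeric(db[d], errors="coerce").mean()) for d in dims}

print(recompute_averages())
```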
For each scored company, output this format:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
QUICK SCORE: [Company Name]
URL: [url] | Vertical: [vertical]
Scanned: [date] | Database: [N] B2B SaaS companies
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Company DB Avg Delta
─────────────────────────────────────────────────────
OVERALL [XX] 55 [+/-XX]
─────────────────────────────────────────────────────
Messaging & Positioning [XX] 69 [+/-XX]
Pricing & Packaging [XX] 58 [+/-XX]
Buyer Experience [XX] 55 [+/-XX]
Trust & Credibility [XX] 54 [+/-XX]
Competitive Position [XX] 53 [+/-XX]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tier: [T0/T1/T2/T3] — [Tier Name]
Biggest Gap: [Dimension] at [XX] vs. avg [XX] — [1-sentence observation]
Biggest Strength: [Dimension] at [XX] vs. avg [XX] — [1-sentence observation]
── LINKEDIN DM ──────────────────────────────────────
I just ran a commercial health assessment on [Company Name] —
you scored [XX] against a database average of 55 across [N]
B2B SaaS companies. Your biggest gap is [dimension] at [XX]
vs. the average of [avg]. Happy to walk you through the
findings if useful.
── EMAIL ─────────────────────────────────────────────
Subject: [Company Name] scores [XX] on commercial health —
here's where the gaps are
[First name] — I evaluate B2B SaaS companies on 5 dimensions
of commercial health (messaging, pricing, buyer experience,
trust, competitive position) using a database of [N] companies.
[Company Name] scored [XX] overall. Your strongest area is
[dimension] at [XX] (vs. avg [XX]). Your biggest gap is
[dimension] at [XX] (vs. avg [XX]).
Worth a 15-minute call to walk through the details?
── INTERNAL NOTES ────────────────────────────────────
MP ([XX]): [One sentence — key observation]
PP ([XX]): [One sentence — key observation]
BE ([XX]): [One sentence — key observation]
TC ([XX]): [One sentence — key observation]
CP ([XX]): [One sentence — key observation]
Top 3 Opportunities:
1. [From assessment_scores.json top_opportunities]
2. [...]
3. [...]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
For PP = N/A companies:
Pricing & Packaging N/A 58 —
Add note: "PP excluded — no public pricing detected. Overall calculated from 4 dimensions."
After all per-company summaries, produce a batch overview:
═══════════════════════════════════════════════════
QUICK SCORE BATCH SUMMARY
Scored: [N] companies | Date: [date]
═══════════════════════════════════════════════════
Company Overall Tier Biggest Gap
────────────────────────────────────────────────────
[Company 1] [XX] [TX] [Dimension] ([XX] vs [avg])
[Company 2] [XX] [TX] [Dimension] ([XX] vs [avg])
[Company 3] [XX] [TX] [Dimension] ([XX] vs [avg])
...
────────────────────────────────────────────────────
Batch Average: [XX] DB Average: 55
Tier Breakdown: T3: [n] T2: [n] T1: [n] T0: [n]
Database updated: [total] companies now scored
═══════════════════════════════════════════════════
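A rough sketch of that roll-up, assuming each scored company is held as a dict with "overall" and "tier" keys (illustrative names, not the canon schema):

```python
from collections import Counter

def batch_rollup(companies: list[dict]) -> dict:
    """Compute the batch average and tier breakdown for the batch summary block."""
    tiers = Counter(c["tier"] for c in companies)
    return {
        "batch_average": round(sum(c["overall"] for c in companies) / len(companies)),
        "tier_breakdown": {t: tiers.get(t, 0) for t in ("T3", "T2", "T1", "T0")},
    }

# Example: batch_rollup([{"overall": 62, "tier": "T2"}, {"overall": 48, "tier": "T1"}])
# -> {"batch_average": 55, "tier_breakdown": {"T3": 0, "T2": 1, "T1": 1, "T0": 0}}
```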
Save the outreach summaries to a single file:
RemidiWorks/companies/_outreach/quick_score_batch_{date}.txt
This keeps outreach copy separate from the evidence files (which go in each company's folder). Present the file to the user with a computer:// link.
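A small sketch of the save step; the date format and the summary variables are assumptions (per_company_summaries would hold the rendered per-company blocks and batch_summary the overview):

```python
from datetime import date
from pathlib import Path

per_company_summaries = ["...rendered per-company summary..."]  # assumed output of Phase 6
batch_summary = "...rendered batch overview..."

out_dir = Path("RemidiWorks/companies/_outreach")
out_dir.mkdir(parents=True, exist_ok=True)
out_path = out_dir / f"quick_score_batch_{date.today().isoformat()}.txt"
out_path.write_text("\n\n".join(per_company_summaries + [batch_summary]), encoding="utf-8")
print(out_path)  # surface the path so it can be presented to the user
```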
The LinkedIn DM and email drafts are templates the user will customize. Keep them:
For companies scoring above the database average (55+), lead with their strength and position the gap as an optimization opportunity. For companies below average, lead with the gap — that's where the urgency is.
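As a sketch of that positioning rule (the average values are the snapshot above and should be recomputed from the live database before use):

```python
DB_AVG = {"Overall": 55, "MP": 69, "PP": 58, "BE": 55, "TC": 54, "CP": 53}  # snapshot, goes stale

def outreach_angle(scores: dict) -> dict:
    """Pick the biggest gap, biggest strength, and which one to lead with."""
    deltas = {d: scores[d] - DB_AVG[d]
              for d in ("MP", "PP", "BE", "TC", "CP")
              if scores.get(d) is not None}  # PP drops out when it is N/A
    return {
        "biggest_gap": min(deltas, key=deltas.get),
        "biggest_strength": max(deltas, key=deltas.get),
        "lead_with": "strength" if scores["Overall"] >= DB_AVG["Overall"] else "gap",
    }

# Example: a company at Overall 61 with weak Buyer Experience leads with its strength.
print(outreach_angle({"Overall": 61, "MP": 78, "PP": 60, "BE": 42, "TC": 58, "CP": 55}))
```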
Add this section to every scoring skill (score-day, rapid-scan, quick-score, report-card, portfolio-scoring) as a mandatory post-scoring verification step. No scores should be written to the Master Database or presented to a client until ALL gates pass.
After calculating scores for any company, verify that the stored values can be reproduced from the sub-dimension scores:
# TC verification: sum of 8 sub-scores / 40 * 100
tc_subs = [scores['4A'], scores['4B'], scores['4C'], scores['4E'],
scores['4F'], scores['4G'], scores['4H'], scores['4I']]
# NOTE: 4D (Review Platforms) is EXCLUDED from TC. Collected as enrichment only.
tc_calc = round(sum(tc_subs) / 40 * 100)
assert abs(tc_calc - stored_tc) <= 1, f"TC mismatch: calc={tc_calc}, stored={stored_tc}"
# Overall verification: canonical v4 weights
if pp_na:
    overall_calc = round(mp*(25/75) + be*(15/75) + tc*(20/75) + cp*(15/75))
else:
    # Full-weight branch; weights inferred from the PP-excluded renormalization above
    # (MP 25, PP 25, BE 15, TC 20, CP 15). Confirm against the canon.
    overall_calc = round(mp*0.25 + pp*0.25 + be*0.15 + tc*0.20 + cp*0.15)
assert abs(overall_calc - stored_overall) <= 1, f"Overall mismatch: calc={overall_calc}, stored={stored_overall}"