Evaluate tabulated subcontractor bids against specs and drawings — scope gap analysis, exclusion risk scoring, award recommendation. Triggers: 'evaluate bids', 'bid evaluation', 'lowest responsible bidder'.
Evaluate tabulated subcontractor bids against construction documents. This skill does NOT make the award decision — it builds the analytical foundation so the PE/PM can decide quickly and confidently.
/bid-tabulator → THIS SKILL → user confirms → /subcontract-writer
This skill requires tabulated bids as input. If the user has raw bid PDFs that haven't been tabulated yet, run /bid-tabulator first to produce the per-bidder JSON files and comparison Excel. This skill consumes that output.
/bid-tabulator (data capture) → then offer this skill. Optional inputs: bid form/ITB, schedule, owner requirements, budget estimate, past experience with bidders.
Complete this setup before analyzing any bids.
Check for .construction/ directory at the project root.
If it exists, read spec text from .construction/spec_text/{section}.txt and use the sheet index from .construction/index/sheet_index.yaml for drawing identification. Otherwise, locate specs via CLAUDE.md paths or directory search (Specifications/, Specification Sections/) and read drawing sheets directly from PDF. Extract from each spec section: work included (Parts 1-3), work by others (GC/NIC/Owner), related section cross-references, performance requirements, coordination and temporary requirements.
Load references/drawing-review.md for guidance on what to look for per drawing type. Vision-read key sheets to identify scope items, conditions, and quantities not apparent from specs. Track drawing-derived items separately — they differentiate thorough bidders. Offload the reference after completing the drawing review.
Ask targeted questions to fill gaps documents can't answer. Do not dump a questionnaire — ask conversationally, skip what you already know.
Always ask: package definition (which specs/drawings), GC-provided items, bid count and format, drawing confirmation if provided.
Ask when relevant: multi-trade splits, alternates/allowances, qualification requirements, budget reference, schedule constraints.
Confirm assembled scope with user before proceeding.
Produce internal baseline document: items in sub scope (with source), drawing-identified scope, GC/owner exclusions, allowances, alternates, qualifications, schedule constraints.
Load references/buyout-domain.md for domain knowledge on bid language interpretation, common scope splits, red flags, trade-specific evaluation notes, and non-price factors. Keep it loaded through Steps 2a-2d, then offload.
Adjusted Base = Submitted Base
+ Excluded spec-required items
- Included GC-provided items
+ Accepted alternates ± Allowance adjustments
Valuation order: another bidder's line item → budget estimate → professional judgment (flagged) → record as a qualitative risk if no value can be established. Document every adjustment. No silent price changes. Flag unit-price outliers (>2x median). Calculate exposure ranges.
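The normalization formula above can be sketched in Python. Function and variable names here are illustrative only, not part of the export script's schema:

```python
from statistics import median

def adjusted_base(submitted_base, excluded_spec_items, included_gc_items,
                  accepted_alternates, allowance_adjustments):
    """Normalize a bid per the formula above. Inputs are dollar amounts;
    the list arguments hold the plug values for each adjustment."""
    return (submitted_base
            + sum(excluded_spec_items)     # add back spec-required work the bidder excluded
            - sum(included_gc_items)       # back out GC-provided work the bidder carried
            + sum(accepted_alternates)
            + sum(allowance_adjustments))  # signed: may raise or lower the base

def unit_price_outliers(unit_prices, factor=2.0):
    """Flag unit prices more than `factor` times the median across bidders."""
    m = median(unit_prices.values())
    return {item: price for item, price in unit_prices.items() if price > factor * m}
```

For example, `adjusted_base(1_000_000, [25_000], [10_000], [15_000], [-5_000])` yields 1,025,000 — a bid that excluded $25K of spec work, carried $10K the GC provides, won a $15K alternate, and returns $5K of allowance.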
Map each bidder against every scope baseline item using five statuses:
| Status | Meaning |
|---|---|
| INCLUDED | Explicitly includes |
| EXCLUDED | Explicitly excludes |
| SILENT | Not addressed — most dangerous |
| PARTIAL | Includes with limitations |
| DIFFERENT | Offers substitution |
Flag all SILENT items on spec-required scope. Drawing-derived SILENT items suggest incomplete document review by that bidder.
| Level | Criteria |
|---|---|
| CRITICAL | Spec-required, uncarried, >1% of bid or >$5K |
| SIGNIFICANT | Implied by spec/practice, $2K-$5K |
| MINOR | Common GC-provided, <$2K |
| INFO | Clarification only, no cost impact |
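A classifier for the table above might look like the following sketch. The fall-through default and exact boundary handling are assumptions — real classification is a judgment call:

```python
def exclusion_risk(amount, bid_total, spec_required=False,
                   implied_by_practice=False, commonly_gc_provided=False):
    """Classify one uncarried item using the thresholds in the table above."""
    if amount <= 0:
        return "INFO"  # clarification only, no cost impact
    if spec_required and (amount > 5_000 or amount > 0.01 * bid_total):
        return "CRITICAL"
    if commonly_gc_provided and amount < 2_000:
        return "MINOR"
    if implied_by_practice and 2_000 <= amount <= 5_000:
        return "SIGNIFICANT"
    return "SIGNIFICANT"  # conservative default when no rule matches (assumption)
```

Note the CRITICAL test is an `or`: a $4,500 gap on a $400K bid clears the 1% threshold even though it is under $5K.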
Assess per bidder: bonding, insurance, licensing, experience, capacity, DBE/MBE participation, bid completeness. Bust detection: >20-30% below the field plus scope gaps plus an incomplete submission. Flag for verification; never call it a bust.
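The bust screen combines those three signals. A minimal sketch — the 0.25 threshold sits inside the stated 20-30% band, and comparing against the average of the other bids is an assumption:

```python
def possible_bust(bid, all_bids, scope_gap_count, submission_complete,
                  low_threshold=0.25):
    """Return True if the bid warrants a verification call: far below
    the rest of the field, has scope gaps, and is incomplete.
    This flags only — it never declares a bust."""
    others = [b for b in all_bids if b != bid]
    if not others:
        return False  # nothing to compare against
    field_avg = sum(others) / len(others)
    far_below = bid < (1 - low_threshold) * field_avg
    return far_below and scope_gap_count > 0 and not submission_complete
```

All three conditions must hold; a low price alone is not a flag, which matches the rule above that a suspected bust is verified with the bidder, never assumed.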
Build JSON per schema in scripts/sample_input.json, then run:
${CLAUDE_SKILL_DIR}/../../bin/construction-python ${CLAUDE_SKILL_DIR}/scripts/export_bid_evaluation.py input.json output.xlsx
The script produces 5 sheets: Bid Comparison, Price Summary, Exclusion Detail, Qualification Summary, Recommendation.
Do NOT proceed to /subcontract-writer without user confirmation.
If .construction/ directory exists (AgentCM mode), record the evaluation:
${CLAUDE_SKILL_DIR}/../../bin/construction-python ${CLAUDE_SKILL_DIR}/../../scripts/graph/write_finding.py \
--type "bid_evaluation_complete" \
--title "Bid evaluation: {scope} — {N} bidders, recommended {company}" \
--output-file "{output.xlsx}" \
--data '{"scope": "...", "bidder_count": N, "recommended": "...", "adjusted_amount": ..., "pe_attention_items": [...]}'
If no .construction/ directory exists, skip — the Excel workbook is the deliverable.
| Step | Resource | Load Trigger | Offload After |
|---|---|---|---|
| 1b | references/drawing-review.md | Drawing sheets provided | Step 1b complete |
| 2 | references/buyout-domain.md | Entering bid analysis | Step 2d complete |
| 3 | scripts/sample_input.json | Building output JSON | Step 3 complete |
| 5 | ../../scripts/graph/write_finding.py | AgentCM mode detected | Step 5 complete |
Never overwrite an existing bid evaluation. The export script uses safe_output_path() which appends _v2, _v3, etc. automatically.
${CLAUDE_SKILL_DIR}/../../bin/construction-python
${CLAUDE_SKILL_DIR}/scripts/export_bid_evaluation.py
${CLAUDE_SKILL_DIR}/../../scripts/graph/write_finding.py