Create vendor evaluation framework and score vendor proposals
You are helping an enterprise architect create a vendor evaluation framework and score vendor proposals against requirements.
$ARGUMENTS
Note: Before generating, scan `projects/` for existing project directories. For each project, list all `ARC-*.md` artifacts, check `external/` for reference documents, and check `000-global/` for cross-project policies. If no external docs exist but they would improve output, ask the user.
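A minimal sketch of that scan, assuming the layout named above (`projects/`, per-project `external/`, and `projects/000-global/`); the function name and return shape are illustrative, not part of ArcKit:

```python
from pathlib import Path

def scan_projects(root: Path = Path("projects")) -> dict:
    """Illustrative scan: ARC-*.md artifacts and external docs per project,
    plus cross-project policies under 000-global."""
    inventory = {}
    for project in sorted(p for p in root.iterdir() if p.is_dir() and p.name != "000-global"):
        external = project / "external"
        inventory[project.name] = {
            "artifacts": sorted(f.name for f in project.glob("ARC-*.md")),
            "external": sorted(f.name for f in external.iterdir()) if external.exists() else [],
        }
    policies = root / "000-global"
    inventory["000-global"] = sorted(f.name for f in policies.rglob("*.md")) if policies.exists() else []
    return inventory
```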
Identify the project: The user should specify a project name or number
Read existing artifacts from the project context:
MANDATORY (warn if missing):
- Requirements (if missing, warn and suggest running `$arckit-requirements` first)
- Architecture principles (if missing, warn and suggest running `$arckit-principles` first)

RECOMMENDED (read if available, note if missing):
OPTIONAL (read if available, skip silently if missing):
Read the templates (with user override support):
- Evaluation criteria: use a user override of `evaluation-criteria-template.md` if one exists in the project root; otherwise use `.arckit/templates/evaluation-criteria-template.md` (default)
- Vendor scoring: check an override of `vendor-scoring-template.md` first, then fall back to `.arckit/templates/vendor-scoring-template.md`

Tip: Users can customize templates with `$arckit-customize evaluate`
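The lookup can be pictured as below. Only the override-first-then-default order comes from the steps above; the project-root `templates/` location for user overrides is an assumption for illustration:

```python
from pathlib import Path

def resolve_template(name: str, project_root: Path = Path(".")) -> Path:
    """Check a user override first, then fall back to the .arckit default.

    The override directory ("templates/" in the project root) is assumed
    here; only the override-then-default order is prescribed."""
    override = project_root / "templates" / name
    default = project_root / ".arckit" / "templates" / name
    return override if override.exists() else default

# e.g. resolve_template("vendor-scoring-template.md")
```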
Read external documents and policies:
- Vendor proposals in `projects/{project-dir}/vendors/{vendor}/` — extract proposed solution, pricing, team qualifications, case studies, certifications, SLA commitments
- Project-level external documents (`external/` files) — extract industry benchmarks, analyst reports, reference check notes
- `projects/000-global/external/` — extract enterprise evaluation frameworks, procurement scoring templates, cross-project vendor assessment benchmarks

If no vendor documents are found, tell the user: "Place proposals in projects/{project-dir}/vendors/{vendor-name}/ and re-run, or skip to create the evaluation framework only."

Follow the citation guidance in `.arckit/references/citation-instructions.md`. Place inline citation markers (e.g., [PP-C1]) next to findings informed by source documents and populate the "External References" section in the template.

Gathering rules (apply to all user questions in this command):
If creating a new framework:
Define mandatory qualifications (pass/fail):
Create scoring criteria (100 points total; a weighted-scoring sketch follows this list):
Define evaluation process:
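To make the 100-point structure concrete, a hedged sketch: the category names and weights below are placeholders (real values come from the evaluation-criteria template and the project's requirements); only the sum-to-100 invariant and the pass/fail gate are prescribed above.

```python
# Placeholder categories and weights -- illustrative only.
CRITERIA = {
    "Technical solution fit": 30,
    "Delivery approach and team": 25,
    "Commercials and pricing": 20,
    "Security and compliance": 15,
    "Support and SLAs": 10,
}
assert sum(CRITERIA.values()) == 100  # the framework's 100-point invariant

def weighted_score(ratings: dict) -> float:
    """ratings: category -> 0.0-1.0 assessment; returns a 0-100 total."""
    return sum(CRITERIA[cat] * ratings.get(cat, 0.0) for cat in CRITERIA)

def passes_mandatory(checks: dict) -> bool:
    """Pass/fail gate: any failed mandatory qualification eliminates the vendor."""
    return all(checks.values())
```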
CRITICAL - Auto-Populate Document Control Fields:
Before completing the document, populate ALL document control fields in the header:
Construct Document ID:
`ARC-{PROJECT_ID}-EVAL-v{VERSION}` (e.g., `ARC-001-EVAL-v1.0`)

Populate Required Fields:
Auto-populated fields (populate these automatically):
- `[PROJECT_ID]` → Extract from project path (e.g., "001" from "projects/001-project-name")
- `[VERSION]` → "1.0" (or increment if a previous version exists)
- `[DATE]` / `[YYYY-MM-DD]` → Current date in YYYY-MM-DD format
- `[DOCUMENT_TYPE_NAME]` → "Vendor Evaluation Framework"
- `ARC-[PROJECT_ID]-EVAL-v[VERSION]` → Construct using the format above
- `[COMMAND]` → "arckit.evaluate"

User-provided fields (extract from project metadata or user input):
- `[PROJECT_NAME]` → Full project name from project metadata or user input
- `[OWNER_NAME_AND_ROLE]` → Document owner (prompt user if not in metadata)
- `[CLASSIFICATION]` → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user)

Calculated fields:
- `[YYYY-MM-DD]` for Review Date → Current date + 30 days

Pending fields (leave as [PENDING] until manually updated):
- `[REVIEWER_NAME]` → [PENDING]
- `[APPROVER_NAME]` → [PENDING]
- `[DISTRIBUTION_LIST]` → Default to "Project Team, Architecture Team" or [PENDING]

Populate Revision History:
| 1.0 | {DATE} | ArcKit AI | Initial creation from `$arckit-evaluate` command | [PENDING] | [PENDING] |
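A sketch of the auto-populated and calculated field derivations described above (document ID, current date, and the +30-day review date); the function name is illustrative:

```python
import re
from datetime import date, timedelta

def document_control_fields(project_path: str, version: str = "1.0") -> dict:
    """Derive the auto-populated and calculated fields described above."""
    # e.g. "projects/001-payment-gateway" -> "001"
    match = re.search(r"projects/(\d{3})-", project_path)
    project_id = match.group(1) if match else "000"
    today = date.today()
    return {
        "DOCUMENT_ID": f"ARC-{project_id}-EVAL-v{version}",
        "DATE": today.isoformat(),                         # YYYY-MM-DD
        "REVIEW_DATE": (today + timedelta(days=30)).isoformat(),
        "DOCUMENT_TYPE_NAME": "Vendor Evaluation Framework",
        "COMMAND": "arckit.evaluate",
    }

# document_control_fields("projects/001-payment-gateway")
# -> {"DOCUMENT_ID": "ARC-001-EVAL-v1.0", ...}
```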
Populate Generation Metadata Footer:
The footer should be populated with:
**Generated by**: ArcKit `$arckit-evaluate` command
**Generated on**: {DATE} {TIME} GMT
**ArcKit Version**: {ARCKIT_VERSION}
**Project**: {PROJECT_NAME} (Project {PROJECT_ID})
**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"]
**Generation Context**: [Brief note about source documents used]
Before writing the file, read `.arckit/references/quality-checklist.md` and verify all Common Checks plus the EVAL per-type checks pass. Fix any failures before proceeding.
Write the output to `projects/{project-dir}/ARC-{PROJECT_ID}-EVAL-v1.0.md`

If scoring a specific vendor:
Create vendor directory: `projects/{project-dir}/vendors/{vendor-name}/`
Ask key questions to gather information:
Score each category based on the evaluation criteria:
Document findings:
Write outputs:
- `projects/{project-dir}/vendors/{vendor-name}/evaluation.md` - Detailed scoring
- `projects/{project-dir}/vendors/{vendor-name}/notes.md` - Additional notes
- `projects/{project-dir}/ARC-{PROJECT_ID}-EVAL-v*.md` - Comparative table

If comparing vendors:
Read all vendor evaluations from `projects/{project-dir}/vendors/*/evaluation.md`
Create comparison matrix:
Generate recommendation:
Write output to `projects/{project-dir}/ARC-{PROJECT_ID}-VEND-v1.0.md`
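A hedged sketch of the comparison step, assuming each `evaluation.md` records a total line such as `Total: 82/100`; that line format is an assumption, so adjust the parser to whatever the scoring template actually records:

```python
import re
from pathlib import Path

def compare_vendors(project_dir: Path) -> list[tuple[str, int]]:
    """Rank vendors by the total recorded in each vendors/*/evaluation.md."""
    scores = []
    for evaluation in project_dir.glob("vendors/*/evaluation.md"):
        text = evaluation.read_text()
        match = re.search(r"Total:\s*(\d+)\s*/\s*100", text)  # assumed line format
        if match:
            scores.append((evaluation.parent.name, int(match.group(1))))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# The highest-scoring vendor heads the comparison matrix and drives the
# recommendation written to ARC-{PROJECT_ID}-VEND-v1.0.md.
```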
User: $arckit-evaluate Create evaluation framework for payment gateway project
You should:
- Create `projects/001-payment-gateway/ARC-001-EVAL-v1.0.md`

User: $arckit-evaluate Score Acme Payment Solutions proposal for project 001
You should:
- Create `projects/001-payment-gateway/vendors/acme-payment-solutions/`

User: $arckit-evaluate Compare all vendors for payment gateway project
You should:
- Create `projects/001-payment-gateway/ARC-001-VEND-v1.0.md`

Note: escape `<` and `>` characters (e.g., in `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji.