Analyze G-Cloud service gaps and generate supplier clarification questions
You are helping an enterprise architect validate G-Cloud services and generate clarification questions for suppliers.
$ARGUMENTS
After using `$arckit-gcloud-search` to find G-Cloud services, you have a shortlist, but service descriptions rarely confirm requirements outright: coverage may be missing, ambiguous, or buried in marketing language. This command analyzes gaps between requirements and service descriptions, then generates structured clarification questions to send to suppliers.
Note: Before generating, scan for existing project directories. For each project, list all artifacts, check for reference documents, and check for cross-project policies. If no external docs exist but they would improve output, ask the user.
Scan: `projects/ARC-*.md`, `external/000-global/`

MANDATORY (warn if missing):
- Requirements document: "Run `$arckit-requirements` first — need source requirements"
- G-Cloud search results: "Run `$arckit-gcloud-search` first — need service search results"

RECOMMENDED (read if available, note if missing):
OPTIONAL (read if available, skip silently if missing):
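The pre-flight input check above can be sketched as follows. This is a minimal illustration; the `check_inputs` helper and the glob patterns are assumptions, not part of ArcKit:

```python
from pathlib import Path

def check_inputs(project_dir: str) -> list[str]:
    """Return warnings for missing MANDATORY inputs (illustrative sketch).

    Assumed file patterns: requirements as ARC-*-REQ-*.md in the
    project root, search results as gcloud-*.md under procurement/.
    """
    project = Path(project_dir)
    warnings = []
    if not list(project.glob("ARC-*-REQ-*.md")):
        warnings.append("Run $arckit-requirements first — need source requirements")
    if not list((project / "procurement").glob("gcloud-*.md")):
        warnings.append("Run $arckit-gcloud-search first — need service search results")
    return warnings
```

Recommended and optional documents would be handled the same way, but with a note or silent skip instead of a warning.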
For each shortlisted service, perform systematic gap analysis:
For each MUST requirement (BR-xxx, FR-xxx, NFR-xxx, INT-xxx with MUST priority):
Check Coverage:
Examples:
Identify vague marketing language that needs clarification:
For compliance requirements (NFR-C-xxx):
For integration requirements (INT-xxx):
For performance requirements (NFR-P-xxx):
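The coverage check above can be sketched as a simple classifier. This is illustrative only: real analysis reads the full service description in context, and the `classify_coverage` helper and its keyword lists are assumptions:

```python
def classify_coverage(requirement_terms: list[str], service_text: str) -> str:
    """Classify how a service description covers one requirement (sketch).

    Returns "confirmed", "ambiguous", or "missing". Keyword matching
    stands in for the real reading of the service page.
    """
    text = service_text.lower()
    hits = [t for t in requirement_terms if t.lower() in text]
    if not hits:
        return "missing"          # requirement not mentioned at all
    # Vague marketing language downgrades a partial match to ambiguous.
    vague = ("enterprise-grade", "seamless", "robust", "best-in-class")
    if any(v in text for v in vague) and len(hits) < len(requirement_terms):
        return "ambiguous"        # mentioned, but needs clarification
    return "confirmed" if len(hits) == len(requirement_terms) else "ambiguous"
```

Each "ambiguous" or "missing" result then feeds the question generation step below.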
For each gap or ambiguity, generate a structured question:
Question Format:
#### Q[N]: [Clear, specific question title]
**Requirement**: [REQ-ID] (MUST/SHOULD) - [requirement text]
**Gap**: [Describe what is missing, ambiguous, or unclear]
**Question**:
> [Specific question to supplier]
> - [Sub-question or specific aspect]
> - [Sub-question or specific aspect]
> - [Sub-question or specific aspect]
**Evidence Needed**:
- [Specific document or proof required]
- [Additional evidence needed]
**Priority**: [🔴 CRITICAL / 🟠 HIGH / 🔵 MEDIUM / 🟢 LOW]
🔴 CRITICAL (Blocking):
🟠 HIGH (Affects Scoring):
🔵 MEDIUM (Due Diligence):
🟢 LOW (Nice to Know):
Create risk matrix for each service:
## 📊 Service Risk Assessment
| Aspect | Status | Risk | Notes |
|--------|--------|------|-------|
| **[Requirement Category]** | [✅/⚠️/❌] | [🔴/🟠/🔵/🟢] | [Brief note] |
| ... | ... | ... | ... |
**Overall Risk**: [🔴 CRITICAL / 🟠 HIGH / 🔵 MEDIUM / 🟢 LOW]
**Risk Calculation**:
- ❌ [N] MUST requirements NOT confirmed
- ⚠️ [N] MUST requirements AMBIGUOUS
- 🔵 [N] SHOULD requirements missing
**Recommendation**:
- [Clear action: DO NOT PROCEED / CLARIFY FIRST / PROCEED WITH CAUTION / PROCEED TO DEMO]
Risk Levels:
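The risk roll-up described above can be sketched as follows. The threshold mapping is an assumption inferred from the gap categories, not a definitive ArcKit rule:

```python
def overall_risk(must_missing: int, must_ambiguous: int, should_missing: int) -> str:
    """Roll up gap counts into an overall risk level (assumed mapping)."""
    if must_missing > 0:
        return "CRITICAL"   # 🔴 do not proceed without clarification
    if must_ambiguous > 0:
        return "HIGH"       # 🟠 clarify before proceeding
    if should_missing > 0:
        return "MEDIUM"     # 🔵 due diligence
    return "LOW"            # 🟢 proceed to technical demo
```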
CRITICAL - Auto-Populate Document Control Fields:
Before completing the document, populate ALL document control fields in the header:
Construct Document ID:
`ARC-{PROJECT_ID}-GCLC-v{VERSION}` (e.g., ARC-001-GCLC-v1.0)

Populate Required Fields:
Auto-populated fields (populate these automatically):
- [PROJECT_ID] → Extract from project path (e.g., "001" from "projects/001-project-name")
- [VERSION] → "1.0" (or increment if a previous version exists)
- [DATE] / [YYYY-MM-DD] → Current date in YYYY-MM-DD format
- [DOCUMENT_TYPE_NAME] → "G-Cloud Clarification Questions"
- ARC-[PROJECT_ID]-GCLC-v[VERSION] → Construct using the format above
- [COMMAND] → "arckit.gcloud-clarify"

User-provided fields (extract from project metadata or user input):
- [PROJECT_NAME] → Full project name from project metadata or user input
- [OWNER_NAME_AND_ROLE] → Document owner (prompt the user if not in metadata)
- [CLASSIFICATION] → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt the user)

Calculated fields:
- [YYYY-MM-DD] for Review Date → Current date + 30 days

Pending fields (leave as [PENDING] until manually updated):
- [REVIEWER_NAME] → [PENDING]
- [APPROVER_NAME] → [PENDING]
- [DISTRIBUTION_LIST] → Default to "Project Team, Architecture Team" or [PENDING]

Populate Revision History:
| 1.0 | {DATE} | ArcKit AI | Initial creation from `$arckit-gcloud-clarify` command | [PENDING] | [PENDING] |
Populate Generation Metadata Footer:
The footer should be populated with:
**Generated by**: ArcKit `$arckit-gcloud-clarify` command
**Generated on**: {DATE} {TIME} GMT
**ArcKit Version**: {ARCKIT_VERSION}
**Project**: {PROJECT_NAME} (Project {PROJECT_ID})
**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"]
**Generation Context**: [Brief note about source documents used]
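The auto-populated document control fields above can be sketched as follows. The `document_control` helper is hypothetical; the field names mirror the placeholders defined earlier:

```python
from datetime import date, timedelta

def document_control(project_id: str, version: str = "1.0") -> dict:
    """Build the auto-populated document control fields (sketch)."""
    today = date.today()
    return {
        "document_id": f"ARC-{project_id}-GCLC-v{version}",
        "date": today.isoformat(),                        # YYYY-MM-DD
        "review_date": (today + timedelta(days=30)).isoformat(),
        "document_type": "G-Cloud Clarification Questions",
        "command": "arckit.gcloud-clarify",
    }
```

User-provided and pending fields would be merged in from project metadata or left as [PENDING].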
Before writing the file, read .arckit/references/quality-checklist.md and verify all Common Checks plus the GCLC per-type checks pass. Fix any failures before proceeding.
Create projects/[project]/procurement/ARC-{PROJECT_ID}-GCLC-v1.0.md:
# G-Cloud Service Clarification Questions
**Project**: [PROJECT_NAME]
**Date**: [DATE]
**Services Analyzed**: [N]
---
## Executive Summary
**Purpose**: Validate G-Cloud services against requirements before procurement decision.
**Status**:
- Services Analyzed: [N]
- Critical Gaps Found: [N]
- High Priority Gaps: [N]
- Medium Priority Gaps: [N]
**Action Required**: [Send clarification questions to [N] suppliers / Eliminate [N] services due to critical gaps / Proceed with [Service Name]]
---
## Service 1: [Service Name] by [Supplier Name]
**Link**: [Service URL]
### 📋 Gap Summary
- ✅ **[N]** MUST requirements confirmed with evidence
- ⚠️ **[N]** MUST requirements mentioned ambiguously
- ❌ **[N]** MUST requirements NOT mentioned
- 🔵 **[N]** SHOULD requirements missing
**Overall Risk**: [🔴/🟠/🔵/🟢] [Risk Level]
---
### 🚨 Critical Questions (MUST address before proceeding)
[Generate Q1, Q2, Q3... for each critical gap using format above]
---
### ⚠️ High Priority Questions (Affects evaluation scoring)
[Generate Q[N]... for each high priority gap]
---
### 🔵 Medium Priority Questions (Due diligence)
[Generate Q[N]... for each medium priority gap]
---
### 🟢 Low Priority Questions (Nice to know)
[Generate Q[N]... for each low priority question]
---
### 📊 Service Risk Assessment
[Generate risk matrix table as defined above]
**Recommendation**:
[Clear recommendation based on risk level]
**Alternative**: [Suggest alternative service if this one has critical gaps]
---
### 📧 Email Template for Supplier
Subject: Technical Clarification Required - [Service Name]
Dear [Supplier Name] Team,
We are evaluating [Service Name] (Service ID: [ID]) for procurement via the Digital Marketplace. Before proceeding, we need clarification on several technical requirements:
**Critical Requirements (Blocking)**:
[List Q-numbers for critical questions]
**High Priority Requirements**:
[List Q-numbers for high priority questions]
Could you please provide:
- Written responses to questions [Q1-QN]
- Supporting documentation ([list evidence needed])
- Access to demo/trial environment for technical validation
We aim to make a procurement decision by [DATE + 2 weeks]. Please respond by [DATE + 1 week].
Thank you,
[User name if provided, otherwise: Your Name]
[Organization name if available]
---
[REPEAT FOR EACH SERVICE: Service 2, Service 3, etc.]
---
## 📊 Service Comparison - Risk Summary
| Service | Supplier | Critical Gaps | High Gaps | Medium Gaps | Overall Risk | Action |
|---------|----------|---------------|-----------|-------------|--------------|--------|
| [Service 1] | [Supplier 1] | [N] | [N] | [N] | [🔴/🟠/🔵/🟢] | [Action] |
| [Service 2] | [Supplier 2] | [N] | [N] | [N] | [🔴/🟠/🔵/🟢] | [Action] |
| [Service 3] | [Supplier 3] | [N] | [N] | [N] | [🔴/🟠/🔵/🟢] | [Action] |
**Recommended Priority Order**:
1. **[Service Name]** - [Risk Level] - [Action]
2. **[Service Name]** - [Risk Level] - [Action]
3. **[Service Name]** - [Risk Level] - [Action]
---
## 📋 Next Steps
### Immediate Actions (This Week)
1. ✅ **Send clarification questions**:
- [ ] Email sent to [Supplier 1]
- [ ] Email sent to [Supplier 2]
- [ ] Email sent to [Supplier 3]
2. ✅ **Set response deadline**: [DATE + 1 week]
3. ✅ **Schedule follow-up**: [DATE + 1 week] to review responses
### Upon Receiving Responses (Week 2)
4. ✅ **Review supplier responses**:
- [ ] Check all critical questions answered
- [ ] Validate evidence provided
- [ ] Update risk assessment
5. ✅ **Schedule technical demos**:
- [ ] Demo with [top-ranked service]
- [ ] Demo with [second-ranked service]
6. ✅ **Validate critical requirements**:
- [ ] Test integration in demo environment
- [ ] Confirm performance metrics
- [ ] Verify compliance certificates
### Decision Point (Week 3)
7. ✅ **Final evaluation**:
- [ ] Use `$arckit-evaluate` to score suppliers
- [ ] Compare responses and demos
- [ ] Select winning service
8. ✅ **Contract award**:
- [ ] Award via Digital Marketplace
- [ ] Publish on Contracts Finder
**Parallel Activity**: While waiting for responses, prepare evaluation criteria with `$arckit-evaluate`.
---
## 📎 Referenced Documents
- **Requirements**: projects/[project]/ARC-*-REQ-*.md
- **G-Cloud Search**: projects/[project]/procurement/gcloud-ARC-*-REQ-*.md
- **Service Pages**: [list all service URLs]
---
**Generated**: [DATE]
**Tool**: $arckit-gcloud-clarify
**Next Command**: `$arckit-evaluate` (after supplier responses received)
Before finalizing, validate output:
Output to user:
✅ Generated G-Cloud clarification questions for [PROJECT_NAME]
Services Analyzed: [N]
Document: projects/[project]/procurement/ARC-{PROJECT_ID}-GCLC-v1.0.md
Gap Analysis Summary:
- [Service 1]: [Risk Level] - [N] critical gaps, [N] high gaps
- [Service 2]: [Risk Level] - [N] critical gaps, [N] high gaps
- [Service 3]: [Risk Level] - [N] critical gaps, [N] high gaps
Recommendations:
- 🔴 [N] services have CRITICAL gaps (do not proceed without clarification)
- 🟠 [N] services have HIGH gaps (clarify before proceeding)
- 🟢 [N] services are LOW risk (proceed to technical demo)
Next Steps:
1. Review generated questions in ARC-{PROJECT_ID}-GCLC-v1.0.md
2. Send email to suppliers using provided templates
3. Set response deadline: [DATE + 1 week]
4. Schedule follow-up to review responses
5. Use $arckit-evaluate after receiving responses to score and compare
Important: Do not award contracts to services with CRITICAL gaps until gaps are resolved.
Example 1: Explicit Gap
Example 2: Ambiguous Gap
Example 3: Compliance Gap
Complete G-Cloud Procurement Workflow:
1. `$arckit-requirements` → Define service needs
2. `$arckit-gcloud-search` → Find services on Digital Marketplace
3. `$arckit-gcloud-clarify` → Identify gaps, generate questions
4. `$arckit-evaluate` → Score suppliers based on responses

This command is the critical validation step between finding services and evaluating them.
Wrap comparison operators such as `<` or `>` in backticks (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji.