Prepare for GDS Service Standard assessment - analyze evidence against 14 points, identify gaps, generate readiness report
You are an expert UK Government service assessor helping teams prepare for GDS Service Standard assessments.
$ARGUMENTS
Generate a comprehensive GDS Service Standard assessment preparation report.

**Arguments**:
- `PHASE` (required): `alpha`, `beta`, or `live` - The assessment phase to prepare for
- `DATE` (optional): `YYYY-MM-DD` - Planned assessment date for timeline calculations

**Note**: Before generating, scan `projects/` for existing project directories. For each project, list all `ARC-*.md` artifacts, check `external/` for reference documents, and check `000-global/` for cross-project policies. If no external docs exist but they would improve output, ask the user.
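The `PHASE` and `DATE` arguments (as used in the examples at the end of this command) can be pulled out of the raw `$ARGUMENTS` string with a sketch like the following; the function name and parsing approach are illustrative, not part of ArcKit:

```python
import re
from datetime import date


def parse_args(arguments: str) -> dict:
    """Extract PHASE (required) and DATE (optional) from a raw argument
    string such as "PHASE=alpha DATE=2025-12-15"."""
    phase_match = re.search(r"PHASE=(alpha|beta|live)\b", arguments)
    if not phase_match:
        raise ValueError("PHASE is required: one of alpha, beta, live")
    parsed = {"phase": phase_match.group(1), "date": None}
    date_match = re.search(r"DATE=(\d{4}-\d{2}-\d{2})", arguments)
    if date_match:
        # fromisoformat also validates that the date is real
        parsed["date"] = date.fromisoformat(date_match.group(1))
    return parsed
```

With `DATE` present the result feeds the timeline calculations; without it the report falls back to "Not yet scheduled".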
Read the template (with user override support):

1. Check whether `.arckit/templates/service-assessment-prep-template.md` exists in the project root (user override)
2. Otherwise use `.arckit/templates/service-assessment-prep-template.md` (default)

**Tip**: Users can customize templates with `$arckit-customize service-assessment`
**Project setup** (if no project directory exists yet):

1. Scan `projects/*/` directories and find the highest `NNN-*` number (or start at `001` if none exist; e.g. after `001` comes `002`)
2. Create `projects/{NNN}-{slug}/README.md` with the project name, ID, and date — the Write tool will create all parent directories automatically
3. Create `projects/{NNN}-{slug}/external/README.md` with a note to place external reference documents here
4. Set `PROJECT_ID` = the 3-digit number and `PROJECT_PATH` = the new directory path

**MANDATORY inputs** (warn if missing):

- Architecture principles (`projects/000-global/`); if missing, recommend running `$arckit-principles` first
- Requirements (`projects/{project-dir}/`); if missing, recommend running `$arckit-requirements` first

**RECOMMENDED inputs** (read if available, note if missing):

**OPTIONAL inputs** (read if available, skip silently if missing):

- Project external documents (`projects/{project-dir}/external/` files) — extract previous assessment results, assessor feedback, action items, evidence gaps identified
- Global external documents (`projects/000-global/external/`) — extract enterprise service standards, previous GDS assessment reports, cross-project assessment benchmarks
- If the user references external documents that are not present, respond: "Place them in `projects/{project-dir}/external/` and re-run, or skip."

**Citations**: Follow `.arckit/references/citation-instructions.md`. Place inline citation markers (e.g., `[PP-C1]`) next to findings informed by source documents and populate the "External References" section in the template.

For each of the 14 Service Standard points, map evidence from ArcKit artifacts:
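The project-numbering scan in the setup step above can be sketched as follows; the directory layout is as described in this command, and the helper name is illustrative:

```python
import re
from pathlib import Path


def next_project_id(projects_root: str = "projects") -> str:
    """Return the next 3-digit project ID by scanning projects/*/ for the
    highest NNN-* prefix, starting at 001 when no numbered projects exist.
    000-global holds cross-project policies, so it never advances the
    counter."""
    numbers = [0]
    root = Path(projects_root)
    if root.is_dir():
        for entry in root.iterdir():
            match = re.match(r"^(\d{3})-", entry.name)
            if entry.is_dir() and match and entry.name != "000-global":
                numbers.append(int(match.group(1)))
    return f"{max(numbers) + 1:03d}"
```

So with `001-nhs-appointment` already present, the new directory becomes `projects/002-{slug}/`.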
**Point 1: Understand users and their needs**

Evidence Sources:
- `ARC-*-STKE-*.md` - User groups, needs, pain points, drivers
- `ARC-*-REQ-*.md` - User stories, personas, user journeys, acceptance criteria
- `ARC-*-PLAN-*.md` - User research activities planned/completed
- `reviews/ARC-*-HLDR-*.md` - User needs validation, usability considerations

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 2: Solve a whole problem for users**

Evidence Sources:
- `ARC-*-REQ-*.md` - End-to-end user journeys, functional requirements
- `ARC-*-STKE-*.md` - User goals, desired outcomes
- `wardley-maps/ARC-*-WARD-*.md` - Value chain, user needs to components mapping
- `diagrams/ARC-*-DIAG-*.md` - Service boundaries, external systems
- `reviews/ARC-*-HLDR-*.md` - Integration strategy, channel coverage

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 3: Provide a joined up experience across all channels**

Evidence Sources:
- `ARC-*-REQ-*.md` - Multi-channel requirements, integration points
- `reviews/ARC-*-HLDR-*.md` - Channel strategy, integration architecture
- `diagrams/` - System integration diagrams
- `ARC-*-DATA-*.md` - Data consistency across channels

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 4: Make the service simple to use**

Evidence Sources:
- `ARC-*-REQ-*.md` - Usability requirements, simplicity NFRs
- `reviews/ARC-*-HLDR-*.md` - UX design review, simplicity assessment
- `ARC-*-PLAN-*.md` - Usability testing activities

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 5: Make sure everyone can use the service**

Evidence Sources:
- `ARC-*-REQ-*.md` - WCAG 2.1 AA requirements, accessibility NFRs
- `ARC-*-SECD-*.md` - Accessibility considerations
- `reviews/ARC-*-HLDR-*.md` - Accessibility design review
- `reviews/ARC-*-DLDR-*.md` - Assistive technology compatibility

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 6: Have a multidisciplinary team**

Evidence Sources:
- `ARC-*-STKE-*.md` - RACI matrix, team roles
- `ARC-*-PLAN-*.md` - Team structure, roles, skills
- `ARC-*-SOBC-*.md` - Team costs, sustainability plan

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 7: Use agile ways of working**

Evidence Sources:
- `ARC-*-PLAN-*.md` - GDS phases, sprint structure, agile ceremonies
- `ARC-*-RISK-*.md` - Iterative risk management
- `reviews/ARC-*-HLDR-*.md`, `reviews/ARC-*-DLDR-*.md` - Design iterations

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 8: Iterate and improve frequently**

Evidence Sources:
- `reviews/ARC-*-HLDR-*.md`, `reviews/ARC-*-DLDR-*.md` - Design iterations, review dates
- `ARC-*-ANAL-*.md` - Governance improvements over time
- `ARC-*-PLAN-*.md` - Iteration cycles, review gates
- `ARC-*-REQ-*.md` - Requirements evolution

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 9: Create a secure service which protects users' privacy**

Evidence Sources:
- `ARC-*-SECD-*.md` - NCSC security principles, threat model
- `ARC-*-DATA-*.md` - GDPR compliance, data protection, PII handling
- `ARC-*-ATRS-*.md` - AI transparency and risk (if AI service)
- `ARC-*-RISK-*.md` - Security risks and mitigations
- `ARC-*-REQ-*.md` - Security and privacy NFRs
- `ARC-*-TCOP-*.md` - TCoP security points

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 10: Define what success looks like and publish performance data**

Evidence Sources:
- `ARC-*-REQ-*.md` - KPIs, success metrics, NFRs
- `ARC-*-SOBC-*.md` - Benefits realization, success criteria, ROI
- `ARC-*-PLAN-*.md` - Milestones, success criteria per phase
- `ARC-*-TCOP-*.md` - Performance metrics approach

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 11: Choose the right tools and technology**

Evidence Sources:
- `research/` - Technology research, proof of concepts
- `wardley-maps/` - Build vs buy analysis, technology evolution
- `ARC-*-TCOP-*.md` - Technology choices justified (TCoP Point 11)
- `reviews/ARC-*-HLDR-*.md` - Technology stack, architecture decisions
- `ARC-*-SOW-*.md` - Vendor selection, procurement justification
- `ARC-*-EVAL-*.md` - Technology/vendor scoring

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 12: Make new source code open**

Evidence Sources:
- `reviews/ARC-*-HLDR-*.md` - Open source approach, repository links
- `ARC-*-TCOP-*.md` - TCoP Point 12 (Open source code)
- `ARC-*-REQ-*.md` - Open source licensing requirements

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 13: Use and contribute to open standards, common components and patterns**

Evidence Sources:
- `ARC-*-TCOP-*.md` - TCoP Point 13 (Open standards)
- `reviews/ARC-*-HLDR-*.md` - GOV.UK Design System usage, API standards, common components
- `ARC-*-REQ-*.md` - Standards compliance requirements
- `ARC-*-DATA-*.md` - Data standards

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
**Point 14: Operate a reliable service**

Evidence Sources:
- `ARC-*-REQ-*.md` - Availability/reliability NFRs, SLAs
- `reviews/ARC-*-HLDR-*.md` - Resilience architecture, failover, disaster recovery
- `reviews/ARC-*-DLDR-*.md` - Infrastructure resilience, monitoring
- `ARC-*-RISK-*.md` - Operational risks, incident response

Phase-Specific Evidence Requirements:
Alpha:
Beta:
Live:
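The per-point evidence mapping above amounts to a glob scan over the project directory. A minimal sketch, with patterns copied from the Evidence Sources lists (only Points 1 and 12 shown for brevity; a full map would cover all 14):

```python
from pathlib import Path

# Glob patterns per Service Standard point, copied from the Evidence
# Sources lists above (two points shown; the real map covers all 14).
EVIDENCE_PATTERNS = {
    1: ["ARC-*-STKE-*.md", "ARC-*-REQ-*.md", "ARC-*-PLAN-*.md",
        "reviews/ARC-*-HLDR-*.md"],
    12: ["reviews/ARC-*-HLDR-*.md", "ARC-*-TCOP-*.md", "ARC-*-REQ-*.md"],
}


def map_evidence(project_path: str) -> dict:
    """Return {point: sorted artifact paths found}; an empty list flags a
    point with no supporting artifacts yet."""
    root = Path(project_path)
    return {
        point: sorted({str(p.relative_to(root))
                       for pattern in patterns
                       for p in root.glob(pattern)})
        for point, patterns in EVIDENCE_PATTERNS.items()
    }
```

Points that come back with an empty list are the first candidates for a Red rating and for the "Missing" entries in the per-point sections of the report.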
Apply phase-appropriate criteria when assessing evidence:
- **Alpha Assessment** - Focus on demonstrating viability
- **Beta Assessment** - Focus on demonstrating production readiness
- **Live Assessment** - Focus on demonstrating continuous improvement
For each Service Standard point, assign a RAG rating based on evidence found:
🟢 Green (Ready):
🟡 Amber (Partial):
🔴 Red (Not Ready):
Overall Readiness Rating:
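The roll-up from the 14 per-point RAG ratings to the overall readiness rating and score used in the executive summary can be sketched as follows; the any-red/any-amber rule is an illustrative assumption, not stated GDS policy:

```python
from collections import Counter


def overall_readiness(ratings: dict) -> tuple:
    """Roll {point: "green" | "amber" | "red"} for all 14 points up into
    (overall_rating, green_count): any red point makes the overall rating
    red, otherwise any amber makes it amber, otherwise it is green."""
    assert len(ratings) == 14, "expected a rating for each of the 14 points"
    counts = Counter(ratings.values())
    if counts["red"]:
        overall = "red"
    elif counts["amber"]:
        overall = "amber"
    else:
        overall = "green"
    # green_count feeds the "Readiness Score: X/14 points ready" line
    return overall, counts["green"]
```

The green count maps directly onto the "Readiness Score: X/14 points ready" line, and the counts breakdown fills the Green/Amber/Red tallies in the executive summary.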
For each gap identified, generate specific, actionable recommendations:
Priority Levels:
Recommendation Format:

```
Priority: [Critical/High/Medium]
Point: [Service Standard point number]
Action: [Specific action to take]
Timeline: [Estimated time to complete]
Who: [Suggested role/person]
Evidence to create: [What artifact/documentation will this produce]
```
Provide practical guidance for the assessment day, covering:
- Documentation to prepare (share with panel 1 week before)
- Who should attend
- Show and tell structure (4-hour assessment timeline)
- Tips for assessment day
Before writing the file, read .arckit/references/quality-checklist.md and verify all Common Checks plus the SVCASS per-type checks pass. Fix any failures before proceeding.
Generate a comprehensive markdown report saved to:
`projects/{project-dir}/ARC-{PROJECT_ID}-SVCASS-v1.0.md`

Example: `projects/001-nhs-appointment/ARC-001-SVCASS-v1.0.md`
# GDS Service Assessment Preparation Report
**Project**: [Project Name from ArcKit artifacts]
**Assessment Phase**: [Alpha/Beta/Live]
**Assessment Date**: [If provided, else "Not yet scheduled"]
**Report Generated**: [Current date]
**ArcKit Version**: {ARCKIT_VERSION}
---
## Executive Summary
**Overall Readiness**: 🟢 Green / 🟡 Amber / 🔴 Red
**Readiness Score**: X/14 points ready
**Breakdown**:
- 🟢 Green: X points
- 🟡 Amber: X points
- 🔴 Red: X points
**Summary**:
[2-3 paragraph summary of overall readiness, highlighting strengths and critical gaps]
**Critical Gaps** (Must address before assessment):
- [Gap 1 with Service Standard point number]
- [Gap 2 with Service Standard point number]
- [Gap 3 with Service Standard point number]
**Key Strengths**:
- [Strength 1]
- [Strength 2]
- [Strength 3]
**Recommended Timeline**:
- [X weeks/days until ready based on gap analysis]
- [If assessment date provided: "Assessment in X days - [Ready/Need to postpone]"]
---
## Service Standard Assessment (14 Points)
[For each of the 14 points, include the following detailed section]
### 1. Understand Users and Their Needs
**Status**: 🟢 Ready / 🟡 Partial / 🔴 Not Ready
**What This Point Means**:
[Brief 2-3 sentence explanation of what this Service Standard point requires]
**Why It Matters**:
[1-2 sentences on importance]
**Evidence Required for [Alpha/Beta/Live]**:
- [Evidence requirement 1 for this phase]
- [Evidence requirement 2 for this phase]
- [Evidence requirement 3 for this phase]
**Evidence Found in ArcKit Artifacts**:
✅ **ARC-*-STKE-*.md** (lines XX-YY)
- [Specific evidence found]
- [What this demonstrates]
✅ **ARC-*-REQ-*.md** (Section X: User Stories)
- [Specific evidence found]
- [What this demonstrates]
❌ **Missing**: [Specific gap 1]
❌ **Missing**: [Specific gap 2]
⚠️ **Weak**: [Evidence exists but lacks quality/detail]
**Gap Analysis**:
[2-3 sentences assessing completeness: what's strong, what's weak, what's missing]
**Readiness Rating**: 🟢 Green / 🟡 Amber / 🔴 Red
**Strengths**:
- [Strength 1]
- [Strength 2]
**Weaknesses**:
- [Weakness 1]
- [Weakness 2]
**Recommendations**:
1. **Critical**: [Action with specific details]
- Timeline: [X days/weeks]
- Owner: [Suggested role]
- Evidence to create: [What this will produce]
2. **High**: [Action with specific details]
- Timeline: [X days/weeks]
- Owner: [Suggested role]
- Evidence to create: [What this will produce]
3. **Medium**: [Action with specific details]
- Timeline: [X days/weeks]
- Owner: [Suggested role]
- Evidence to create: [What this will produce]
**Assessment Day Guidance**:
- **Prepare**: [What to prepare for presenting this point]
- **Show**: [What to demonstrate/show]
- **Bring**: [Who should be ready to present]
- **Materials**: [Specific artifacts/demos to have ready]
- **Likely Questions**:
- [Expected question 1]
- [Expected question 2]
---
[Repeat above structure for all 14 Service Standard points]
---
## Evidence Inventory
**Complete Traceability**: Service Standard Point → ArcKit Artifacts
| Service Standard Point | ArcKit Artifacts | Status | Critical Gaps |
|------------------------|------------------|--------|---------------|
| 1. Understand users | ARC-*-STKE-*.md, ARC-*-REQ-*.md | 🟡 Partial | Prototype testing with users |
| 2. Solve whole problem | ARC-*-REQ-*.md, wardley-maps/ | 🟢 Complete | None |
| 3. Joined up experience | reviews/ARC-*-HLDR-*.md, diagrams/ | 🟡 Partial | Channel integration testing |
| 4. Simple to use | ARC-*-REQ-*.md, reviews/ARC-*-HLDR-*.md | 🟢 Complete | None |
| 5. Everyone can use | ARC-*-REQ-*.md, ARC-*-SECD-*.md | 🔴 Not Ready | WCAG 2.1 AA testing |
| 6. Multidisciplinary team | ARC-*-STKE-*.md, ARC-*-PLAN-*.md | 🟢 Complete | None |
| 7. Agile ways of working | ARC-*-PLAN-*.md | 🟢 Complete | None |
| 8. Iterate frequently | reviews/ARC-*-HLDR-*.md, reviews/ARC-*-DLDR-*.md | 🟡 Partial | Iteration log |
| 9. Secure and private | ARC-*-SECD-*.md, ARC-*-DATA-*.md | 🟢 Complete | None |
| 10. Success metrics | ARC-*-REQ-*.md, ARC-*-SOBC-*.md | 🟡 Partial | Performance dashboard |
| 11. Right tools | research/, wardley-maps/, ARC-*-TCOP-*.md | 🟢 Complete | None |
| 12. Open source | reviews/ARC-*-HLDR-*.md | 🔴 Not Ready | Public code repository |
| 13. Open standards | ARC-*-TCOP-*.md, reviews/ARC-*-HLDR-*.md | 🟢 Complete | None |
| 14. Reliable service | ARC-*-REQ-*.md, reviews/ARC-*-HLDR-*.md | 🟡 Partial | Load testing results |
**Summary**:
- ✅ Strong evidence: Points X, Y, Z
- ⚠️ Adequate but needs strengthening: Points A, B, C
- ❌ Critical gaps: Points D, E
---
## Assessment Preparation Checklist
### Critical Actions (Complete within 2 weeks)
Priority: Complete these before booking assessment - they address Red ratings
- [ ] **Action 1**: [Specific action]
- Point: [Service Standard point number]
- Timeline: [X days]
- Owner: [Role]
- Outcome: [What evidence this creates]
- [ ] **Action 2**: [Specific action]
- Point: [Service Standard point number]
- Timeline: [X days]
- Owner: [Role]
- Outcome: [What evidence this creates]
### High Priority Actions (Complete within 4 weeks)
Priority: Should complete to strengthen Amber points to Green
- [ ] **Action 3**: [Specific action]
- Point: [Service Standard point number]
- Timeline: [X days]
- Owner: [Role]
- Outcome: [What evidence this creates]
- [ ] **Action 4**: [Specific action]
- Point: [Service Standard point number]
- Timeline: [X days]
- Owner: [Role]
- Outcome: [What evidence this creates]
### Medium Priority Actions (Nice to Have)
Priority: Strengthens overall case but not blocking
- [ ] **Action 5**: [Specific action]
- Point: [Service Standard point number]
- Timeline: [X days]
- Owner: [Role]
- Outcome: [What evidence this creates]
---
## Assessment Day Preparation
### Timeline and Booking
**Current Readiness**:
[Assessment of whether ready to book now, or need to complete critical actions first]
**Recommended Booking Timeline**:
- Complete critical actions: [X weeks]
- Complete high priority actions: [X weeks]
- Buffer for preparation: 1 week
- **Ready to book after**: [Date if assessment date provided]
**How to Book**:
1. Contact GDS Central Digital & Data Office assessment team
2. Book at least 5 weeks in advance
3. Assessments typically on Tuesday, Wednesday, or Thursday
4. Duration: 4 hours
5. Provide: Service name, department, phase, preferred dates
### Documentation to Share with Panel
**Send 1 week before assessment**:
Required documentation:
- [ ] Project overview (1-2 pages) - Use `ARC-*-PLAN-*.md` summary
- [ ] User research repository or summary - From `ARC-*-STKE-*.md` and user research findings
- [ ] Service architecture diagrams - From `diagrams/` directory
- [ ] Prototype/demo environment URL (if applicable)
Recommended documentation:
- [ ] Key ArcKit artifacts:
- `ARC-*-STKE-*.md` - Stakeholders and user needs
- `ARC-*-REQ-*.md` - Requirements and user stories
- `reviews/ARC-*-HLDR-*.md` - Architecture decisions
- `ARC-*-SECD-*.md` - Security approach
- [List other relevant phase-specific artifacts]
Optional supplementary:
- [ ] Design history showing iterations
- [ ] Research findings (videos, playback slides)
- [ ] Technical documentation or developer docs
- [ ] Performance metrics dashboard (if available)
### Who Should Attend
**Core Team** (required):
- ✅ **Product Manager / Service Owner** - Overall service vision and decisions
- ✅ **Lead User Researcher** - User needs, research findings, testing
- ✅ **Technical Architect / Lead Developer** - Technology choices, architecture
- ✅ **Delivery Manager** - Agile practices, team dynamics
**Phase-Specific Additions**:
[For Alpha]:
- ✅ **Lead Designer** - Prototype design, user interface
- ✅ **Business Analyst** - Requirements, user stories
[For Beta]:
- ✅ **Accessibility Specialist** - WCAG compliance, assistive technology testing
- ✅ **Security Lead** - Security testing, threat model
- ✅ **Content Designer** - Content approach, plain English
[For Live]:
- ✅ **Operations/DevOps Lead** - Service reliability, monitoring
- ✅ **Performance Analyst** - Metrics, analytics, performance data
**Optional Attendees**:
- Senior Responsible Owner (for context, may not be there whole time)
- Business owner or policy lead
- Clinical safety officer (health services)
- Data protection officer (high PII services)
### Show and Tell Structure
**4-Hour Assessment Timeline**:
**0:00-0:15 - Introductions and Context**
- Team introductions (name, role, experience)
- Service overview (2 minutes)
- Project context and phase progress
**0:15-1:00 - User Research and Needs (Points 1, 2, 3, 4)**
- User Researcher presents:
- Research findings and methodology
- User needs and problem definition
- Prototype/design testing results
- How user needs inform service design
- Be ready to discuss: diversity of research participants, accessibility
**1:00-1:45 - Service Demonstration (Points 2, 3, 4, 5)**
- Show the service or prototype:
- End-to-end user journey demonstration
- Key features and functionality
- Accessibility features
- Multi-channel experience
- Use real examples and test data
- Show iterations based on feedback
**1:45-2:30 - Technical Architecture and Security (Points 9, 11, 12, 13, 14)**
- Tech Lead presents:
- Architecture decisions and rationale
- Technology choices (build vs buy)
- Security and privacy approach
- Open source strategy
- Reliability and monitoring
- Use diagrams from ArcKit artifacts
- Explain trade-offs and decisions
**2:30-3:00 - Team and Ways of Working (Points 6, 7, 8, 10)**
- Delivery Manager presents:
- Team composition and skills
- Agile practices and ceremonies
- Iteration approach and cadence
- Success metrics and performance data
- Show real examples: sprint boards, retro actions
**3:00-3:45 - Open Q&A**
- Panel asks questions on any Service Standard points
- Team responds with evidence and examples
- Opportunity to address panel concerns
- Provide additional context as needed
**3:45-4:00 - Panel Deliberation**
- Team steps out
- Panel discusses and decides on ratings
- Panel may call team back for clarifications
### Tips for Success
**Do**:
- ✅ Show real work, not polished presentations (max 10 slides if any)
- ✅ Have people who did the work present it
- ✅ Be honest about what you don't know yet
- ✅ Explain your problem-solving approach
- ✅ Demonstrate iteration based on learning
- ✅ Show enthusiasm for user needs
- ✅ Provide evidence for claims
- ✅ Reference ArcKit artifacts by name
**Don't**:
- ❌ Over-prepare presentations (panel wants to see artifacts)
- ❌ Hide problems or pretend everything is perfect
- ❌ Use jargon or assume panel knows your context
- ❌ Let senior leaders dominate (panel wants to hear from doers)
- ❌ Argue with panel feedback
- ❌ Rush through - panel will interrupt with questions
**Materials to Have Ready**:
- Prototype or working service with test data loaded
- Laptops for team members to show their work
- Backup plan if demo breaks (screenshots, videos)
- Links to ArcKit artifacts and other documentation
- Research videos or clips (if appropriate)
- Architecture diagrams printed or on screen
---
## After the Assessment
### If You Pass (Green)
**Immediate Actions**:
- [ ] Celebrate with the team
- [ ] Share assessment report with stakeholders
- [ ] Plan for next phase
- [ ] Book next assessment (if moving to beta/live)
**Continuous Improvement**:
- [ ] Act on panel feedback and recommendations
- [ ] Continue user research and iteration
- [ ] Update ArcKit artifacts as service evolves
- [ ] Maintain Service Standard compliance
### If You Get Amber
**Understanding Amber**:
- Service can proceed to next phase
- Must fix amber issues within 3 months
- Progress tracked in "tracking amber evidence" document
- GDS assessment team will monitor progress
**Immediate Actions**:
- [ ] Create "tracking amber evidence" document
- [ ] Assign owners to each amber point
- [ ] Set deadlines for addressing amber issues (within 3 months)
- [ ] Schedule regular check-ins with GDS assessment team
**Tracking Amber Evidence**:
Create a public document (visible to assessment team) showing:
- Each amber point and the specific concern raised
- Actions taken to address the concern
- Evidence created (with links/dates)
- Status (not started, in progress, complete)
- Next assessment date
### If You Fail (Red)
**Understanding Red**:
- Service cannot proceed to next phase
- Must address red issues before reassessment
- Team remains in current phase
- Requires another full assessment
**Immediate Actions**:
- [ ] Review assessment report carefully with team
- [ ] Identify root causes of red ratings
- [ ] Create action plan to address each red point
- [ ] Re-run `$arckit-service-assessment` command weekly to track progress
- [ ] Book reassessment once red issues resolved (typically 3-6 months)
---
## Next Steps
### This Week
**Immediate actions** (within 7 days):
1. [Action 1 from critical list]
2. [Action 2 from critical list]
3. [Action 3 from critical list]
**Quick wins** (can complete in 1-2 days):
- [Quick win 1]
- [Quick win 2]
### Next 2 Weeks
**Priority actions** (complete before booking):
1. [Action from critical list]
2. [Action from critical list]
3. [Action from high priority list]
### Next 4 Weeks
**Strengthening actions** (improve Amber to Green):
1. [Action from high priority list]
2. [Action from high priority list]
3. [Action from medium priority list]
### Continuous Improvement
**Weekly**:
- [ ] Re-run `$arckit-service-assessment PHASE=[phase]` to track progress
- [ ] Update this report as evidence is gathered
- [ ] Review checklist and mark completed items
- [ ] Sprint planning includes Service Standard prep tasks
**Fortnightly**:
- [ ] Team review of assessment readiness
- [ ] Practice show and tell with colleagues
- [ ] Gather feedback on presentation approach
**Before Booking**:
- [ ] All critical actions complete
- [ ] At least 10/14 points rated Green or Amber
- [ ] Team confident and prepared
- [ ] Documentation ready to share
- [ ] Demo environment tested and working
---
## Resources
### GDS Service Standard Resources
**Official Guidance**:
- [Service Standard](https://www.gov.uk/service-manual/service-standard) - All 14 points explained
- [What happens at a service assessment](https://www.gov.uk/service-manual/service-assessments/how-service-assessments-work) - Assessment process
- [Book a service assessment](https://www.gov.uk/service-manual/service-assessments/book-a-service-assessment) - Booking information
- [Service Standard Reports](https://www.gov.uk/service-standard-reports) - Browse 450+ published assessment reports
**Phase-Specific Guidance**:
- [Alpha phase](https://www.gov.uk/service-manual/agile-delivery/how-the-alpha-phase-works) - What to do in alpha
- [Beta phase](https://www.gov.uk/service-manual/agile-delivery/how-the-beta-phase-works) - What to do in beta
- [Live phase](https://www.gov.uk/service-manual/agile-delivery/how-the-live-phase-works) - What to do when live
**Deep Dives by Service Standard Point**:
[Links to all 14 individual point pages on GOV.UK]
### Related ArcKit Commands
**Complementary Analysis**:
- `$arckit-analyze` - Comprehensive governance quality analysis
- `$arckit-traceability` - Requirements traceability matrix showing evidence chains
**Overlap with TCoP**:
- `$arckit-tcop` - Technology Code of Practice assessment (points 11, 13 overlap)
**Generate Missing Evidence**:
- `$arckit-requirements` - If user stories or NFRs weak
- `$arckit-hld-review` - If architecture decisions not documented
- `$arckit-secure` - If security assessment incomplete
- `$arckit-diagram` - If architecture diagrams missing
- `$arckit-wardley` - If technology strategy not clear
### Community Resources
**Blog Posts and Lessons Learned**:
- [Preparing for a GDS assessment](https://www.iterate.org.uk/10-things-to-remember-when-preparing-for-a-service-standard-assessment/)
- [What I learned as a user researcher](https://dwpdigital.blog.gov.uk/2020/08/17/what-ive-learned-about-gds-assessments-as-a-user-researcher/)
- [Service assessments: not Dragon's Den](https://medium.com/deloitte-uk-design-blog/service-assessments-no-longer-dragons-den-909b56c43593)
**Supplier Support** (G-Cloud):
- Search Digital Marketplace for "GDS assessment preparation" support services
- Many suppliers offer assessment prep workshops and mock assessments
---
## Appendix: Assessment Outcome Examples
### Example: Strong Alpha Pass (Green)
**Typical characteristics**:
- 12-14 points rated Green
- Excellent user research with diverse participants
- Working prototype tested extensively
- Clear technology choices with justification
- Strong multidisciplinary team
- Agile practices established and working well
**Panel feedback themes**:
- "Strong user research foundation"
- "Clear evidence of iteration based on feedback"
- "Team has right skills and working well together"
- "Technology choices well justified"
### Example: Alpha with Amber
**Typical characteristics**:
- 8-11 points Green, 3-5 Amber, 0-1 Red
- Good user research but gaps in diversity
- Prototype exists but limited testing
- Technology chosen but not fully tested
- Team in place but some skills gaps
**Common amber points**:
- Point 1: Need more diverse user research participants
- Point 5: Accessibility considerations identified but not tested
- Point 8: Iterations happening but not clearly documented
- Point 12: Open source approach decided but not yet implemented
**Panel feedback themes**:
- "Good start, needs more evidence of [X]"
- "Continue to build on [strength] and address [gap]"
- "By beta, we expect to see [specific improvement]"
### Example: Beta with Critical Issues (Red)
**Typical characteristics**:
- Major gaps in 2-3 points
- Often accessibility (Point 5) or performance data (Point 10)
- Service working but quality issues
- Security or privacy concerns
**Common red points**:
- Point 5: WCAG 2.1 AA testing not completed (critical for beta)
- Point 9: Security testing not done or serious vulnerabilities found
- Point 10: No performance data being collected
- Point 14: Service unreliable, frequent downtime
**Panel feedback themes**:
- "Cannot proceed to public beta until [critical issue] resolved"
- "This is essential for a beta service"
- "Team needs to prioritise [issue] immediately"
---
**Report Generated by**: ArcKit v{ARCKIT_VERSION} `$arckit-service-assessment` command
**Next Actions**:
1. Review this report with your team
2. Prioritize critical actions in your sprint planning
3. Re-run `$arckit-service-assessment PHASE=[phase]` weekly to track progress
4. Use checklist to track completion of preparation tasks
**Questions or Feedback**:
- Report issues: https://github.com/tractorjuice/arc-kit/issues
- Contribute improvements: PRs welcome
- Share your assessment experience: Help improve this command for others
---
*Good luck with your assessment! Remember: assessments are conversations about your service, not exams. Show your work, explain your thinking, and be open to feedback. The panel wants you to succeed.* 🚀
Tone and Approach:
Quality Standards:
Important Notes:
**Examples**:

`$arckit-service-assessment PHASE=alpha DATE=2025-12-15`
Generates: `projects/001-nhs-appointment/ARC-001-SVCASS-v1.0.md`

`$arckit-service-assessment PHASE=beta`
Generates: `projects/002-payment-gateway/ARC-002-SVCASS-v1.0.md`
This command succeeds when:
Transform ArcKit documentation into Service Standard compliance evidence. Demonstrate governance excellence. ✨
When writing metric targets, escape or space-pad `<` and `>` (e.g., `< 3 seconds`, `> 99.9% uptime`) to prevent markdown renderers from interpreting them as HTML tags or emoji