Orchestrate the full game development workflow — Phase 0 design pipeline, sprint cycles, lifecycle gates. Reads workflow state to ensure correct sequencing across sessions. Run this before starting any game development work.
This skill is the workflow enforcement layer for the agent team. It reads a state file to determine the current position in the workflow, then executes the correct next step — invoking the right skills, spawning teams at the right phase, and enforcing user approval at every gate.
| Field | Value |
|---|---|
| Assigned Agent | Orchestrator (main Claude instance) |
| Sprint Phase | All phases — this skill manages the entire workflow |
| Directory Scope | docs/.workflow-state.json (state file only) |
| Workflow Reference | See docs/agent-team-workflow.md |
Users can trigger this skill by saying:
/project-orchestrator

Every time this skill is invoked, immediately do the following:
Look for: docs/.workflow-state.json
Read docs/.workflow-state.json
Parse workflow_position to determine current step
Display the status dashboard (see Status Display Format below)
Branch to the appropriate section based on workflow_position.phase:
- pre_phase_0 → Begin Phase 0 (step 0.1)
- phase_0 → Resume Phase 0 at the current step
- sprint → Resume Sprint orchestration at the current sprint phase

If status is awaiting_user_approval: The previous session was interrupted before the user approved. Re-present the artifact and use AskUserQuestion to ask for approval.
If status is in_progress: The previous session was interrupted mid-step. Check if the artifact file exists:
- If it exists → present it for approval (set status to awaiting_user_approval)
- If it does not exist → re-run the step

If no state file exists, this is a new project:
- Check whether the Godot project structure exists (scripts/autoloads/, scenes/gameplay/, etc.). If not, tell the user to run /project-bootstrap first to set up the directory structure.
- Create docs/.workflow-state.json with:
{
"version": "1.0.0",
"project_name": "[read from project.godot or CLAUDE.md]",
"created_at": "[current ISO timestamp]",
"updated_at": "[current ISO timestamp]",
"lifecycle_phase": "not_started",
"workflow_position": {
"phase": "pre_phase_0",
"step": null,
"status": "pending",
"substep": null
},
"phase_0_progress": {
"game_ideator": { "status": "pending", "artifact": null, "approved_at": null },
"concept_validator": { "status": "pending", "artifact": null, "approved_at": null },
"design_bible_updater": { "status": "pending", "artifact": null, "approved_at": null },
"art_reference_collector": { "status": "pending", "artifact": null, "approved_at": null },
"audio_reference_collector": { "status": "pending", "artifact": null, "approved_at": null },
"narrative_reference_collector": { "status": "pending", "artifact": null, "approved_at": null },
"game_vision_generator": { "status": "pending", "artifact": null, "approved_at": null },
"gdd_generator": { "status": "pending", "artifact": null, "approved_at": null },
"roadmap_planner": { "status": "pending", "artifact": null, "approved_at": null },
"feature_pipeline": {
"sprint_1_features": [],
"sprint_2_features": [],
"sprint_3_features": [],
"sprint_4_features": []
}
},
"epics": [],
"sprints": [],
"lifecycle_gates": {
"prototype_gate": { "status": "pending", "decision": null, "decided_at": null },
"vertical_slice_gate": { "status": "pending", "decision": null, "decided_at": null }
}
}
Every Phase 0 step and every sprint agent task requires reading a specific skill file.
The mapping is defined in skill-registry.json (in this skill's directory).
Before executing ANY step:
1. Look up the current step in skill-registry.json to find the required skill path
2. Read that skill file before producing any artifact

A PreToolUse hook (enforce-skill-read.sh) enforces this — artifact writes are blocked unless the required skill was read first. A PostToolUse hook (track-skill-read.sh) automatically records skill reads in the state file's skill_reads field.
This is NON-NEGOTIABLE even if you "know" what the skill does. The hooks track reads mechanically and will block writes if the read didn't happen in the current step.
All status updates, deliveries, and transitions MUST use the standardized report formats defined in:
.claude/skills/project-orchestrator/report-formats.md
Read this file at the start of every session. It defines templates for:
Rules:
- Use heavy banners (══) for milestones only (sprint start/end, epic reviews, gates)
- Use light rules (──) for deliverables (features, phase completions, artifacts)

A PreToolUse hook (enforce-report-delivery.sh) blocks ALL artifact writes when pending_reports in the state file is non-empty. This ensures reports are always delivered at transition points.

At each transition point:
1. Populate pending_reports in the state file with the required report type(s)
2. Present each report using the templates in report-formats.md
3. Clear pending_reports (set to []) in the state file

| Transition | Set pending_reports to | Report Format |
|---|---|---|
| Sprint Phase D approved (CONTINUE) | ["sprint_end_review"] | Sprint End Review (#4) |
| Starting a new sprint (before Phase A) | ["sprint_start"] | Sprint Start Card (#4) |
| All sprints in epic completed | ["epic_review"] | Epic Review Banner (#5) |
| Lifecycle gate reached | ["lifecycle_gate"] | Lifecycle Gate Banner (#6) |
| Phase B.5 completed | ["integration_check"] | Integration Check Report (#8) |
| Phase A→B, B→B.5, C→D | ["phase_transition"] | Phase Transition Card (#2) |
When multiple reports are required at the same transition (e.g., sprint completes AND epic completes), set all of them:
"pending_reports": ["sprint_end_review", "epic_review"]
Present each report in order, then clear pending_reports to [].
1. User approves Sprint N Phase D → CONTINUE
2. Write state: sprint.current_phase = "completed", pending_reports = ["sprint_end_review"]
3. Present Sprint End Review card (format #4)
4. Check: is this the last sprint in the epic?
- If YES: write state: pending_reports = ["epic_review", "sprint_start"]
- If NO: write state: pending_reports = ["sprint_start"]
5. Present Epic Review banner if required (format #5)
6. Present Sprint Start card (format #4)
7. Write state: pending_reports = []
8. NOW you can write feature specs and other artifacts
This is NON-NEGOTIABLE. The hook will block writes to docs/features/, scripts/, scenes/, and all other artifact paths until pending_reports is empty.
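The report-gating behavior can be sketched as follows. This is an illustrative Python model of the hook's logic, not the hook itself; the path prefixes shown are an assumed subset of the blocked artifact paths.

```python
# Illustrative subset of artifact paths the hook protects.
ARTIFACT_PREFIXES = ("docs/features/", "scripts/", "scenes/")


def write_allowed(state: dict, path: str) -> bool:
    """Mirror enforce-report-delivery: block artifact writes while reports are pending."""
    if not state.get("pending_reports"):
        return True  # queue empty: all writes allowed
    return not path.startswith(ARTIFACT_PREFIXES)


def deliver_reports(state: dict, present) -> None:
    """Present each pending report in order, then clear the queue."""
    for report in state.get("pending_reports", []):
        present(report)
    state["pending_reports"] = []
```

Usage: while `pending_reports` is non-empty, only non-artifact paths (such as the state file itself) may be written; after `deliver_reports` runs, artifact writes unblock.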
Mode: No team. You (the orchestrator) act as the design-lead agent directly, working interactively with the user. Do NOT create a team for Phase 0.
Before starting: Read .claude/agents/design-lead.md for role context.
Precondition: State is at pre_phase_0 or phase_0 with step game_ideator
Action:
1. Update state: workflow_position → { phase: "phase_0", step: "game_ideator", status: "in_progress" }
2. Read .claude/skills/game-ideator/SKILL.md and execute it
3. Write the output to docs/ideas/game-concepts.md
4. Update state: phase_0_progress.game_ideator.artifact → "docs/ideas/game-concepts.md", status → "awaiting_user_approval"

User Gate:
STOP. Present a summary of the generated concepts and reference file docs/ideas/game-concepts.md, then use AskUserQuestion with these options:
Question: "How do you want to proceed with Game Concept Generation?" Options:
If the user selects APPROVE, use a follow-up AskUserQuestion to ask which concept to select (list concepts as options).
On APPROVE: Update status → "completed", record approved_at, proceed to Step 0.2
On MODIFY: Update status → "user_requested_changes", gather feedback, re-run skill
On REJECT: Update status → "pending", re-run from scratch
Precondition: game_ideator status is completed or skipped
Action:
"concept_validator", status → "in_progress".claude/skills/concept-validator/SKILL.mddocs/ideas/concept-validation.md"awaiting_user_approval"User Gate:
STOP. Present a summary of the feasibility assessment, risks identified, and recommended mitigations. Reference file docs/ideas/concept-validation.md, then use AskUserQuestion with these options:
Question: "How do you want to proceed with Concept Validation?" Options:
On APPROVE: Mark completed, proceed to Step 0.3
On MODIFY: Gather feedback, re-validate with adjusted scope
On REJECT: Reset this step AND Step 0.1 to pending, return to Step 0.1
Precondition: concept_validator status is completed or skipped
Action:
"design_bible_updater", status → "in_progress".claude/skills/design-bible-updater/SKILL.mddocs/design-bible.md"awaiting_user_approval"User Gate:
STOP. Present a summary of the design pillars, vision statement, and creative direction. Reference file docs/design-bible.md. Emphasize that the design bible guides ALL future decisions — this is the most important approval. Then use AskUserQuestion with these options:
Question: "How do you want to proceed with the Design Bible?" Options:
On APPROVE: Mark completed, proceed to Steps 0.3a/0.3b/0.3c (Reference Collection), or skip to Step 0.4 if the user declines
On MODIFY: Gather specific feedback on pillars/tone, revise
On REJECT: Reset to pending, start fresh
Precondition: design_bible_updater status is completed or skipped
Action:
"art_reference_collector", status → "in_progress".claude/skills/art-reference-collector/SKILL.mddocs/art-direction.md and style anchors to assets/references/"awaiting_user_approval"User Gate:
STOP. Present a summary of: reference images collected per category, style analysis highlights (palette, line style, scale), which images were selected as style anchors. Reference file docs/art-direction.md, then use AskUserQuestion:
Question: "How do you want to proceed with Art Direction?" Options:
Precondition: design_bible_updater status is completed or skipped
Action:
"audio_reference_collector", status → "in_progress".claude/skills/audio-reference-collector/SKILL.mddocs/audio-direction.md"awaiting_user_approval"User Gate:
STOP. Present a summary of: audio identity analysis (genre, mood, instrumentation), Epidemic Sound matches found per context, search anchor table. Reference file docs/audio-direction.md, then use AskUserQuestion:
Question: "How do you want to proceed with Audio Direction?" Options:
Precondition: design_bible_updater status is completed or skipped
Action:
"narrative_reference_collector", status → "in_progress".claude/skills/narrative-reference-collector/SKILL.mddocs/narrative-direction.md"awaiting_user_approval"User Gate:
STOP. Present a summary of: narrative structure (model, lore hierarchy, dialogue voice), emotional design targets, key templates created. Reference file docs/narrative-direction.md, then use AskUserQuestion:
Question: "How do you want to proceed with Narrative Direction?" Options:
Precondition: design_bible_updater status is completed or skipped. Reference collectors (0.3a-c) are completed, skipped, or pending (they don't block the vision).
Action:
"game_vision_generator", status → "in_progress".claude/skills/game-vision-generator/SKILL.mddocs/game-vision.mdphase_0_progress.game_vision_generator.artifact → "docs/game-vision.md"status → "awaiting_user_approval"User Gate:
STOP. Present a summary of the vision including: game identity, total mechanics cataloged, scope map breakdown (how many features per lifecycle phase), and key design decisions. Reference file docs/game-vision.md. Emphasize that this vision is the master plan — GDDs will scope down from it. Then use AskUserQuestion with these options:
Question: "How do you want to proceed with the Full Game Vision?" Options:
On APPROVE: Mark completed, proceed to Step 0.5
On MODIFY: Gather specific feedback, revise relevant sections
On REJECT: Reset to pending, start fresh
Precondition: game_vision_generator status is completed or skipped. The GDD should reference docs/game-vision.md to scope down to prototype-appropriate features.
Action:
"gdd_generator", status → "in_progress".claude/skills/gdd-generator/SKILL.mddocs/prototype-gdd.md"awaiting_user_approval"User Gate:
STOP. Present a summary of the GDD including core loop, mechanics, systems needed, and scope. Reference file docs/prototype-gdd.md, then use AskUserQuestion with these options:
Question: "How do you want to proceed with the Prototype GDD?" Options:
On APPROVE: Mark completed, proceed to Step 0.6
On MODIFY: Gather feedback, revise specific sections
On REJECT: Reset to pending
Precondition: gdd_generator status is completed or skipped
Action:
"roadmap_planner", status → "in_progress".claude/skills/roadmap-planner/SKILL.mddocs/prototype-roadmap.md"awaiting_user_approval"User Gate:
STOP. Present a summary of the sprint breakdown including how many sprints, what each delivers, and dependencies. Reference file docs/prototype-roadmap.md, then use AskUserQuestion with these options:
Question: "How do you want to proceed with the Prototype Roadmap?" Options:
On APPROVE: Mark completed, proceed to Step 0.7 (Feature Pipeline)
On MODIFY: Gather feedback on sprint scope/ordering, revise
On REJECT: Reset to pending
Precondition: roadmap_planner status is completed or skipped
This step is iterative — it runs once per feature in Sprint 1.
Action:
Read the roadmap (docs/prototype-roadmap.md) to identify Sprint 1 features
Update state: step → "feature_pipeline", status → "in_progress"
For each Sprint 1 feature that hasn't been processed:
a) Idea Brief (feature-spec-generator):
   - Read .claude/skills/feature-spec-generator/SKILL.md
   - Write the idea brief to docs/ideas/[feature-name]-idea.md

b) Feature Spec (feature-spec-generator):
   - Read .claude/skills/feature-spec-generator/SKILL.md
   - Write the spec to docs/features/[feature-name].md

Track each feature's progress in phase_0_progress.feature_pipeline.sprint_N_features using this per-feature structure:
{
"name": "feature-name",
"idea_brief": { "status": "pending|skipped|completed", "artifact": null },
"feature_spec": { "status": "pending|completed", "artifact": "docs/features/feature-name.md", "approved_at": null }
}
Populate entries for ALL sprints defined in the roadmap (not just Sprint 1). Future sprint features start with "status": "pending".
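Populating the pipeline from the roadmap can be sketched as below. The `roadmap` dict shape (sprint number → list of feature names) is an assumption for illustration; the real data comes from parsing docs/prototype-roadmap.md.

```python
def populate_pipeline(roadmap: dict) -> dict:
    """Build feature_pipeline entries for every sprint in the roadmap, not just Sprint 1.

    roadmap maps sprint number -> list of feature names (assumed shape).
    """
    pipeline = {}
    for sprint_number, features in roadmap.items():
        pipeline[f"sprint_{sprint_number}_features"] = [
            {
                "name": name,
                "idea_brief": {"status": "pending", "artifact": None},
                # The spec artifact path is known up front; approval comes later.
                "feature_spec": {
                    "status": "pending",
                    "artifact": f"docs/features/{name}.md",
                    "approved_at": None,
                },
            }
            for name in features
        ]
    return pipeline
```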
When ALL Sprint 1 features have approved specs → Phase 0 is complete
User Gate (per feature): STOP after each idea brief and each spec. Present a summary of the document produced, then use AskUserQuestion with these options:
For idea briefs: Question: "How do you want to proceed with the [feature-name] Idea Brief?" Options:
For feature specs: Question: "How do you want to proceed with the [feature-name] Feature Spec?" Options:
When all Phase 0 steps are completed (or skipped):
lifecycle_phase → "prototype"workflow_position → { phase: "sprint", step: "A", status: "pending" }sprints arrayMode: Multi-agent team. Use TeamCreate, Task, TaskCreate, SendMessage tools.
Reference: See docs/agent-team-workflow.md for full sprint details and phase-transitions.md for valid transitions.
When beginning a new sprint:
{
"epic_number": E,
"name": "[Player-facing goal from roadmap]",
"lifecycle_phase": "[current lifecycle phase]",
"status": "in_progress",
"sprint_numbers": [N],
"review": null
}
If the epic already exists, append this sprint number to sprint_numbers.

1. Create the sprint branch: git checkout -b sprint/[N]-[short-description]
2. Create the team: TeamCreate: team_name="sprint-[N]", description="Sprint [N]: [deliverable]"
3. Add a sprint entry to the sprints array:

{
"sprint_number": N,
"name": "[deliverable slice name]",
"epic_number": E,
"branch": "sprint/N-description",
"lifecycle_phase": "[current lifecycle phase]",
"current_phase": "A",
"team_name": "sprint-N",
"godot_path": null,
"phases": {
"A": { "status": "pending", "agents": [], "completed_at": null, "notes": null },
"B": { "status": "pending", "agents": [], "completed_at": null, "notes": null },
"B5": { "status": "pending", "checklist": {}, "issues_found": 0, "issues_fixed": 0, "smoke_test": null, "notes": null },
"C": { "status": "pending", "agents": [], "completed_at": null, "notes": null },
"D": { "status": "pending", "substep": null, "fix_loop": { "iterations": [] }, "approval": {} }
},
"features": ["feature-name-1", "feature-name-2"],
"tasks": []
}
Read docs/prototype-roadmap.md (or equivalent) and create a task entry for EVERY task listed in this sprint, using this schema:
{
"id": "SYS-N.M",
"agent": "systems-dev|gameplay-dev|ui-dev|content-architect|asset-artist|team-lead",
"phase": "A|B|B5|C|D",
"feature": "feature-name or null for integration/bug tasks",
"description": "What this task does — one line",
"status": "pending|completed",
"files_created": [],
"files_modified": [],
"completed_at": null,
"notes": null
}
Task ID prefixes: SYS- (systems-dev), GP- (gameplay-dev), UI- (ui-dev), CON- (content-architect), ART- (asset-artist), TL- (team-lead).
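A small validity check for task entries can be sketched like this; the prefix-to-agent mapping follows the task schema above, and the function name is illustrative.

```python
# Agent name -> required task ID prefix, per the sprint task schema.
PREFIX_FOR_AGENT = {
    "systems-dev": "SYS-",
    "gameplay-dev": "GP-",
    "ui-dev": "UI-",
    "content-architect": "CON-",
    "asset-artist": "ART-",
    "team-lead": "TL-",
}


def valid_task_id(task: dict) -> bool:
    """Check that a task entry's id carries its agent's prefix, e.g. SYS-1.2 for systems-dev."""
    prefix = PREFIX_FOR_AGENT.get(task.get("agent", ""))
    return prefix is not None and task.get("id", "").startswith(prefix)
```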
Include tasks for ALL phases (A, B, B5, C, D) that are defined in the roadmap.

IMPORTANT — Task Tracking During Sprint:
- As each task completes: set its status to "completed", fill in files_created, files_modified, completed_at, and notes
- During Phase D fix loops: add TL- tasks for each fix
- When a phase completes: record completed_at and notes on the phase entry
- For Phase B5: fill in checklist, issues_found, issues_fixed, smoke_test
- For Phase D: fill in the approval object with per-feature decisions

Update sprint current_phase → "A", Phase A status → "in_progress"
Write state file
Spawn agents:
design-lead (if specs need refinement):
Task: name="design-lead", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the design-lead agent. Read .claude/agents/design-lead.md for your role.
Read CLAUDE.md and docs/agent-team-workflow.md for context.
Your task: refine and finalize feature specs for Sprint N: [list features]"
systems-dev:
Task: name="systems-dev", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the systems-dev agent. Read .claude/agents/systems-dev.md for your role.
Read CLAUDE.md and docs/agent-team-workflow.md for context.
Your task: implement system-level features for Sprint N. Read these specs: [list].
Use the feature-implementer skill. Signal when foundation APIs are ready."
asset-artist:
Task: name="asset-artist", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the asset-artist agent. Read .claude/agents/asset-artist.md for your role.
Read CLAUDE.md and docs/agent-team-workflow.md for context.
Check for docs/art-direction.md — if it exists, read it for style anchors and palette.
Check for docs/audio-direction.md — if it exists, read it for music/SFX search anchors.
Your task: generate visual and audio assets for Sprint N features: [list]"
Create tasks via TaskCreate for each agent's work, with dependencies
Record spawned agents in Phase A state: phases.A.agents = ["systems-dev", "asset-artist"]
As tasks complete: Update the corresponding task entry in sprints[N].tasks with status: "completed", files_created, files_modified, completed_at, and notes.
Transition to Phase B when:
Update sprint current_phase → "B", Phase B status → "in_progress"
Write state file
Spawn additional agents:
gameplay-dev:
Task: name="gameplay-dev", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the gameplay-dev agent. Read .claude/agents/gameplay-dev.md for your role.
Read CLAUDE.md, docs/agent-team-workflow.md, and docs/systems-bible.md for context.
Your task: implement gameplay features for Sprint N. Read these specs: [list].
Use the feature-implementer skill."
ui-dev:
Task: name="ui-dev", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the ui-dev agent. Read .claude/agents/ui-dev.md for your role.
Read CLAUDE.md, docs/agent-team-workflow.md, and docs/systems-bible.md for context.
Your task: implement UI features for Sprint N. Read these specs: [list].
Use the feature-implementer skill."
content-architect:
Task: name="content-architect", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the content-architect agent. Read .claude/agents/content-architect.md for your role.
Read CLAUDE.md and docs/agent-team-workflow.md for context.
Your task: create data files for Sprint N features: [list]"
Keep asset-artist running from Phase A
Create tasks via TaskCreate for each feature, with addBlockedBy for dependencies
As tasks complete: Update the corresponding task entry in sprints[N].tasks with status: "completed", files_created, files_modified, completed_at, and notes. When the phase finishes, update phases.B.completed_at and phases.B.notes.
Per-Feature Progress Reports:
As each feature is completed by an implementing agent, present a Feature Completion Report to the user. This is a non-blocking progress update — the sprint continues. Use this format:
## Feature Complete: [Feature Name]
### What Was Built
[2-3 sentences: what this feature does and why]
### What's Different Now
- [Visible change 1]
- [Visible change 2]
### How to Playtest
1. [Steps to test]
2. Success: [what confirms it works]
### Known Limitations
- [Anything not yet wired up or placeholder]
The user does not need to respond. If they raise concerns, address them before continuing. Otherwise proceed to the next feature.
Transition to Phase B.5 when:
This phase is MANDATORY. The team lead (you) personally verifies that all pieces built by independent agents actually connect. Skipping this phase was the #1 source of bugs in early sprints.
current_phase → "B5", Phase B5 status → "in_progress"Scene Instantiation Verification:
- Verify every .tscn created this sprint is instantiated or loaded by at least one other scene or script (grep for ExtResource or load()/preload() references to each new scene)
- Flag orphaned scenes that nothing instantiates (e.g., an unwired hud.tscn)

project.godot Verification:
- Verify run/main_scene points to the correct scene
- Verify new singletons are registered in the [autoload] section

Signal Wiring Verification:
Collision Layer Verification:
- Check layer assignments against the docs/known-patterns.md registry
- Verify collision_layer (what I am) and collision_mask (what I detect) are not confused
- Verify detector-only areas use collision_layer = 0 and collision_mask set to the target layer

Group Membership Verification:
"player", "enemies") are added via .add_to_group() or scene propertyget_nodes_in_group() or is_in_group() references groups that actually existNaming Convention Verification:
- Verify no autoload script declares a class_name that matches the autoload name (use FooClass pattern)
- Verify there are no class_name conflicts between scripts

Spatial/Dimension Verification:
- Check sizes and positions against the docs/known-patterns.md Reference Dimensions

Cross-Feature Integration:
- Run the headless check: rebuild the class cache with godot --headless --editor --quit, then run godot --headless --quit 2>&1

When all checks pass, record the results in the sprint state:

"B5": {
"status": "completed",
"checklist": {
"orphaned_scenes": "passed",
"project_godot": "passed",
"signals": "passed",
"collision_layers": "passed",
"groups": "passed",
"naming": "passed",
"dimensions": "passed",
"cross_feature": "passed"
},
"issues_found": 0,
"issues_fixed": 0,
"smoke_test": "passed"
}
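The orphaned-scene check from the B.5 checklist can be sketched in code. This is an illustrative model operating on in-memory file contents; a real pass would grep the repository for `ExtResource`/`load()`/`preload()` references.

```python
def orphaned_scenes(new_scenes: list[str], all_sources: dict[str, str]) -> list[str]:
    """Return new .tscn files that no other scene or script references.

    all_sources maps file path -> file contents (.tscn and .gd files);
    a scene counts as wired if any *other* file mentions its path.
    """
    orphans = []
    for scene in new_scenes:
        referenced = any(
            scene in text and path != scene
            for path, text in all_sources.items()
        )
        if not referenced:
            orphans.append(scene)
    return orphans
```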
Transition to Phase C when:
Update sprint current_phase → "C", Phase C status → "in_progress"
Write state file
Spawn qa-docs:
qa-docs:
Task: name="qa-docs", subagent_type="general-purpose", team_name="sprint-N"
Prompt: "You are the qa-docs agent. Read .claude/agents/qa-docs.md for your role.
Read CLAUDE.md and docs/agent-team-workflow.md for context.
Your tasks:
1. Run code-reviewer on all new/modified scripts
2. Update docs/systems-bible.md
3. Update docs/architecture.md
4. Update CHANGELOG.md
Save reviews to docs/code-reviews/"
Keep developer agents running for critical issue fixes
Optionally spawn design-lead to pipeline next sprint's specs (parallel)
Transition to Phase D when:
Before transitioning to Phase D, run the Godot headless smoke test.
IMPORTANT — Godot Path: The Godot binary may not be in PATH. Check the sprint state for godot_path. If not set, try these locations in order:
- godot (if in PATH)
- /Users/*/Downloads/Godot.app/Contents/MacOS/Godot (macOS download)

Once found, record it in the sprint state as "godot_path" so future sprints don't need to search again.
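The path resolution and caching can be sketched as follows; the function is an illustration of the search order described above, assuming a Unix-like environment.

```python
import glob
import os
import shutil


def find_godot(sprint_state: dict) -> "str | None":
    """Resolve the Godot binary, caching the result as godot_path in the sprint state."""
    if sprint_state.get("godot_path"):
        return sprint_state["godot_path"]  # cached from a previous sprint
    candidates = []
    on_path = shutil.which("godot")  # 1) PATH lookup
    if on_path:
        candidates.append(on_path)
    # 2) macOS download location (glob expands the username wildcard)
    candidates += glob.glob("/Users/*/Downloads/Godot.app/Contents/MacOS/Godot")
    for path in candidates:
        if os.access(path, os.X_OK):
            sprint_state["godot_path"] = path
            return path
    return None
```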
Procedure:
After all QA fixes are applied, rebuild the class cache first:
[godot_path] --headless --editor --quit 2>&1
This is required whenever new class_name declarations were added this sprint. Skipping it causes stale cache errors that look like missing classes.
Run the headless smoke test:
[godot_path] --headless --quit 2>&1
This catches compile/parse errors that would waste user time:
- class_name conflicts with autoload singletons
- Parse errors and missing resources at load time

If errors are found:
- Fix them directly (for example, remove conflicting class_name declarations) and re-run the smoke test until clean

If clean: Update state and transition to Phase D
Record the smoke test result in the sprint state:
"smoke_test": { "status": "passed", "attempts": 2, "errors_fixed": ["class_name conflict", "missing resource"] }
Review captured output: The capture-smoke-test.sh hook automatically logs all smoke test output to:
- docs/sprint-logs/sprint-{N}-smoke-test.log — Human-readable log with categorized errors/warnings
- docs/sprint-logs/sprint-{N}-smoke-test-latest.json — Machine-readable summary with error counts

Read the latest JSON summary to confirm the captured result matches your assessment before proceeding.
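Reading the summary can be sketched like this. The `errors` field name is an assumption about the JSON schema written by capture-smoke-test.sh; adjust to the hook's actual output.

```python
import json


def smoke_test_passed(summary_path: str) -> bool:
    """Read the machine-readable smoke-test summary and confirm zero errors.

    Assumes the summary JSON carries an 'errors' count (hypothetical field name);
    a missing field is treated as a failure so a malformed summary never passes.
    """
    with open(summary_path) as f:
        summary = json.load(f)
    return summary.get("errors", 1) == 0
```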
CRITICAL: Never present Phase D sprint review to the user until the headless smoke test passes. The user should never encounter compile errors.
Phase D is iterative — it includes a fix loop where the user can report issues via screenshots or descriptions, the team lead fixes them, and the user re-tests. Bug reports during review are first-class workflow events, not interruptions.
Track the current sub-state in workflow_position.substep:
| Sub-State | Description |
|---|---|
| smoke_test | Running initial headless smoke test |
| generating_playtest_guide | Generating playtest guide from feature specs + sprint tasks |
| presenting_review | Compiling and presenting the sprint review + playtest guide |
| user_testing | User is playtesting, may report issues |
| fix_loop | User reported issues, team lead is fixing |
| final_approval | Fix loop complete, presenting formal approval gate |
current_phase → "D", Phase D status → "in_progress", substep → "smoke_test"SendMessage: type="shutdown_request" to eachTeamDeleteteam_name in sprint stategodot --headless --quit 2>&1"generating_playtest_guide""generating_playtest_guide" (if not already set)docs/features/{name}.md) — what each feature does, acceptance criteriasprints[N].tasks) — what was actually built, files created/modifiedproject.godot — run/main_scene for the entry point scenedocs/systems-bible.md) — navigation flow between scenes (how to reach each feature)docs/sprint-logs/sprint-{N-1}-playtest-guide.md, if exists) — for the "Cumulative Game State" section baselinedocs/sprint-logs/sprint-{N}-playtest-guide.md using the Playtest Guide template from report-formats.md (format #10)"presenting_review"Compile Sprint Review using this format (from workflow doc):
# Sprint [N] Review: "[Deliverable Slice Name]"
## Completed Features
For each feature:
- **Feature name** — status: COMPLETE | PARTIAL | BLOCKED
- What was built (1-2 sentences)
- Files created/modified
- Deviations from spec (if any)
## QA Summary
- Critical issues found: [count]
- Performance warnings: [count]
- Code quality suggestions: [count]
- All critical issues resolved: YES/NO
## Smoke Test
- Status: PASSED (N attempts, errors fixed: [list or "none"])
- Full log: `docs/sprint-logs/sprint-{N}-smoke-test.log`
## Assets Produced
- [list with paths]
## Content Produced
- [list with paths]
## Documentation Updated
- Systems bible: YES/NO
- Architecture doc: YES/NO
- Changelog: YES/NO
## Metrics
- Features planned: [N] | Completed: [N] | Carried over: [N]
## Next Sprint Preview
- Proposed deliverable: "[what player can do next]"
- Feature specs ready: [list]
## Questions for User
- [decisions needed]
Present the review and reference the playtest guide: "The playtest guide is at docs/sprint-logs/sprint-{N}-playtest-guide.md — it has step-by-step test instructions for each feature. Report any bugs or issues (screenshots welcome), or proceed to the formal approval when ready."
Update substep → "user_testing"
When the user reports issues (screenshots, text descriptions, bug reports):
"fix_loop", increment fix_loop_iteration in stategodot --headless --quit 2>&1) to verify the fix doesn't introduce new errors"user_testing"Track fix loop history in the sprint state, and add a corresponding TL- task for each fix:
"fix_loop": {
"iterations": [
{ "reported_by": "user", "issue": "Parser error: class_name hides autoload", "fix": "Removed class_name from autoload scripts", "smoke_test": "passed", "task_id": "TL-N.1" },
{ "reported_by": "user", "issue": "UI text too small at 1080p", "fix": "Increased theme font size and panel dimensions", "smoke_test": "passed", "task_id": "TL-N.2" }
]
}
Each fix loop iteration must also create a task entry in sprints[N].tasks:
{ "id": "TL-N.M", "agent": "team-lead", "phase": "D", "feature": null, "description": "Fix: [issue summary]", "status": "completed", "files_created": [], "files_modified": ["affected files"], "completed_at": "[timestamp]", "notes": "[details]" }
When the user indicates they're satisfied (says "looks good", "ready to approve", or asks to proceed):
Update substep → "final_approval"
Present formal approval gate using AskUserQuestion for each decision point:
Per-feature decisions — For each feature, use AskUserQuestion: Question: "What is your decision for [feature-name]?" Options:
Next sprint scope — Use AskUserQuestion: Question: "How do you want to proceed with the next sprint?" Options:
Overall sprint decision — Use AskUserQuestion: Question: "What is your overall decision for Sprint [N]?" Options:
Record approval decisions in the sprint state:
"approval": {
"feature-name-1": "accepted|request_changes|rejected",
"feature-name-2": "accepted|request_changes|rejected",
"next_sprint": "approved|modified|reordered",
"overall": "continue|pause|pivot"
}
On Continue (all features accepted):
"completed", sprint current_phase → "completed"git checkout main && git merge sprint/N-descriptionsprint_numbers:
On Request Changes (any feature):
When the final sprint in an epic is completed (all sprint numbers in epic.sprint_numbers have current_phase: "completed"), present an Epic Review before starting the next epic.
Compile and present this format:
## Epic Review: "[Epic Goal]"
### Goal Assessment
- Epic goal: [What we set out to achieve]
- Achieved: [YES / PARTIAL / NO]
- Evidence: [What the user can now do in-game that proves it]
### Sprints Completed
- Sprint N: "[deliverable]" — [accepted / partially accepted]
- Sprint N+1: "[deliverable]" — [accepted / partially accepted]
### Lessons Learned
- [What worked well]
- [What to improve for next epic]
### Next Epic Preview
- Epic: "[Next goal]"
- Sprints planned: [count]
- First sprint: "[deliverable]"
Then use AskUserQuestion:
Question: "How do you want to proceed after Epic [N]: '[Goal]'?" Options:
On PROCEED:
"completed", set review.goal_achieved and review.user_decision → "proceed", review.reviewed_atOn ITERATE:
"iterated", set review.goal_achieved → "partial", review.user_decision → "iterate"sprint_numbersOn PAUSE:
review.user_decision → "pause", review.reviewed_at/project-orchestrator invocation, re-present the pause statePresent a summary to the user including sprints completed, core loop assessment, and known issues. Then use AskUserQuestion:
Question: "PROTOTYPE GO/NO-GO — Is this fun? How do you want to proceed?" Options:
On GO:
lifecycle_phase → "vertical_slice"lifecycle_gates.prototype_gate → { status: "completed", decision: "GO" }.claude/skills/game-vision-generator/SKILL.md and update docs/game-vision.md based on prototype learnings (what worked, what didn't, scope adjustments). Present changes for user approval.workflow_position → { phase: "phase_0", step: "vertical_slice_gdd" }gdd-generator (vertical slice mode, scoping from updated vision) → then roadmap-planner (vertical slice mode) → then feature pipelineOn PIVOT:
- Stay in lifecycle_phase prototype, record the pivot reason
- Return to the gdd_generator step, revise the GDD

On KILL:
lifecycle_phase → "killed"game_ideatorPresent a summary to the user including quality bar assessment and polish level. Then use AskUserQuestion:
Question: "VERTICAL SLICE GO/NO-GO — Can this be a good game? How do you want to proceed?" Options:
Handle each decision similarly to the Prototype Gate, adjusting lifecycle phase and workflow position accordingly.
After EVERY state change:
1. Set updated_at to the current timestamp
2. Write the full state to docs/.workflow-state.json

When reading state on session start, validate:
- Every step marked completed: check that the artifact file exists
- If an artifact is missing for a completed step: warn the user and use AskUserQuestion to ask whether to re-run or mark as skipped

If the user says "skip this" or "I don't need concept validation":
WARN: Present the consequences of skipping, then use AskUserQuestion:
Question: "Skipping [step] means [specific consequence]. The workflow recommends this step because [reason]. Are you sure?" Options:
Consequence reference:
If user selects CONFIRM SKIP: set step status to "skipped", proceed to next step
Never skip silently. Always warn and require explicit confirmation via AskUserQuestion.
If the user says "go back to the GDD" or "redo the design bible":
Identify the target step
Present the consequences, then use AskUserQuestion:
Question: "Going back to [step] will reset the following steps to pending: [list dependent steps]. Do you want to proceed?" Options:
If user selects CONFIRM BACKTRACK:
"pending" (cascade reset)See phase-transitions.md for the full cascade reset table.
If the user says "start Sprint 2" or "jump to vertical slice":
Check preconditions for the target position
List any incomplete prerequisites, then use AskUserQuestion:
Question: "The following steps haven't been completed: [list]. Do you want to skip them all and jump to [target]?" Options:
If user selects CONFIRM JUMP, skip all prerequisites and jump to target
On every invocation, this skill reads the state file. Recovery logic:
| State Found | Action |
|---|---|
| in_progress + artifact exists | Present artifact for approval |
| in_progress + no artifact | Re-run the step |
| awaiting_user_approval | Re-present artifact and ask for approval |
| user_requested_changes | Ask user for their feedback again |
| completed | Proceed to next step |
| Sprint with team_name set | Check if team exists; recreate if needed |
| Phase D substep smoke_test | Re-run the headless smoke test |
| Phase D substep generating_playtest_guide | Check if playtest guide file exists; if yes, advance to presenting_review; if no, regenerate it |
| Phase D substep presenting_review | Re-compile and present the sprint review + reference playtest guide |
| Phase D substep user_testing | Remind user they were playtesting, reference playtest guide, ask for status |
| Phase D substep fix_loop | Show the last reported issue and ask if the fix is still needed |
| Phase D substep final_approval | Re-present the formal approval gate |
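The step-status rows of the table can be modeled as a small dispatch, shown here as an illustrative Python sketch (the action names are hypothetical labels, not tool calls):

```python
# Maps a persisted step status to a resume action; the in_progress branch
# also needs to know whether the artifact file exists on disk.
RECOVERY_ACTIONS = {
    "in_progress": lambda artifact_exists: (
        "present_for_approval" if artifact_exists else "rerun_step"
    ),
    "awaiting_user_approval": lambda _: "present_for_approval",
    "user_requested_changes": lambda _: "ask_for_feedback",
    "completed": lambda _: "next_step",
}


def recovery_action(status: str, artifact_exists: bool = False) -> str:
    """Resolve the session-resume action for a step status, per the table above."""
    return RECOVERY_ACTIONS[status](artifact_exists)
```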
Every user gate follows this consistent pattern using the AskUserQuestion tool:
Present context — Display a brief summary (1-3 sentences) of what was produced, the artifact file path, and any key decisions or tradeoffs.
Use AskUserQuestion — Instead of asking the user to type a response, always use the AskUserQuestion tool to present selectable options. This ensures a clean, clickable UX.
Standard approval gate format:
AskUserQuestion:
Question: "How do you want to proceed with [Step Name]?"
Options:
- APPROVE — Accept and proceed to [next step]
- MODIFY — Provide feedback for revision
- REJECT — Discard and restart this step
Follow-up questions — If additional input is needed after a selection (e.g., which concept to pick, what feedback to give), use additional AskUserQuestion calls or ask the user for free-text input as appropriate.
Lifecycle gates use GO/PIVOT/KILL (or GO/ITERATE/RESCOPE/KILL) options instead of APPROVE/MODIFY/REJECT.
Confirmation gates (skip, backtrack, jump forward) use CONFIRM/CANCEL options.
CRITICAL: Do NOT proceed past any approval gate without an explicit user selection via AskUserQuestion. If the user changes the subject or asks an unrelated question, the gate remains active. When they return to the workflow, re-present the gate using AskUserQuestion.