Comprehensive game playtesting through automated AI gameplay to verify functionality, find bugs, and test game mechanics. Use when testing Race to the Crystal gameplay, verifying win conditions, testing generators and crystal capture, or debugging game issues.
Automated playtesting for Race to the Crystal game using AI-driven gameplay.
Local Testing (headless simulation):
- Direct `GameState` manipulation

Network Testing (join live game):
- `uv run race-http-ai-client --join <game_id>`

Choose the appropriate testing approach based on user request:
- Quick Test (5-10 turns)
- Standard Test (30-50 turns)
- Comprehensive Test (multiple scenarios)
- Targeted Test (specific feature)
Use the AI API to play the game:

```python
from game.game_state import GameState
from game.ai_observation import AIObserver
from game.ai_actions import AIActionExecutor, MoveAction, AttackAction, DeployAction, EndTurnAction
```
Turn Structure:
1. Observe: `AIObserver.get_game_state()`
2. List legal actions: `AIObserver.list_available_actions()`
3. Execute: `AIActionExecutor.execute_action()`

Strategic Priorities:
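The observe, list, execute loop can be sketched as follows. The real `AIObserver` and `AIActionExecutor` are replaced here with tiny stand-ins so the loop shape is runnable in isolation; the method names follow the API above, but their exact signatures and return shapes are assumptions.

```python
import random

# Hypothetical stand-ins for AIObserver / AIActionExecutor so the loop
# shape is runnable in isolation; the real classes live under game/.
class FakeObserver:
    def __init__(self):
        self.turn = 0

    def get_game_state(self):
        return {"turn": self.turn, "game_over": self.turn >= 5}

    def list_available_actions(self):
        return ["move", "attack", "end_turn"]

class FakeExecutor:
    def __init__(self, observer):
        self.observer = observer

    def execute_action(self, action):
        self.observer.turn += 1  # pretend the action advanced the game
        return {"ok": True, "action": action}

def play(observer, executor):
    """Observe -> list actions -> execute, until the game ends."""
    log = []
    while not observer.get_game_state()["game_over"]:
        actions = observer.list_available_actions()
        log.append(executor.execute_action(random.choice(actions)))
    return log

obs = FakeObserver()
log = play(obs, FakeExecutor(obs))
print(len(log))  # 5 actions executed before game_over
```

The same loop drives every test pattern below; only the action-selection policy and the setup change.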
Classify any issues found by severity:
- Critical Issues (CRITICAL)
- High Priority Issues (HIGH)
- Medium Priority Issues (MEDIUM)
- Low Priority Issues (LOW)
```python
# Test: Deploy 2 tokens at generator, hold for 2 turns
# Expected: Generator becomes disabled
# Watch for: Capture progress, turn counting, multiple players contesting
```
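The expected capture rule is simple enough to encode as a standalone oracle to compare against the real `game/generator.py` behavior. The 2-token / 2-turn thresholds come from the scenario above; the reset-on-contest behavior is an assumption to verify, not a confirmed rule.

```python
# Standalone model of the expected generator-capture rule.
class GeneratorModel:
    REQUIRED_TOKENS = 2
    REQUIRED_TURNS = 2

    def __init__(self):
        self.capture_progress = 0
        self.disabled = False

    def end_of_turn(self, friendly_tokens: int) -> None:
        if self.disabled:
            return
        if friendly_tokens >= self.REQUIRED_TOKENS:
            self.capture_progress += 1
        else:
            self.capture_progress = 0  # assumed: leaving or contesting resets progress
        if self.capture_progress >= self.REQUIRED_TURNS:
            self.disabled = True

g = GeneratorModel()
g.end_of_turn(2)
assert not g.disabled  # held only 1 turn so far
g.end_of_turn(2)
assert g.disabled      # held 2 consecutive turns
```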
```python
# Test: Hold crystal with required tokens for 3 turns
# Expected: Player wins, game ends
# Watch for: Token counting, turn counting, disabled generator effects
```
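A matching oracle for the win condition: the 3-turn hold comes from the scenario above, while `required_tokens=2` is an illustrative assumption to be replaced with the real requirement.

```python
def crystal_win(hold_history, required_tokens=2, required_turns=3):
    """True once the crystal is held with enough tokens for
    required_turns consecutive turns. hold_history is the per-turn
    count of the player's tokens on the crystal."""
    streak = 0
    for tokens in hold_history:
        streak = streak + 1 if tokens >= required_tokens else 0
        if streak >= required_turns:
            return True
    return False

print(crystal_win([2, 2, 2]))        # True: held 3 consecutive turns
print(crystal_win([2, 2, 1, 2, 2]))  # False: streak broken on turn 3
```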
```python
# Test: Attack with various token health values
# Expected: Damage = attacker.health // 2, defender takes damage, attacker takes none
# Watch for: Damage calculation, token destruction, health < 0
```
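The damage formula above can be encoded as a reference implementation to diff against `game/combat.py`; clamping at 0 is the behavior the "health < 0" watch item expects.

```python
def resolve_attack(attacker_health: int, defender_health: int):
    """Expected rule: damage = attacker.health // 2; the defender takes
    the damage, the attacker takes none, and health is clamped at 0."""
    damage = attacker_health // 2
    new_defender = max(defender_health - damage, 0)
    return new_defender, new_defender == 0  # (remaining health, destroyed?)

assert resolve_attack(10, 8) == (3, False)  # 10 // 2 = 5 damage
assert resolve_attack(9, 4) == (0, True)    # 9 // 2 = 4 damage, defender destroyed
```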
```python
# Test: Move tokens with different health values
# Expected: health <= 6 moves 2 spaces, health >= 7 moves 1 space
# (damaged tokens gain mobility as they take damage)
# Watch for: Boundary checking, pathfinding, occupied cells
```
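A reference check for the movement rule plus the boundary/occupancy watch items. The health thresholds are from the scenario above; the board size and Manhattan-distance metric are assumptions to adjust to the real rules.

```python
BOARD_SIZE = 8  # assumption: substitute the real board dimensions

def movement_range(health: int) -> int:
    # Damaged tokens gain mobility: health <= 6 moves 2, health >= 7 moves 1.
    return 2 if health <= 6 else 1

def can_move(health, src, dst, occupied):
    """Movement legality under assumed Manhattan distance, with bounds
    and occupied-cell checks (the real game's pathfinding may differ)."""
    in_bounds = 0 <= dst[0] < BOARD_SIZE and 0 <= dst[1] < BOARD_SIZE
    dist = abs(dst[0] - src[0]) + abs(dst[1] - src[1])
    return in_bounds and dst not in occupied and dist <= movement_range(health)

assert movement_range(6) == 2 and movement_range(7) == 1
assert not can_move(10, (0, 0), (0, 2), set())  # healthy token: only 1 space
```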
```python
# Test: Deploy all token types, exceed reserve limits
# Expected: Can deploy 5 of each type (10hp, 8hp, 6hp, 4hp)
# Watch for: Reserve counting, position validation, corner restrictions
```
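The reserve limit can likewise be modeled standalone; the 5-per-type limit and the four token types are from the scenario above, while rejecting over-deployment with an exception is an assumption about how the real validator should behave.

```python
RESERVE_LIMIT = 5
TOKEN_TYPES = (10, 8, 6, 4)  # token types identified by starting health

class ReserveModel:
    """Standalone model of the expected reserve rule: 5 of each type."""
    def __init__(self):
        self.remaining = {hp: RESERVE_LIMIT for hp in TOKEN_TYPES}

    def deploy(self, hp: int) -> None:
        if self.remaining.get(hp, 0) <= 0:
            raise ValueError(f"no {hp}hp tokens left in reserve")
        self.remaining[hp] -= 1

r = ReserveModel()
for _ in range(5):
    r.deploy(10)        # five 10hp deployments succeed
try:
    r.deploy(10)        # the sixth must be rejected
    assert False, "expected over-deployment to fail"
except ValueError:
    pass
```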
```python
# Test: End turn in different phases, pass without action
# Expected: Turn advances to next player, phase resets
# Watch for: Player order, turn number, phase transitions
```
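An oracle for the player-order and turn-number watch items; incrementing the turn number when play wraps back to player 0 is an assumption to check against the real turn logic.

```python
def advance_turn(current_player: int, num_players: int, turn_number: int):
    """Advance to the next player in order; bump the turn number when
    play wraps back to player 0 (the wrap rule is an assumption)."""
    next_player = (current_player + 1) % num_players
    if next_player == 0:
        turn_number += 1
    return next_player, turn_number

assert advance_turn(0, 2, 1) == (1, 1)
assert advance_turn(1, 2, 1) == (0, 2)  # wrapped: new round begins
```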
After testing, provide a clear report:
```markdown
# Playtesting Report
**Test Type:** [Quick/Standard/Comprehensive/Targeted]
**Duration:** [X turns / Y minutes]
**Date:** [Current date]

## Summary
- ✅ Tests Passed: X
- ❌ Issues Found: Y
- ⚠️ Warnings: Z

## Test Results

### [Mechanic Name]
**Status:** ✅ PASS / ❌ FAIL / ⚠️ WARNING
**Details:** [What was tested and what happened]

### Critical Issues Found
1. **[Issue Title]** (CRITICAL)
   - Location: [file:line]
   - Description: [What's wrong]
   - Reproduction: [How to reproduce]
   - Impact: [Why this matters]

### Recommendations
- [Action items to fix issues]
- [Suggested improvements]
- [Areas needing more testing]
```
Join an Existing Game: When a user creates a game and wants Claude to join:
```bash
# User creates game via web or desktop client
# User provides game_id (from lobby or logs)
# Claude joins via HTTP AI client
uv run race-http-ai-client --join <game_id> --name "Claude_AI" --strategy aggressive
```
Process:
1. User starts the server: `uv run race-unified-server`
2. User creates a game and notes the `game_id`
3. User shares the `game_id` with Claude (or Claude checks server logs)
4. Claude joins: `uv run race-http-ai-client --join <game_id>`

Finding the Game ID:
If user doesn't provide game_id, Claude can:
- Check server logs for `Created game session <game_id>` or `Game <game_id> created`
- Look for `CREATE_GAME` messages in the logs

Strategy Options:
- `random`: Random valid actions (good for general testing)
- `aggressive`: Prioritizes attacks and movement toward objectives
- `defensive`: Prioritizes deploying and holding positions

Monitoring:
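The three strategies can be thought of as action filters over the available-actions list. This sketch is illustrative only; the real client's heuristics in `client/http_ai_client.py` are richer, and the action dicts shown here are an assumed shape.

```python
import random

def choose_action(actions, strategy):
    """Illustrative strategy filter: rank available actions by type,
    then pick randomly among the preferred ones."""
    if strategy == "aggressive":
        preferred = [a for a in actions if a["type"] in ("attack", "move")]
    elif strategy == "defensive":
        preferred = [a for a in actions if a["type"] == "deploy"]
    else:  # "random"
        preferred = actions
    return random.choice(preferred or actions)  # fall back if none match

actions = [{"type": "attack"}, {"type": "deploy"}, {"type": "end_turn"}]
print(choose_action(actions, "defensive"))  # always {'type': 'deploy'}
```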
- `AIObserver` analysis

Pattern 1: Full Game Playtest
```python
# Create game, play 50 turns using random valid actions
# Log all errors, track game state health
# Report on: crashes, stuck states, win conditions
```
Pattern 2: Feature-Specific Test
```python
# Set up specific scenario (e.g., tokens at generator)
# Execute targeted actions to test one mechanic
# Verify expected behavior occurs
```
Pattern 3: Edge Case Hunting
```python
# Try to break the game with unusual actions
# Test boundary conditions (board edges, 0 health, etc.)
# Attempt invalid operations
# Verify error handling
```
Pattern 4: Win Condition Verification
```python
# Manually set up near-win scenario
# Execute actions to trigger win
# Verify game ends correctly
```
Local Testing:
- `game/game_state.py`
- `game/ai_observation.py`
- `game/ai_actions.py`
- `game/generator.py`, `game/crystal.py`, `game/combat.py`, `game/movement.py`

Network Testing:
- `client/http_ai_client.py`
- `uv run race-http-ai-client` (requires a running server: `uv run race-unified-server`)

Scenarios to try:
- Two players compete to capture the same generator
- All players try to reach the crystal simultaneously
- Multiple consecutive attacks
- Deploy all tokens of one type
- Try actions in wrong phases
Always conclude playtesting with a clear report in the format above.

Tip: use direct `game_state` manipulation to set up test scenarios (this bypasses deployment validation).