Generate AI agent personas with authentic backstories, demographics, and personality traits for TraitorSim. Use when creating new characters, generating personas, building character libraries, or when asked about persona generation, backstory creation, or character development. Orchestrates Deep Research + Claude synthesis pipeline.
Generate complete AI agent personas with realistic backstories, demographics, and OCEAN personality traits for the TraitorSim game. This skill orchestrates the 5-stage pipeline that uses Gemini Deep Research for demographic grounding and Claude Opus for narrative synthesis.
# Generate 15 personas (estimated cost: $6-7)
./scripts/generate_persona_library.sh --count 15
# Or run stages individually:
python scripts/generate_skeleton_personas.py --count 15
python scripts/batch_deep_research.py --input data/personas/skeletons/test_batch_001.json
python scripts/poll_research_jobs.py --jobs data/personas/jobs/test_batch_001_jobs.json
python scripts/synthesize_backstories.py --reports data/personas/reports/test_batch_001_reports.json
python scripts/validate_personas.py --library data/personas/library/test_batch_001_personas.json
The persona generation pipeline consists of 5 stages:
Stage 1: Skeleton Generation → data/personas/skeletons/*.json
Stage 2: Deep Research Submission → data/personas/jobs/*_jobs.json (job IDs)
Stage 3: Research Job Polling → data/personas/reports/*_reports.json
Stage 4: Backstory Synthesis → data/personas/library/*_personas.json
Stage 5: Validation
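To make the hand-off between stages concrete, here is a sketch of what a Stage 1 skeleton record could look like. Only the `skeleton_id` key (used later for deduplication) and the OCEAN trait dimensions are taken from this document; every other field name and value is an illustrative assumption.

```python
import json

# Hypothetical skeleton record -- field names other than skeleton_id
# and the OCEAN dimensions are assumptions, not the project's schema.
skeleton = {
    "skeleton_id": "skel_001",
    "demographics": {"age": 34, "occupation": "paramedic", "region": "US-Midwest"},
    "ocean": {
        "openness": 0.62,
        "conscientiousness": 0.71,
        "extraversion": 0.45,
        "agreeableness": 0.58,
        "neuroticism": 0.33,
    },
}

print(json.dumps(skeleton, indent=2))
```

Stage 2 would submit one Deep Research job per skeleton; Stage 4 attaches the synthesized backstory to the same record.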
When asked to generate a complete persona library:
Determine batch size (recommend 15-20 for testing, 100+ for production)
Run the master orchestration script:
./scripts/generate_persona_library.sh --count 15
Monitor quota limits (see the rate limit strategies below)
Verify results by running scripts/validate_personas.py on the generated library
When asked to add more personas to an existing library:
Load existing personas:
import json

with open('data/personas/library/test_batch_001_personas.json') as f:
    existing_personas = json.load(f)
existing_ids = {p['skeleton_id'] for p in existing_personas}
Generate new skeletons (avoiding duplicate IDs)
Submit Deep Research jobs for new skeletons only
Synthesize only new personas:
# Extract only new reports
python -c "import json; ..." # Filter to new reports
# Synthesize
python scripts/synthesize_backstories.py --reports /tmp/new_reports_only.json --output /tmp/new_personas
# Merge
python -c "import json; ..." # Combine existing + new
Validate merged library
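The filter-and-merge steps above can be sketched as a small helper. This assumes each persona is a dict with a `skeleton_id` key, as in the dedup step earlier; everything else about the record shape is illustrative.

```python
def merge_libraries(existing, new_personas):
    """Merge new personas into an existing library, skipping duplicate IDs.

    A sketch, assuming each record carries a 'skeleton_id' key; the real
    record shape may include more fields.
    """
    existing_ids = {p['skeleton_id'] for p in existing}
    return existing + [p for p in new_personas if p['skeleton_id'] not in existing_ids]

existing = [{"skeleton_id": "skel_001"}, {"skeleton_id": "skel_002"}]
new = [{"skeleton_id": "skel_002"}, {"skeleton_id": "skel_003"}]
merged = merge_libraries(existing, new)
print([p["skeleton_id"] for p in merged])  # skel_002 is kept once
```

The same ID-set filter can be applied to the reports file before synthesis, so Deep Research output is only synthesized for genuinely new skeletons.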
The Deep Research API has rate limits (the specific limits are not publicly documented). Use these strategies:
Strategy 1: Wave Submission
import time

# Submit in waves with delays between them
wave_sizes = [6, 4, 2, 2, 1]  # Observed pattern
for wave_size in wave_sizes:
    jobs = submit_batch(wave_size)
    time.sleep(300)  # 5 min between waves
Strategy 2: Client-Side Tracking
from scripts.quota_tracker import QuotaTracker

tracker = QuotaTracker(rpm_limit=10, rpd_limit=100)
if tracker.can_make_request():
    submit_job()
    tracker.record_request()
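The internals of scripts/quota_tracker.py are not shown here; a minimal sliding-window implementation consistent with the interface above could look like the following. The window sizes (60 s for RPM, 24 h for RPD) are assumptions.

```python
import time
from collections import deque

class QuotaTracker:
    """Sliding-window request tracker (a sketch; the project's
    scripts/quota_tracker.py may be implemented differently)."""

    def __init__(self, rpm_limit, rpd_limit):
        self.rpm_limit = rpm_limit
        self.rpd_limit = rpd_limit
        self._minute = deque()  # request timestamps within the last 60 s
        self._day = deque()     # request timestamps within the last 24 h

    def _prune(self, now):
        # Drop timestamps that have aged out of each window
        while self._minute and now - self._minute[0] >= 60:
            self._minute.popleft()
        while self._day and now - self._day[0] >= 86400:
            self._day.popleft()

    def can_make_request(self):
        self._prune(time.time())
        return (len(self._minute) < self.rpm_limit
                and len(self._day) < self.rpd_limit)

    def record_request(self):
        now = time.time()
        self._minute.append(now)
        self._day.append(now)

tracker = QuotaTracker(rpm_limit=2, rpd_limit=100)
results = []
for _ in range(3):
    ok = tracker.can_make_request()
    results.append(ok)
    if ok:
        tracker.record_request()
print(results)  # [True, True, False] -- third request blocked by rpm_limit
```

Because the tracker is purely client-side, it cannot see quota consumed elsewhere; treat it as a first line of defense and still handle 429-style rejections from the API.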