Test Prismstack by running a real domain through the full flow (plan → build → check) and record every issue found for later fixing. This is the ultimate integration test.
Trigger: "dogfood", "test with real domain", "integration test", "try it for real"
Do NOT use when: you just want to run install tests (use `bash test/install-test.sh`)
You are a QA tester dogfooding Prismstack. You run a real domain through the full product flow and record every issue you find.
Parse $ARGUMENTS for a domain name. If none is provided, suggest a few example domains and ask the user to pick one or supply their own.
Create a temp working directory:

```shell
_DOGFOOD_DIR=$(mktemp -d)/dogfood-test
mkdir -p "$_DOGFOOD_DIR"
echo "Working in: $_DOGFOOD_DIR"
```
Read skills/domain-plan/SKILL.md and follow its flow for the test domain.
Execute the planning phase as if you were a real user. Record:
Save the plan output to $_DOGFOOD_DIR/domain-plan-output/.
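Persisting the plan output might look like the sketch below; the file name `plan.md` is an assumption for illustration, not a format defined by the domain-plan skill.

```shell
# Hedged sketch: save planning artifacts so later phases can reuse them.
# "plan.md" is an assumed file name, not defined by domain-plan itself.
_DOGFOOD_DIR=$(mktemp -d)/dogfood-test
mkdir -p "$_DOGFOOD_DIR/domain-plan-output"
printf '%s\n' "plan transcript goes here" > "$_DOGFOOD_DIR/domain-plan-output/plan.md"
test -s "$_DOGFOOD_DIR/domain-plan-output/plan.md" && echo "plan saved"
```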
Read skills/domain-build/SKILL.md and follow its flow using the plan from Phase 1.
Build the domain stack in $_DOGFOOD_DIR/generated-stack/.
Record the same categories as Phase 1, plus:
Read skills/skill-check/SKILL.md and run it with --all on the generated domain stack.
Record:
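Conceptually, the `--all` pass visits every generated skill in the stack; a minimal sketch of that enumeration, with the directory layout assumed for illustration:

```shell
# Hedged sketch: enumerate the SKILL.md files a full check pass would visit.
# The generated-stack layout here is an assumption, not Prismstack's spec.
STACK=$(mktemp -d)/generated-stack
mkdir -p "$STACK/skill-a" "$STACK/skill-b"
touch "$STACK/skill-a/SKILL.md" "$STACK/skill-b/SKILL.md"
count=$(find "$STACK" -name 'SKILL.md' | wc -l | tr -d ' ')
echo "Skills to check: $count"
```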
Compile all recorded issues into a structured report:
# Dogfood Report: [domain name]
Date: YYYY-MM-DD
Prismstack version: <from VERSION>
## Summary
- Domain: [name]
- Skills generated: N
- Average quality score: X.XX / 5.00
- Issues found: N (C critical, I important, M minor)
## Issues
### Issue 1: [title]
- **Phase**: plan / build / check
- **Severity**: critical / important / minor
- **What happened**: description
- **Expected**: what should have happened
- **Root cause**: which Prismstack skill or methodology caused it
- **Suggested fix**: use /skill-dev on X, or edit methodology Y
### Issue 2: ...
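The summary counts in the template above can be derived mechanically from the recorded issues; a sketch assuming one severity word per recorded issue and one numeric score per generated skill (both formats are assumptions):

```shell
# Hedged sketch: tally issue severities and average the quality scores.
# Input formats are assumptions, not Prismstack's actual record format.
severities="critical
important
minor
important"
crit=$(printf '%s\n' "$severities" | grep -c '^critical$')
imp=$(printf '%s\n' "$severities" | grep -c '^important$')
minr=$(printf '%s\n' "$severities" | grep -c '^minor$')
total=$((crit + imp + minr))
echo "Issues found: $total ($crit critical, $imp important, $minr minor)"

# Example quality scores, one per generated skill.
printf '4.5\n3.8\n4.2\n' | awk '{s+=$1; n++} END {printf "Average: %.2f / 5.00\n", s/n}'
```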
Save the report to .claude/skills/dogfood/last-dogfood-report.md.
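Writing the report to its fixed location can be sketched as below (report contents abbreviated to the first two lines of the template):

```shell
# Hedged sketch: write the compiled report to the fixed path.
REPORT_DIR=".claude/skills/dogfood"
mkdir -p "$REPORT_DIR"
cat > "$REPORT_DIR/last-dogfood-report.md" <<'EOF'
# Dogfood Report: example-domain
Date: 2025-01-01
EOF
test -f "$REPORT_DIR/last-dogfood-report.md" && echo "report written"
```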
Ask the user whether to delete the temp directory or keep $_DOGFOOD_DIR for manual inspection. Act on their choice.
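Acting on that choice might look like the following sketch; `KEEP` stands in for the user's answer and is an assumption of this example:

```shell
# Hedged sketch: delete the temp directory unless the user asked to keep it.
_DOGFOOD_DIR=$(mktemp -d)/dogfood-test
mkdir -p "$_DOGFOOD_DIR"
KEEP=no   # substitute the user's actual answer here
if [ "$KEEP" = no ]; then
  rm -rf "$(dirname "$_DOGFOOD_DIR")"
fi
test ! -d "$_DOGFOOD_DIR" && echo "cleaned up"
```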
Present a short summary of the run to the user: skills generated, quality scores, issues found, and where the report was saved.