USE WHEN a plan, decision, or architecture needs adversarial stress-testing. Spawns multiple agents to attack from different angles.
Red Team spawns three adversarial agents, each attacking a target (plan, architecture, decision, or code) from a different angle. The agents produce structured vulnerability reports that are synthesized into a unified threat model with prioritized action items.
Three agents attack simultaneously, each from a different angle:
- **Security Adversary** (security-reviewer, Sonnet): probes for security vulnerabilities and exploit paths
- **Scale Adversary** (architect, Opus): probes for failures under load, growth, and adverse operating conditions
- **User Adversary** (critic, Opus): probes for usability failures and confusing or hostile user experiences
Each agent produces a report with entries in this format:
## [Vulnerability Name]
- **Severity**: critical | high | medium | low
- **Category**: security | scale | usability
- **Exploit Scenario**: How this vulnerability is triggered in practice
- **Impact**: What happens when it is exploited
- **Mitigation**: Recommended fix or defense
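For programmatic processing, each entry maps naturally onto a small record type. A minimal sketch in Python (the `Finding` class and its field names are illustrative assumptions, not part of the command):

```python
from dataclasses import dataclass

SEVERITIES = ("critical", "high", "medium", "low")
CATEGORIES = ("security", "scale", "usability")

@dataclass
class Finding:
    """One vulnerability entry, mirroring the report format above."""
    name: str
    severity: str        # one of SEVERITIES
    category: str        # one of CATEGORIES
    exploit_scenario: str
    impact: str
    mitigation: str

    def to_markdown(self) -> str:
        # Render the entry in the markdown format shown above.
        return "\n".join([
            f"## {self.name}",
            f"- **Severity**: {self.severity}",
            f"- **Category**: {self.category}",
            f"- **Exploit Scenario**: {self.exploit_scenario}",
            f"- **Impact**: {self.impact}",
            f"- **Mitigation**: {self.mitigation}",
        ])

# Hypothetical example entry from the Security Adversary.
f = Finding(
    name="SQL injection in search",
    severity="critical",
    category="security",
    exploit_scenario="Attacker submits a crafted query string",
    impact="Full database read access",
    mitigation="Use parameterized queries",
)
print(f.to_markdown())
```

Keeping severity and category as closed vocabularies makes the three agents' reports mergeable without normalization.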
The three reports are then synthesized into a unified report using this template:
## Red Team Report
### Critical Findings
[Must-fix before shipping]
### High Findings
[Should-fix before shipping]
### Medium Findings
[Fix in next iteration]
### Low Findings
[Track for future improvement]
### Overall Risk Assessment
[Summary: is this safe to ship? What is the biggest risk?]
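The synthesis step is essentially a group-and-sort over all agent findings: bucket every entry by severity, then emit the sections in priority order. A hedged sketch, assuming findings arrive as simple records (all names illustrative):

```python
# Order of sections in the unified report, highest priority first.
SEVERITY_ORDER = ("critical", "high", "medium", "low")

def synthesize(findings):
    """Group findings by severity and render the Red Team Report skeleton."""
    buckets = {sev: [] for sev in SEVERITY_ORDER}
    for finding in findings:
        buckets[finding["severity"]].append(finding["name"])

    lines = ["## Red Team Report"]
    for sev in SEVERITY_ORDER:
        lines.append(f"### {sev.capitalize()} Findings")
        for name in buckets[sev]:
            lines.append(f"- {name}")
        if not buckets[sev]:
            lines.append("- None")
    return "\n".join(lines)

# Hypothetical merged output from all three adversaries.
report = synthesize([
    {"name": "SQL injection in search", "severity": "critical"},
    {"name": "No rate limiting on login", "severity": "high"},
])
print(report)
```

The Overall Risk Assessment remains a judgment call for the synthesizing agent; only the bucketing and ordering are mechanical.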