Triggers: "Let's brainstorm", "I need ideas", "help me think through", "brainstorm with me", "what should I build", "how should I approach this" — open-ended exploration of possibilities. Handles everything from a blank slate to refining an existing concept.
The ultimate brainstorming partner. Help turn problems, ideas, and blank slates into fully formed designs through natural collaborative dialogue, proven ideation methodologies, and structured evaluation.
<HARD-GATE> Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have completed the brainstorming process and the user has approved the output. This applies regardless of perceived simplicity. </HARD-GATE>

This skill chains many phases with heavy sub-agent dispatch. To keep wall-clock time and context usage sane:
- Dispatch domain-researcher concurrently with the local file reads — do not serialize.
- The same applies to spec-reviewer dispatches: Opus 4.7 and Sonnet 4.6 handle parallel batching well; Haiku 4.5 may prefer serial execution for reliability.
This skill honors two mode flags when passed in the invocation (/rad-brainstormer:brainstorm-session --non-interactive, etc.):
- --non-interactive — Skip all user-approval gates. Produce a best-effort output, commit the artifact, and emit a trailing JSON block listing awaiting_user_review items (e.g., unconfirmed scope, unvalidated top pick, unresolved spec-review escalations). Intended for agent/CI callers that would otherwise deadlock on interactive menus.
- --resume <run-id> — Load checkpoint state from .brainstorm/state/<run-id>.json and continue from the last saved phase. See "Checkpoint & Resume" below.

If neither flag is present, run interactively as documented.
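In --non-interactive mode, the trailing block might look like the following (the field name matches the checkpoint schema; the specific items are illustrative, not prescribed):

```json
{
  "awaiting_user_review": [
    "Scope assumed: single-user CLI tool (unconfirmed)",
    "Top pick chosen without user validation",
    "Spec review hit iteration cap with 2 unresolved issues"
  ]
}
```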
Long brainstorms (domain research + full divergent/convergent + design + spec review) are compaction-prone. Save state to .brainstorm/state/<run-id>.json at each major phase transition (the phase values enumerated in the checkpoint schema).
Checkpoint schema:
```json
{
  "run_id": "string",
  "skill": "brainstorm-session",
  "phase": "1 | 3 | 5 | 6 | 10 | 11-iter-N",
  "started_at": "ISO-8601",
  "last_saved": "ISO-8601",
  "topic": "string",
  "starting_state": "blank_slate | vague_idea | clear_idea | improving_existing | needs_evaluation",
  "domain_brief": "JSON from domain-researcher or null",
  "ideas_generated": ["string"],
  "finalists": ["string"],
  "recommended": "string or null",
  "output_path": "string or null",
  "spec_review_history": [{"iteration": 1, "status": "issues_found", "issues": []}],
  "awaiting_user_review": ["string"]
}
```
On --resume <run-id>, load the file, announce the phase you're resuming from, and continue. Do not re-run completed phases.
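The save/resume mechanics can be sketched as follows, assuming the schema above (function names and the sample state are illustrative, not part of the skill):

```python
import json
from pathlib import Path

STATE_DIR = Path(".brainstorm/state")

def save_checkpoint(state: dict) -> None:
    """Persist the checkpoint after a phase transition."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)
    path = STATE_DIR / f"{state['run_id']}.json"
    path.write_text(json.dumps(state, indent=2))

def load_checkpoint(run_id: str) -> dict:
    """Load state for --resume <run-id>; raises FileNotFoundError for unknown runs."""
    path = STATE_DIR / f"{run_id}.json"
    return json.loads(path.read_text())

# Illustrative round-trip: save at a phase boundary, then resume.
state = {"run_id": "demo-001", "skill": "brainstorm-session", "phase": "3",
         "topic": "CLI note-taking tool", "awaiting_user_review": []}
save_checkpoint(state)
resumed = load_checkpoint("demo-001")
```

On resume, announce `resumed["phase"]` and skip any phase at or before it.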
Every topic goes through at minimum a light brainstorming process. A simple utility, a quick feature, a business question — all of them. "Simple" topics are where unexamined assumptions cause the most wasted work. The process can be short (a few exchanges for truly simple things), but MUST explore before committing.
Complete these steps in order:
```
Detect Starting State
          │
          ▼
Explore Context ──► Anti-Anchoring: Draw out user ideas first
          │
          ▼
Domain Research Needed? ──yes──► Dispatch domain-researcher agent
          │ no                            │
          ▼                               ▼
Visual Questions Ahead? ──yes──► Offer Visual Companion (own message)
          │ no                            │
          ▼◄──────────────────────────────┘
SELECT METHODOLOGY (based on starting state)
          │
          ▼
┌──────────────────────────────────────┐
│ DIVERGENT PHASE                      │
│ "We're in idea generation mode —     │
│  all ideas welcome, no filtering"    │
│                                      │
│ Apply selected techniques            │
│ Capture ALL ideas (user's + yours)   │
│ NO evaluation during this phase      │
└──────────────┬───────────────────────┘
               │
               ▼
┌──────────────────────────────────────┐
│ CONVERGENT PHASE                     │
│ "Let's switch to evaluation mode"    │
│                                      │
│ Apply evaluation framework           │
│ Narrow to 2-3 top candidates         │
│ Optionally dispatch idea-challenger  │
└──────────────┬───────────────────────┘
               │
               ▼
Propose 2-3 Approaches (with trade-offs + recommendation)
          │
          ▼
Present Design/Output (section by section, get approval after each)
          │
          ▼
Write Output Document + Commit
          │
          ▼
Domain-Specific Routing:
  Software → spec review loop → writing-plans skill
  Business → business model canvas → action plan
  Content  → content strategy doc → editorial calendar
  Research → research plan → methodology outline
  General  → action plan with next steps
```
Before asking a single question about the topic, determine WHERE the user is:
| Signal | Starting State | Approach |
|---|---|---|
| "I want to build something but don't know what" | Blank slate | Creative unblocking → Starbursting → HMW |
| "I have this vague idea about..." | Vague idea | Clarifying questions → SCAMPER → Reverse Brainstorm |
| "I want to build X but need to think through the design" | Clear idea | Six Thinking Hats → Morphological Analysis → Design |
| "I have X but want to make it better" | Improving existing | SCAMPER → 5 Whys → TRIZ (if technical) |
| "I have several ideas and need help choosing" | Needs evaluation | Jump to Convergent Phase |
If unclear, ask: "Where are you in your thinking? Are you starting from scratch, exploring a vague idea, or refining something specific?"
CRITICAL — Do this BEFORE offering any ideas.
Research on human–AI ideation consistently shows that when AI suggests ideas first, humans anchor on them — producing fewer, less varied, less original ideas. The brainstormer must counteract this:
Assess whether domain research would improve the brainstorming:
If research is needed, dispatch the domain-researcher agent directly (it is defined in this plugin with model: opus, JSON-first output):
```
Agent tool:
  subagent_type: domain-researcher
  description: "Research [domain] for brainstorming"
  prompt: <substitute references/subagent-prompts/domain-research.md with {topic} and {session_context}>
```
Parse the JSON response (markdown fallback accepted — see the agent's output contract). Weave the brief into questions naturally — do not dump a research report on the user. If the brief contains items in its surprises field, flag them prominently before continuing.
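The JSON-first parse with markdown fallback can be sketched like this (the fallback heuristic is illustrative; the agent's actual output contract governs what the fallback should accept):

```python
import json
import re

def parse_brief(raw: str) -> dict:
    """Parse the domain-researcher reply: JSON first, fenced-markdown fallback."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Fallback: look for a fenced ```json block inside a markdown reply.
    match = re.search(r"```json\s*(.*?)```", raw, re.DOTALL)
    if match:
        return json.loads(match.group(1))
    # Last resort: wrap the prose so downstream code still receives a dict.
    return {"summary": raw, "surprises": []}

brief = parse_brief('{"summary": "niche is crowded", "surprises": ["X"]}')
if brief.get("surprises"):
    pass  # flag surprises prominently before continuing
```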
Reference references/methodology-catalog.md for detailed technique instructions.
Blank-slate sessions can also draw on references/creative-unblocking.md. During ideation:
Reference references/evaluation-frameworks.md for detailed framework instructions.
Adapt the output format to the domain:
For software topics:
- Present the design for approval (in --non-interactive mode, skip approvals and mark unconfirmed sections in awaiting_user_review).
- Write the design to docs/plans/YYYY-MM-DD-<topic>-design.md (user preferences override).
- Run the spec-reviewer agent loop (max 5 iterations — see escalation below).
- Hand off to /rad-planner:plan-project for implementation planning. If rad-planner is not installed, surface this to the user and suggest installing it.

Dispatch spec-reviewer with the substituted references/subagent-prompts/spec-review.md template, passing the current iteration, max_iterations (default 5), and any prior iteration's blocking_issues. Parse the JSON response:
- status: approved → proceed to user review.
- status: issues_found and iteration < max_iterations → fix the blocking issues, increment iteration, re-dispatch.
- escalation_required: true (or iteration >= max_iterations with issues remaining) → stop looping. Surface the unresolved_issues JSON to the user with: "Spec review hit iteration cap with unresolved issues. Please decide: (a) accept these as known gaps, (b) rewrite the affected sections yourself, or (c) drop back to design phase." In --non-interactive mode, commit the current spec and add the unresolved issues to awaiting_user_review.

Non-software domains write their output under docs/brainstorm/, using the matching filename:
- docs/brainstorm/YYYY-MM-DD-<topic>-strategy.md
- docs/brainstorm/YYYY-MM-DD-<topic>-discovery.md
- docs/brainstorm/YYYY-MM-DD-<topic>-content.md
- docs/brainstorm/YYYY-MM-DD-<topic>-plan.md

Visual Companion: a browser-based companion for showing mockups, diagrams, and visual options. Available as a tool — not a mode.
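Stepping back to the spec-review loop: its control flow can be sketched as below. The dispatch callable and fix_blocking_issues are illustrative stand-ins for the real Agent tool interaction and the assistant's own spec edits; the JSON field names follow the response contract above.

```python
def fix_blocking_issues(spec: str, issues: list) -> str:
    # Hypothetical fixer: in the real skill, the assistant rewrites the spec itself.
    return spec + "\n<!-- addressed: " + ", ".join(issues) + " -->"

def run_spec_review(spec: str, dispatch, max_iterations: int = 5) -> dict:
    """Iterate spec review until approval, escalation, or the iteration cap."""
    blocking_issues: list = []
    for iteration in range(1, max_iterations + 1):
        result = dispatch(spec, iteration, max_iterations, blocking_issues)
        if result["status"] == "approved":
            return {"outcome": "approved", "iterations": iteration}
        if result.get("escalation_required"):
            return {"outcome": "escalate",
                    "unresolved_issues": result.get("unresolved_issues", [])}
        # issues_found below the cap: fix, increment, re-dispatch.
        blocking_issues = result["blocking_issues"]
        spec = fix_blocking_issues(spec, blocking_issues)
    # Iteration cap reached with issues remaining → escalate to the user.
    return {"outcome": "escalate", "unresolved_issues": blocking_issues}
```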
Offering: When upcoming questions involve visual content, offer once:
"Some of what we're working on might be easier to explain if I can show it in a browser. I can put together mockups, diagrams, and comparisons as we go. Want to try it?"
This offer MUST be its own message. Don't combine with questions.
Per-question decision: Even after acceptance, decide FOR EACH QUESTION whether to use browser or terminal.
If accepted, read the visual companion guide. The scripts/ directory contains the visual companion server (start-server.sh, stop-server.sh, frame-template.html, helper.js, server.js).
Before asking detailed questions, assess scope. If the request describes multiple independent subsystems: