Generate and critically evaluate grounded improvement ideas for the current project. Use when asking what to improve, requesting idea generation, exploring surprising improvements, or wanting the AI to proactively suggest strong project directions before brainstorming one in depth. Triggers on phrases like 'what should I improve', 'give me ideas', 'ideate on this project', 'surprise me with improvements', 'what would you change', or any request for AI-generated project improvement suggestions rather than refining the user's own idea.
Note: The current year is 2026. Use this when dating ideation documents and checking recent ideation artifacts.
ce:ideate precedes ce:brainstorm.
ce:ideate answers: "What are the strongest ideas worth exploring?"
ce:brainstorm answers: "What exactly should one chosen idea mean?"
ce:plan answers: "How should it be built?"

This workflow produces a ranked ideation artifact in docs/ideation/. It does not produce requirements, plans, or code.
Use the platform's blocking question tool when available (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini). Otherwise, present numbered options in chat and wait for the user's reply before proceeding.
Ask one question at a time. Prefer concise single-select choices when natural options exist.
<focus_hint> #$ARGUMENTS </focus_hint>
Interpret any provided argument as optional context. It may be:
- DX improvements
- plugins/compound-engineering/skills/
- low-complexity quick wins
- top 3, 100 ideas, or raise the bar

If no argument is provided, proceed with open-ended ideation.
ce:brainstorm defines the selected one precisely enough for planning.

Look in docs/ideation/ for ideation documents created within the last 30 days.
Treat a prior ideation doc as relevant when:
If a relevant doc exists, ask whether to:
If continuing:
Infer three things from the argument:
Issue-tracker intent triggers when the argument's primary intent is about analyzing issue patterns: bugs, github issues, open issues, issue patterns, what users are reporting, bug reports, issue themes.
Do NOT trigger on arguments that merely mention bugs as a focus: bug in auth, fix the login issue, the signup bug — these are focus hints, not requests to analyze the issue tracker.
When combined (e.g., top 3 bugs in authentication): detect issue-tracker intent first, volume override second, remainder is the focus hint. The focus narrows which issues matter; the volume override controls survivor count.
Default volume:
Honor clear overrides such as:
- top 3
- 100 ideas
- go deep
- raise the bar

Use reasonable interpretation rather than formal parsing.
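The section above asks for loose interpretation rather than strict parsing, but the precedence it describes (issue-tracker intent first, volume override second, remainder as focus hint) can be sketched. The regex patterns and function name below are hypothetical illustrations, not part of this skill:

```python
import re

# Hypothetical sketch of the argument-interpretation precedence:
# issue-tracker intent first, volume override second, leftovers = focus hint.
# Real interpretation should stay loose, not regex-exact.
ISSUE_INTENT = re.compile(
    r"\b(github issues?|open issues?|issue patterns?|bug reports?|issue themes?)\b",
    re.IGNORECASE,
)
VOLUME_OVERRIDE = re.compile(r"\btop (\d+)\b|\b(\d+) ideas\b", re.IGNORECASE)

def interpret_argument(arg: str) -> dict:
    """Split an ideation argument into intent flag, volume override, and focus hint."""
    issue_intent = bool(ISSUE_INTENT.search(arg))
    volume = None
    m = VOLUME_OVERRIDE.search(arg)
    if m:
        volume = int(m.group(1) or m.group(2))
    # Whatever remains after removing recognized signals narrows the focus.
    focus = VOLUME_OVERRIDE.sub("", ISSUE_INTENT.sub("", arg)).strip(" ,") or None
    return {"issue_intent": issue_intent, "volume": volume, "focus": focus}
```

Note that a bare mention like "bug in auth" deliberately stays a focus hint here, matching the rule that such phrases do not trigger issue-tracker analysis.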
Before generating ideas, gather codebase context.
Run agents in parallel in the foreground (do not use background dispatch — the results are needed before proceeding):
Quick context scan — dispatch a general-purpose sub-agent with this prompt:
Read the project's AGENTS.md (or CLAUDE.md only as a compatibility fallback, then README.md if neither exists), then discover the top-level directory layout using the native file-search/glob tool (e.g., `Glob` with pattern `*` or `*/*` in Claude Code). Return a concise summary (under 30 lines) covering:
- project shape (language, framework, top-level directory layout)
- notable patterns or conventions
- obvious pain points or gaps
- likely leverage points for improvement
Keep the scan shallow — read only top-level documentation and directory structure. Do not analyze GitHub issues, templates, or contribution guidelines. Do not do deep code search.
Focus hint: {focus_hint}
Learnings search — dispatch compound-engineering:research:learnings-researcher with a brief summary of the ideation focus.
Issue intelligence (conditional) — if issue-tracker intent was detected in Phase 0.2, dispatch compound-engineering:research:issue-intelligence-analyst with the focus hint. If a focus hint is present, pass it so the agent can weight its clustering toward that area. Run this in parallel with agents 1 and 2.
If the agent returns an error (gh not installed, no remote, auth failure), log a warning to the user ("Issue analysis unavailable: {reason}. Proceeding with standard ideation.") and continue with the existing two-agent grounding.
If the agent reports fewer than 5 total issues, note "Insufficient issue signal for theme analysis" and proceed with default ideation frames in Phase 2.
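The two fallbacks above can be sketched as one degradation path. In this sketch, `dispatch` stands in for the platform's sub-agent call, and the result keys (`error`, `total_issues`, `themes`) are assumptions for illustration:

```python
# Hypothetical sketch of the issue-intelligence degradation path.
MIN_ISSUE_SIGNAL = 5  # below this, theme clustering is too noisy to trust

def gather_issue_intelligence(dispatch, focus_hint):
    """Return issue themes, or None to fall back to default ideation frames."""
    result = dispatch("issue-intelligence-analyst", focus=focus_hint)
    if result.get("error"):
        # gh missing, no remote, or auth failure: warn and continue without issues
        print(f"Issue analysis unavailable: {result['error']}. "
              "Proceeding with standard ideation.")
        return None
    if result.get("total_issues", 0) < MIN_ISSUE_SIGNAL:
        print("Insufficient issue signal for theme analysis.")
        return None
    return result["themes"]
```

Either `None` return leaves the workflow on the standard two-agent grounding, so a missing issue tracker never blocks ideation.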
Consolidate all results into a short grounding summary. When issue intelligence is present, keep it as a distinct section so ideation sub-agents can distinguish between code-observed and user-reported signals:
Do not do external research in v1.
Follow this mechanism exactly:
Generate the full candidate list before critiquing any idea.
Each sub-agent targets about 7-8 ideas by default. With 4-6 agents this yields 30-40 raw ideas, which merge and dedupe to roughly 20-30 unique candidates. Adjust the per-agent target when volume overrides apply (e.g., "100 ideas" raises it, "top 3" may lower the survivor count instead).
Push past the safe, obvious layer: each agent's first few ideas tend to be predictable, so instruct agents to keep generating beyond them.
Ground every idea in the Phase 1 scan.
Use this prompting pattern as the backbone:
If the platform supports sub-agents, use them to improve diversity in the candidate pool rather than to replace the core mechanism.
Give each ideation sub-agent the same:
When using sub-agents, assign each one a different ideation frame as a starting bias, not a constraint. Prompt each agent to begin from its assigned perspective but follow any promising thread wherever it leads — cross-cutting ideas that span multiple frames are valuable, not out of scope.
Frame selection depends on whether issue intelligence is active:
When issue-tracker intent is active and themes were returned:
- Each theme with confidence: high or confidence: medium becomes an ideation frame. The frame prompt uses the theme title and description as the starting bias.

When issue-tracker intent is NOT active (default):
Ask each ideation sub-agent to return a standardized structure for each idea so the orchestrator can merge and reason over the outputs consistently. Prefer a compact JSON-like structure with:
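The section leaves the exact fields of that structure open, so the record below is an illustrative shape, not a prescribed schema. The point is a consistent, mergeable form the orchestrator can dedupe and score:

```python
# Illustrative per-idea record returned by an ideation sub-agent.
# Field names and level values are assumptions; only the requirement of a
# compact, standardized structure comes from the workflow itself.
idea = {
    "title": "Cache sub-agent grounding summaries",
    "frame": "performance",              # the agent's assigned ideation frame
    "summary": "One-paragraph description of the change and why it helps.",
    "grounding": "Phase 1 scan: repeated full scans observed in CI config.",
    "impact": "high",                    # high | medium | low
    "effort": "low",
    "confidence": "medium",
}
```

A fixed shape like this is what makes the later merge, dedupe, and cross-frame synthesis steps mechanical rather than ad hoc.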
Merge and dedupe the sub-agent outputs into one master candidate list.
Synthesize cross-cutting combinations. After deduping, scan the merged list for ideas from different frames that together suggest something stronger than either alone. If two or more ideas naturally combine into a higher-leverage proposal, add the combined idea to the list (expect 3-5 additions at most). This synthesis step belongs to the orchestrator because it requires seeing all ideas simultaneously.
Spread ideas across multiple dimensions when justified:
The mechanism to preserve is:
The sub-agent pattern to preserve is:
Review every generated idea critically.
Prefer a two-layer critique:
Do not let critique agents generate replacement ideas in this phase unless explicitly refining.
Critique agents may provide local judgments, but final scoring authority belongs to the orchestrator so the ranking stays consistent across different frames and perspectives.
For each rejected idea, write a one-line reason.
Use rejection criteria such as:
Use a consistent survivor rubric that weighs:
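The rubric dimensions are not enumerated in this section, so the sketch below assumes illustrative ones (impact, feasibility, grounding, novelty) and arbitrary weights. What it does take from the workflow is that scoring is applied uniformly and owned by the orchestrator:

```python
# Hypothetical survivor-rubric scoring. Dimensions and weights are
# assumptions; the workflow only requires consistent, orchestrator-owned ranking.
WEIGHTS = {"impact": 0.4, "feasibility": 0.25, "grounding": 0.2, "novelty": 0.15}
LEVELS = {"low": 1, "medium": 2, "high": 3}

def score(idea: dict) -> float:
    """Weighted rubric score, applied uniformly so ranking is comparable across frames."""
    return sum(weight * LEVELS[idea[dim]] for dim, weight in WEIGHTS.items())

candidates = [
    {"title": "A", "impact": "high", "feasibility": "medium", "grounding": "high", "novelty": "low"},
    {"title": "B", "impact": "low", "feasibility": "high", "grounding": "medium", "novelty": "high"},
]
ranked = sorted(candidates, key=score, reverse=True)
```

Because one function scores every idea, a high-novelty idea from one frame and a high-grounding idea from another land on the same scale, which is exactly why final scoring authority stays with the orchestrator.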
Target output:
Present the surviving ideas to the user before writing the durable artifact.
This first presentation is a review checkpoint, not the final archived result.
Present only the surviving ideas in structured form:
Then include a brief rejection summary so the user can see what was considered and cut.
Keep the presentation concise. The durable artifact holds the full record.
Allow brief follow-up questions and lightweight clarification before writing the artifact.
Do not write the ideation doc yet unless:
- ce:brainstorm
- Proof sharing
- session end

Write the ideation artifact after the candidate set has been reviewed enough to preserve.
Always write or update the artifact before:
- ce:brainstorm

To write the artifact:
- Ensure docs/ideation/ exists
- Default filename: docs/ideation/YYYY-MM-DD-<topic>-ideation.md
- Use docs/ideation/YYYY-MM-DD-open-ideation.md when no focus exists

Use this structure and omit clearly irrelevant fields only when necessary:
---