Explores user workflows through targeted questions about pipeline modes, pain points, routine tasks, and human intervention points to inform agent system design. Use when exploring workflows after analysis. Use when user says "explore workflows", "brainstorm workflows", "what should I automate". Use when called by analyzing-agent-systems.
Brainstorming workflows IS targeted exploration of how users actually work, what frustrates them, and what can be automated.
The analysis report tells you what the system looks like; brainstorming tells you what the user needs. These are different data sources — never skip one because the other exists.
Core principle: Ask about failures before wishes.
Violating the letter of the rules is violating the spirit of the rules.
Pattern: Chain
Handoff: user-confirmation
Next: planning-agent-systems
Chain: main
Before ANY action, create task list using TaskCreate:
TaskCreate for EACH task below:
- Subject: "[brainstorming-workflows] Task N: <action>"
- ActiveForm: "<doing action>"
Tasks:
1. Import analysis findings
2. Pipeline mode exploration
3. Pain point discovery
4. Routine task identification
5. Human intervention points
6. Component type judgment
7. Produce workflow summary
Announce: "Created 7 tasks. Starting execution..."
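The task-creation step can be sketched as follows. This is a minimal illustration: `TaskCreate`'s real call shape is assumed, and the Python function is only a stand-in for the tool call.

```python
# Task names taken from the workflow graph at the end of this skill.
TASKS = [
    ("Import analysis findings", "Importing analysis findings"),
    ("Pipeline mode exploration", "Exploring pipeline modes"),
    ("Pain point discovery", "Discovering pain points"),
    ("Routine task identification", "Identifying routine tasks"),
    ("Human intervention points", "Mapping human intervention points"),
    ("Component type judgment", "Judging component types"),
    ("Produce workflow summary", "Producing workflow summary"),
]

def create_tasks(task_create):
    """Create one task per entry using the naming rule above,
    then return the announcement string."""
    for n, (action, doing) in enumerate(TASKS, start=1):
        task_create(
            subject=f"[brainstorming-workflows] Task {n}: {action}",
            active_form=doing,
        )
    return f"Created {len(TASKS)} tasks. Starting execution..."
```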
Execution rules:
- TaskUpdate status="in_progress" BEFORE starting each task
- TaskUpdate status="completed" ONLY after verification passes
- TaskList to confirm all completed
Goal: Load analysis report and let user select which findings to address.
If analysis report path was provided: read the report, present its findings, and ask which ones the user wants to address.
If no analysis report: Note that no analysis was done, proceed to Task 2.
Skip questions in later tasks that the analysis already answered.
Verification: User has confirmed which findings to address (or no analysis exists).
Goal: Determine how workflows connect and what state management they need.
Important: Read references/exploration-questions.md for the question bank (Pipeline section).
Rules:
Verification: Each identified workflow has mode + state management decision.
Goal: Find where the current agent system fails or is missing.
Important: Read references/exploration-questions.md for the question bank (Pain Point section).
Rules:
Verification: Pain points documented with root cause and component type.
Goal: Find repetitive small tasks that could be automated.
Important: Read references/exploration-questions.md for the question bank (Routine Task section).
Rules:
Verification: Routine tasks documented with automation approach.
Goal: Find where humans must review, approve, or intervene in the workflows.
Important: Read references/exploration-questions.md for the question bank (Human-in-the-Loop section).
Rules:
- user-confirmation handoff vs auto-invoke
Required question — quality gate loop:
Ask: "When a reviewer reports issues, do you want fixes applied automatically before re-reviewing (auto loop), or do you want to confirm each fix before continuing (manual loop)?"
Record the answer — this determines whether review skills use auto-invoke or user-confirmation handoff to the next fixing step.
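The recorded answer maps directly to a handoff type. A minimal sketch, where the function name and string values are illustrative rather than part of the skill:

```python
def gate_handoff(preference: str) -> str:
    """Map the user's quality-gate answer to the handoff type that
    review skills use when passing control to the next fixing step."""
    if preference == "auto":
        return "auto-invoke"       # fixes applied automatically before re-review
    return "user-confirmation"     # user confirms each fix before continuing
```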
Verification: Intervention points documented with type and affected workflow step. Quality gate loop preference recorded.
Goal: Map every discovered need to the right component type.
For each pain point, routine task, intervention point, and analysis finding:
Present the mapping to user for confirmation.
Challenge any over-engineering: "Can this be a rule instead of a skill?"
Read references/anthropic-patterns.md for the complexity ladder — prefer lowest level that works.
Verification: Every need mapped to component type, user confirmed.
Goal: Write structured summary to .rcc/{timestamp}-workflows.md.
Important: Read references/summary-template.md for the full summary format.
Include: pipeline mode mapping, pain points, routine tasks, human intervention points, component recommendations.
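The authoritative format lives in references/summary-template.md; purely as an illustration of the sections listed above, the file might look like this (headings are assumptions, not the actual template):

```markdown
# Workflow Summary ({timestamp})

## Pipeline Mode Mapping
<!-- each workflow: mode + state management decision -->

## Pain Points
<!-- each with root cause and component type -->

## Routine Tasks
<!-- each with automation approach -->

## Human Intervention Points
<!-- type, affected workflow step, quality gate loop preference -->

## Component Recommendations
<!-- each need mapped to a component type (user confirmed) -->
```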
Handoff: "Workflow summary complete. Continue to plan agent system components?"
If continuing: invoke the planning-agent-systems skill and pass the workflow summary path.
Verification: Summary written with all sections filled.
These thoughts mean you're rationalizing. STOP and reconsider:
| Thought | Reality |
|---|---|
| "I already know from the analysis" | Analysis finds system weaknesses. Users reveal workflow needs. Different data. |
| "Skip the questions" | Questions surface needs that code scanning can never find. |
| "Asking multiple questions saves time" | Multiple questions overwhelm. One at a time. |
| "This obviously needs a skill" | Most workflows need less than you think. Check the complexity ladder. |
| "Skip past failures" | Past failures are the highest-value context. Always ask. |
| "The user knows what they want" | Users describe solutions, not problems. Dig for the actual need. |
digraph brainstorm_workflows {
rankdir=TB;
start [label="Brainstorm\nworkflows", shape=doublecircle];
import [label="Task 1: Import\nanalysis findings", shape=box];
has_analysis [label="Analysis\nexists?", shape=diamond];
select [label="User selects\nfindings to address", shape=box];
pipeline [label="Task 2: Pipeline\nmode exploration", shape=box];
pain [label="Task 3: Pain point\ndiscovery", shape=box];
routine [label="Task 4: Routine task\nidentification", shape=box];
hitl [label="Task 5: Human\nintervention points", shape=box];
component [label="Task 6: Component\ntype judgment", shape=box];
summary [label="Task 7: Produce\nworkflow summary", shape=box];
handoff [label="Invoke\nplanning-agent-systems", shape=box];
done [label="Brainstorm complete", shape=doublecircle];
start -> import;
import -> has_analysis;
has_analysis -> select [label="yes"];
has_analysis -> pipeline [label="no"];
select -> pipeline;
pipeline -> pain;
pain -> routine;
routine -> hitl;
hitl -> component;
component -> summary;
summary -> handoff [label="continue"];
summary -> done [label="stop here"];
handoff -> done;
}