Strategy skill (orchestrator-backed Diverge→Discuss→Converge). Owns exploratory DUF decomposition; deterministic remember/refine moved to /r.
General-purpose strategic thinking engine for any situation requiring multi-perspective analysis.
- BrainstormOrchestrator execution
- /r first

Core principle: /s applies fresh strategic thinking to the current context. It works for plans, solutions, or anything that needs multi-perspective analysis. It detects the conversation focus from /q context, session activity, or chat history.
This project uses Director Model: Human director + AI agent implementation.
Workflow:
What this means for strategy:
Key distinction: LLM-generated code under user direction = ✅. Autonomous background services = ❌.
- /r: deterministic remember + refine (what did we forget, predictable improvements, deterministic pre-mortem, plan validation)
- /s: exploratory multi-persona strategy (high-upside options, adversarial tradeoffs, and uncertainty handling)

/s automatically detects conversation context and applies strategic thinking to ongoing work. It does NOT need an explicit topic when you're mid-conversation.
When you invoke /s without arguments, it follows this inference chain:
- q_context (confidence 0.9): if you've run /q, uses the strategic work summary
- session_activity (confidence 0.75): infers from recent file edits in the session
- chat_context (confidence 0.6): analyzes the recent conversation for the focus
- fallback (confidence 0.4): "General strategic brainstorming"

This means /s automatically applies strategic thinking to:
There's an important distinction:
- /q and session activity — that's the input to strategic thinking, not a constraint

Example correct flow:
```shell
# You're working on a consolidation plan for /p
/q   # Generates work summary about /p consolidation
/s   # AUTOMATICALLY detects context, applies strategic thinking to the /p plan
```
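The inference chain above can be sketched as a priority scan over the context sources. The source names and confidence values come from the list above; the function itself is a hypothetical illustration, not the skill's actual implementation:

```python
def infer_topic(q_context=None, session_activity=None, chat_context=None):
    """Pick the highest-confidence available context source.

    Sources and confidence values mirror the inference chain above;
    this helper is illustrative only.
    """
    chain = [
        (q_context, 0.9, "q_context"),                 # /q work summary
        (session_activity, 0.75, "session_activity"),  # recent file edits
        (chat_context, 0.6, "chat_context"),           # recent conversation
    ]
    for topic, confidence, source in chain:
        if topic:
            return {"topic": topic, "confidence": confidence, "source": source}
    return {"topic": "General strategic brainstorming",
            "confidence": 0.4, "source": "fallback"}
```
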
What /s does:
- /q work summary → detects you're working on /p consolidation

What /s does NOT do:
- /p implementation docs → would anchor on the current solution

/s owns exploratory parts of removed skills:
- /opt: multi-option optimization strategy, high-upside alternatives, adversarial tradeoffs
- /oops: independent perspective escalation for recurring failure patterns
- /opts: exploratory opportunity expansion beyond deterministic quick scans
- /value: exploratory value-creation options and upside hypotheses
- /value-maximization: expansion of high-upside alternatives when the deterministic pass marks exclusions
- /analysis-profile: exploratory architecture alternatives for performance tradeoffs
- /analysis-logs: exploratory failure-hypothesis expansion when deterministic checks are inconclusive

/s does not own deterministic triage (/r), command standards validation (/val decomposition), verification tiers (/verify decomposition), or promotion execution gates (/p*).
Before running the script, resolve the user's topic to a filesystem path:
- If the topic names something in the repo (a package/handoff, e.g. packages/arch, skills/s), resolve it to the actual directory path
- Pass --context-path so the external LLM receives all file contents from that directory

You have project context. The external LLM does not. Always pass --context-path when the topic refers to something in this repo.
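A minimal sketch of the resolution step, assuming topics map to directories under the repo root (the function name is hypothetical):

```python
from pathlib import Path


def resolve_context_path(topic: str, repo_root: Path):
    """Map a topic like 'skills/s' to a real directory, or None.

    Returns None when the topic does not name anything in the repo,
    in which case --context-path is simply omitted.
    """
    candidate = repo_root / topic
    if candidate.is_dir():
        return candidate
    return None
```
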
Run:

```shell
python P:/.claude/skills/s/scripts/run_heavy.py \
  --topic "{{USER_PROMPT}}" \
  --context-path "{{RESOLVED_PATH_OR_OMIT}}" \
  --personas "{{PERSONAS_CSV_OR_EMPTY}}" \
  --timeout "{{TIMEOUT_OR_180}}" \
  --ideas "{{IDEAS_OR_10}}" \
  --output "{{json|markdown|text}}" \
  {{--fresh-mode to prevent anchoring bias}} \
  {{--local-llm-repetition N for free diversity (N=2-3 recommended)}} \
  {{--local-only to skip external LLMs and use local only}} \
  {{--provider-tier T1,T2 to filter by quality tier}} \
  {{--mock if requested}}
```
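Optional flags should only be appended when actually set. A sketch of assembling the argument list (flag names come from the invocation above; the helper itself is hypothetical):

```python
def build_argv(topic, context_path=None, personas=None,
               timeout=180, ideas=10, output="markdown",
               fresh_mode=False, local_llm_repetition=None):
    """Assemble the run_heavy.py argument list, omitting unset options."""
    argv = ["python", "P:/.claude/skills/s/scripts/run_heavy.py",
            "--topic", topic,
            "--timeout", str(timeout),
            "--ideas", str(ideas),
            "--output", output]
    if context_path:                      # omit when no repo path resolved
        argv += ["--context-path", str(context_path)]
    if personas:                          # CSV of persona names
        argv += ["--personas", ",".join(personas)]
    if fresh_mode:
        argv.append("--fresh-mode")
    if local_llm_repetition:
        argv += ["--local-llm-repetition", str(local_llm_repetition)]
    return argv
```
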
Local LLM Repetition (Free Diversity Improvement):
--local-llm-repetition N runs the brainstorm N times with different cognitive-approach variations.
This is free compared to external LLMs — you get 2-3x the idea diversity without additional API costs.
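The repetition pass amounts to N local runs whose idea pools are merged and deduplicated. A sketch under that assumption (run_once and the variation names are stand-ins, not the skill's real internals):

```python
def repeated_brainstorm(run_once, topic, n=3):
    """Run the local brainstorm n times with varied prompts, merge ideas.

    run_once(topic, variation) -> list[str] is a stand-in for the real
    local-LLM call; each variation nudges the cognitive approach.
    """
    variations = ["first-principles", "analogy-driven", "constraint-removal"]
    seen, merged = set(), []
    for i in range(n):
        for idea in run_once(topic, variations[i % len(variations)]):
            key = idea.strip().lower()
            if key not in seen:          # keep only novel ideas
                seen.add(key)
                merged.append(idea)
    return merged
```
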
Local-Only Mode:
--local-only skips external LLM providers entirely and uses only local agents with prompt variations. Useful for:
Provider Tier Filtering:
--provider-tier T1,T2 filters external LLM providers by quality tier to avoid lower-quality models. Tiers range from T1 (top-tier, e.g. claude/anthropic) through T2 (high-quality) to T3 (experimental/lower-quality).
Examples:

```shell
# Use only top-tier providers (claude/anthropic)
/s "strategy topic" --provider-tier T1

# Use high-quality providers (T1 + T2)
/s "strategy topic" --provider-tier T1,T2

# Default allows all tiers
/s "strategy topic"   # Equivalent to --provider-tier T1,T2,T3
```
This prevents "stupid LLMs" by filtering out experimental/lower-quality providers from your brainstorm.
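Tier filtering reduces to a set-membership check over a provider-to-tier mapping. A sketch (the mapping shown in the test is illustrative; only claude/anthropic = T1 is stated above):

```python
def filter_providers(providers, allowed_tiers):
    """Keep only providers whose quality tier is in the allowed set.

    `providers` maps provider name -> tier string (e.g. "T1").
    """
    allowed = set(allowed_tiers)
    return [name for name, tier in providers.items() if tier in allowed]
```
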
Heavy output must include:
- session_id (unique per invocation, no cross-terminal coordination needed)
- ranked ideas with scores ([85/100])
- metrics (phase timings, agents spawned)
- value_map (top opportunities, expected upside, confidence)
- exclusions (filtered_out)
- recommendation (decision, alternatives, why_not, risks, rollback)

Multi-terminal guarantees:
- Every /s invocation is independent — no shared state, no coordination required
- Safe to run /s in multiple terminals simultaneously

```shell
# Mid-conversation: /s detects context automatically
/q   # Generate work summary
/s   # Automatically applies strategic thinking to ongoing work

# Explicit topic:
/s "architecture options for auth migration"

# Output variants:
/s "service boundary redesign" --output json

# Multi-terminal: /s works reliably across concurrent sessions
# No TTL: Ideas stay valid regardless of when generated
# No stale data: Every run produces fresh strategic thinking
```
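Under the heavy-output contract, a payload might look like the sketch below. Only the field names session_id, metrics, and value_map are taken from the contract; every value shown is illustrative:

```python
# Illustrative heavy-output payload (values are made up for the example).
heavy_output = {
    "session_id": "s-20250101-a1b2",   # unique per invocation
    "metrics": {
        "phase_timings": {"diverge": 42.0, "discuss": 31.5, "converge": 12.0},
        "agents_spawned": 5,
    },
    "value_map": [
        {"opportunity": "...", "expected_upside": "high", "confidence": 0.7},
    ],
}
```
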
✅ Appropriate:
- Strategic follow-up to /q or /r

❌ Not appropriate:
- Deterministic checks (use /r)
- Implementation work (use /p)
- Simple lookups (use /search)

```shell
# Typical strategic analysis flow:
/q   # What are we working on? (context summary)
/r   # Did we forget anything? (deterministic checks)
/s   # What are our options? (strategic thinking)
/p   # Make it work (implementation)
```
- /llm-brainstorm -> /s
- /llm-debate -> /s
- /strat removed

Version: 2.4.0