create build, new project, start build, plan build, build [X].
Before creating a project, AI MUST check user-config.yaml for incomplete onboarding:
# Check learning_tracker.completed in user-config.yaml
learn_projects: false → SUGGEST 'learn projects' skill FIRST
If learn_projects: false AND this is user's FIRST build:
Before creating your first project, would you like a quick 8-minute tutorial
on how Beam Next builds work? It covers:
- When to use builds vs skills (avoid common mistakes)
- Project structure and lifecycle
- How to track progress effectively
Say 'learn projects' to start the tutorial, or 'skip' to create directly.
If user says 'skip': Proceed with project creation but add this note at the end:
Tip: Run 'learn projects' later if you want to understand the project system deeply.
If learn_projects: true: Proceed normally without suggestion.
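The onboarding gate above can be sketched as a small check over the parsed config (the `learning_tracker.completed.learn_projects` structure is taken from the comment above; treat the exact nesting as an assumption to verify against the real user-config.yaml):

```python
# Sketch: decide whether to suggest the 'learn projects' tutorial.
# Assumes the parsed user-config.yaml looks like:
#   learning_tracker:
#     completed:
#       learn_projects: false
def should_suggest_tutorial(config: dict) -> bool:
    completed = (config.get("learning_tracker") or {}).get("completed") or {}
    # Missing key counts the same as false: first-time users get the suggestion.
    return not completed.get("learn_projects", False)
```

A missing or empty config is treated as "not onboarded", so the suggestion still fires on a user's first build.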
plan-project is the ONLY entry point for all project creation.
WORKFLOW SEQUENCE (DO NOT SKIP STEPS)
CRITICAL CHECKPOINTS:
8 Build Types - AI semantically matches user input against descriptions:
| Type | When to Use | Discovery Method |
|---|---|---|
| build | Creating software, features, tools | Inline |
| integration | Connecting APIs, external services | Skill: add-integration |
| research | Academic papers, systematic analysis | Skill: create-research-build |
| strategy | Business decisions, planning | Inline |
| content | Marketing, documentation, creative | Inline |
| process | Workflow optimization, automation | Inline |
| skill | Creating Beam Next skills | Skill: create-skill |
| generic | Anything else | Inline |
User: "plan build for X"
│
├── Read all templates/types/*/_type.yaml
├── Compare user input against each description
├── Select best match OR ask user to choose
│
└── Proceed with detected type
No keyword triggers - Type detection is semantic from description field.
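The template-reading step of the detection flow could look like this (a sketch only; it assumes each `_type.yaml` exposes a `description` field, per the flow above):

```python
# Sketch: gather type descriptions for semantic matching.
# Assumes each templates/types/{type}/_type.yaml has a 'description' field.
from pathlib import Path

import yaml

def load_type_descriptions(base="templates/types") -> dict:
    descriptions = {}
    for meta_path in sorted(Path(base).glob("*/_type.yaml")):
        with open(meta_path) as f:
            meta = yaml.safe_load(f) or {}
        descriptions[meta_path.parent.name] = meta.get("description", "")
    return descriptions

# The AI then compares the user's request against each description and
# either selects the best match or asks the user to choose.
```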
CRITICAL: Before starting any workflow, detect which mode to use.
ls -d 03-projects/ 2>/dev/null
03-projects/ exists?
├── YES → BUILD_CREATION mode
└── NO → WORKSPACE_SETUP mode
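The same check, expressed as a minimal sketch (equivalent to the `ls -d` test above):

```python
# Sketch: the presence of 03-projects/ selects the workflow mode.
from pathlib import Path

def detect_mode(root: str = ".") -> str:
    return "BUILD_CREATION" if (Path(root) / "03-projects").is_dir() else "WORKSPACE_SETUP"
```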
# 1.1 Detect type from user input (semantic matching)
# 1.2 Create build structure
beam-next-init-project "Build Name" --type {type} --path 03-projects/active
# 1.3 Load templates from types/{type}/
# 1.4 Initialize resume-context.md
# 1.5 Check roadmap and link if match found (see below)
Output: Build folder with 4 directories + planning file templates
If 02-memory/roadmap.yaml exists, check if build name matches a roadmap item:
# Roadmap linking logic (executed inline by AI)
import re
from pathlib import Path

import yaml

def slugify(name: str) -> str:
    """Convert name to slug: lowercase, replace spaces with hyphens, remove non-alphanumeric."""
    slug = name.lower().replace(" ", "-")
    slug = re.sub(r"[^a-z0-9-]", "", slug)
    slug = re.sub(r"-+", "-", slug).strip("-")
    return slug

roadmap_path = Path("02-memory/roadmap.yaml")
if roadmap_path.exists():
    with open(roadmap_path) as f:
        roadmap = yaml.safe_load(f)

    project_name = "Build Name"    # The name being created
    project_id = "XX-build-name"   # The generated build ID

    for item in roadmap.get("items", []):
        if item.get("project_id"):
            continue  # Already linked
        # Match: slugified item name contained in build ID slug
        item_slug = slugify(item["name"])
        build_slug = slugify(project_id)
        if item_slug in build_slug:
            # LINK: Set project_id on roadmap item
            item["project_id"] = project_id
            with open(roadmap_path, "w") as f:
                yaml.dump(roadmap, f, default_flow_style=False, allow_unicode=True)
            print(f"Linked roadmap item '{item['name']}' to build '{project_id}'")
            break
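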
CRITICAL - Slug Matching Rules:
Discovery follows ASK → RESEARCH → ASK (+ optional loop) pattern:
Phase 2a: Discovery Questions (understand the project)
↓
Phase 2b: Active Research (targeted by answers)
↓
Phase 2c: Informed Follow-ups + Optional Follow-up Research
↓ (if gaps found)
Loop back to 2b for deeper research
Check _type.yaml for discovery method:
# Update resume-context.md with current_skill
# Load skill normally:
beam-next-load --skill {skill-name}
# Skill runs its workflow and writes to build's 02-discovery.md
# Clear current_skill when complete
| Type | Skill to Load |
|---|---|
| integration | add-integration |
| research | create-research-build |
| skill | create-skill |
Note: Skill-based types skip the 3-phase flow - their skills handle discovery.
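The routing decision in the table above reduces to a lookup (the route strings here are illustrative, not a real API):

```python
# Sketch: route discovery based on build type (per the table above).
# Skill-based types delegate discovery to their skill; everything else
# runs the inline 3-phase flow.
SKILL_ROUTES = {
    "integration": "add-integration",
    "research": "create-research-build",
    "skill": "create-skill",
}

def discovery_route(build_type: str) -> str:
    skill = SKILL_ROUTES.get(build_type)
    return f"load skill: {skill}" if skill else "inline 3-phase discovery"
```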
Phase 2a: Discovery Questions
Ask type-specific discovery questions FIRST to understand what the user wants:
Let me understand what you're building:
1. **What are you building?**
- Describe the feature/system in 1-2 sentences
2. **What problem does this solve?**
- Why is this needed? What pain point does it address?
3. **Who/what will use this?**
- Users? Other systems? Internal tools?
4. **Any constraints or requirements?**
- Must integrate with existing systems?
- Performance requirements?
- Technology preferences or restrictions?
5. **What does success look like?**
- How will you know this project is complete?
AI then uses these answers to target research (see active-discovery-guide.md).
Phase 2b: Active Research
AI-driven research BASED ON discovery answers from Phase 2a:
Step 1: Determine Search Targets from Answers
Extract search targets from user's discovery answers:
Step 2: Assess Complexity
Simple (single file/function) → 0 agents, direct Grep/Glob
Medium (component/feature) → 1-2 targeted agents
Complex (multi-component) → 3-5 specialized agents
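One way to encode the complexity tiers above as an agent budget (a sketch; the tier names and ranges come straight from the list above):

```python
# Sketch: map assessed complexity to a (min, max) subagent count.
COMPLEXITY_AGENTS = {
    "simple": (0, 0),    # direct Grep/Glob, no subagents
    "medium": (1, 2),    # targeted agents
    "complex": (3, 5),   # specialized agents
}

def agent_budget(complexity: str) -> tuple:
    # Default to the medium tier if the assessment is ambiguous.
    return COMPLEXITY_AGENTS.get(complexity, (1, 2))
```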
Step 3: Dynamic Subagent Exploration (for medium/complex)
AI determines BOTH number AND purpose of agents based on:
# Spawn ALL agents in a single message for parallel execution
for purpose in agent_purposes:
Task(
subagent_type="Explore",
prompt=generate_agent_prompt(purpose, build_description, discovery_answers),
description=f"Exploring {purpose['focus_area']}"
)
Step 4: Related Builds Check
03-projects/active/ and 03-projects/complete/ for similar names/purposes
Step 5: Related Skills Check
01-skills/ and 00-system/skills/ for reusable skills
Step 6: Integration Check (if relevant)
02-memory/integrations/ for relevant integrations
Step 7: Web Search (for best practices)
Step 8: Present Consolidated Findings
Display research results BEFORE follow-up questions:
RESEARCH COMPLETE
----------------------------------------------------
Based on your description of {build_summary}:
Codebase: Found {N} related files in {areas}
- {file1} - {why relevant}
- {file2} - {why relevant}
Related Work: {N} builds, {N} skills could be affected
- {project/skill} - {relationship}
Web Research: {N} best practices found (if performed)
- {practice} - {source}
I have some follow-up questions based on what I found...
Phase 2c: Informed Follow-ups + Optional Follow-up Research
Ask follow-up questions INFORMED by research findings:
Given I found {finding from research}:
- {Question about how to handle this}
Given {another finding}:
- {Another question informed by what AI discovered}
Example informed questions:
Follow-up Research (if needed):
If user's answers reveal new areas to explore:
Your answer mentions {new_area} that I didn't search earlier.
Should I research {new_area} before we continue?
If yes → Loop back to Phase 2b with targeted search
If no → Continue to mental models
Write all findings to build's 02-discovery.md (not just chat output).
CRITICAL: Discovery phases MUST complete before mental models. Max 2 research loops to avoid infinite discovery.
DO NOT SKIP THIS PHASE. Mental models ensure build quality.
# List available mental models
beam-next-mental-models --format brief
Based on discovery findings, select 2-3 relevant models:
| Build Type | Recommended Models |
|---|---|
| Build/Skill | First Principles, Pre-Mortem, Inversion |
| Integration | Pre-Mortem, Systems Thinking |
| Research | First Principles, Socratic Method |
| Strategy | SWOT, Pre-Mortem, Second-Order Thinking |
| Content | Jobs-to-be-Done, First Principles |
| Process | Systems Thinking, Inversion |
Present options to user:
Based on your [build_type] build, I recommend these mental models:
1. [Model A] - [Why relevant to this project]
2. [Model B] - [Why relevant to this project]
Which would you like to apply? (or suggest others)
Load selected model file and apply questions:
# Read model file
cat 00-system/mental-models/models/{category}/{model-slug}.md
Key Questions (informed by discovery):
Update 03-plan.md with:
Before proceeding to Phase 4/5:
IF gaps exist AND rediscovery_round < 2:
→ Increment rediscovery_round in resume-context.md
→ Focus discovery on identified gaps
→ Return to Phase 3
ELSE IF gaps exist AND rediscovery_round >= 2:
→ Log unknowns in plan.md "Open Questions" section
→ Note: "Proceeding with known unknowns after 2 rounds"
→ Continue to Phase 5
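The gap gate above can be sketched as a single decision function (illustrative names; the two-round cap matches the rule above):

```python
# Sketch of the rediscovery gate: at most two extra rounds, then proceed
# with known unknowns logged as Open Questions.
def next_step(gaps_exist: bool, rediscovery_round: int) -> str:
    if gaps_exist and rediscovery_round < 2:
        return "return-to-phase-3"  # caller increments rediscovery_round
    if gaps_exist:
        return "log-open-questions-and-continue"
    return "continue-to-phase-5"
```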
DO NOT leave templates with placeholder text. All files must have real content.
The plan.md file MUST contain:
## Approach
[Actual strategy description - NOT placeholder text]
## Key Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| [Real decision] | [Real choice] | [Real rationale] |
## Success Criteria (from Mental Models)
- [ ] [Specific, measurable criterion 1]
- [ ] [Specific, measurable criterion 2]
## Risks & Mitigations (from Mental Models)
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [Real risk] | [H/M/L] | [H/M/L] | [Real mitigation] |
## Dependencies (from Discovery)
- [Real dependencies from 02-discovery.md]
VERIFY: No {{placeholder}} or [To be filled] text remains.
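One way to implement the VERIFY step (the exact placeholder patterns are assumptions drawn from the checks in this document):

```python
# Sketch: flag leftover template placeholders like {{name}}, [To be filled],
# or [Step 1] before declaring a planning file complete.
import re

PLACEHOLDER_RE = re.compile(r"\{\{[^}]*\}\}|\[To be filled\]|\[Step \d+\]")

def has_placeholders(text: str) -> bool:
    return bool(PLACEHOLDER_RE.search(text))
```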
Replace generic phases with concrete tasks:
## Phase 2: [Actual Phase Name]
- [ ] [Concrete task with expected output]
- [ ] [Concrete task with expected output]
- [ ] **CHECKPOINT**: Verify [what] works
## Phase 3: [Actual Phase Name]
- [ ] [Concrete task]
- [ ]* [Optional task] (marked with *)
VERIFY: No [Step 1] or [Name this phase] text remains.
current_phase: "execution"
next_action: "execute-project"
discovery_complete: true
files_to_load:
- "01-planning/01-overview.md"
- "01-planning/02-discovery.md"
- "01-planning/03-plan.md"
- "01-planning/04-steps.md"
Before declaring planning complete:
Build Ready for Execution
PreCompact hook automatically syncs these fields:
session_ids - List of all sessions that touched this project
last_updated - Timestamp of last activity
total_tasks, tasks_completed - Checkbox counts from 04-steps.md
current_section, current_task - Position tracking
current_phase - Detected from Phase 1 completion
next_action - "plan-project" or "execute-project"
Update these fields at session end:
continue_at - Specific pointer for next agent:
continue_at: "02-discovery.md Phase 2" # or specific line reference
blockers - List any blockers:
blockers: [] # or ["Waiting for user input on scope"]
files_to_load - Context files (AUTO-LOADED in COMPACT mode):
files_to_load:
- "01-planning/02-discovery.md" # Research findings
- "01-planning/03-plan.md" # Approach decisions
- "02-resources/decisions.md" # Key decisions made
Pattern: Write context to FILES → Add to files_to_load
02-resources/decisions.md → Add to list
Context for Next Agent - Prose that POINTS to files:
### Latest Session (YYYY-MM-DD)
**Completed this session:**
- [x] Created 02-discovery.md with 12 problems
- [x] Applied Pre-Mortem mental model
**Key files:**
- See `decisions.md` for approach rationale
**Next steps:**
1. Continue at `continue_at` location
Philosophy: Don't capture context in prose. Write it to FILES, add to
files_to_load. Prose just POINTS to files.
| Field | Values | Description |
|---|---|---|
| current_phase | planning, execution, complete | Build lifecycle stage |
| next_action | plan-project, execute-project | Which skill to load on resume |
| build_type | build, integration, research, strategy, content, process, skill, generic | Type from init |
Planning Start:
current_phase: "planning"
next_action: "plan-project"
files_to_load: [overview, discovery, plan, steps]
discovery_complete: false
Planning Complete (ready for execution):
current_phase: "execution"
next_action: "execute-project"
files_to_load: [discovery, plan, steps] # drop overview
discovery_complete: true
Build Complete:
current_phase: "archive"
next_action: "execute-project" # or archive-project
For build and skill project types, discovery.md includes EARS-formatted requirements:
| Pattern | Template |
|---|---|
| Ubiquitous | THE <system> SHALL <response> |
| Event-driven | WHEN <trigger>, THE <system> SHALL <response> |
| State-driven | WHILE <condition>, THE <system> SHALL <response> |
| Unwanted | IF <condition>, THEN THE <system> SHALL <response> |
| Optional | WHERE <option>, THE <system> SHALL <response> |
| Complex | [WHERE] [WHILE] [WHEN/IF] THE <system> SHALL <response> |
See: references/ears-patterns.md for full guide.
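A hypothetical example of a few patterns applied to this project system (illustrative only; see the reference guide for the authoritative templates):

```
WHEN the user says "plan build", THE router SHALL detect the build type semantically.
IF 03-projects/ does not exist, THEN THE router SHALL enter WORKSPACE_SETUP mode.
WHILE discovery is incomplete, THE router SHALL NOT proceed to mental models.
```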
--type flag
beam-next-init-project "Name" --type build --path 03-projects/active
types/
├── build/ # Inline discovery, EARS requirements
├── integration/ # Routes to add-integration skill
├── research/ # Routes to create-research-build skill
├── strategy/ # Inline discovery, decision frameworks
├── content/ # Inline discovery, creative brief
├── process/ # Inline discovery, workflow optimization
├── skill/ # Routes to create-skill skill, EARS requirements
└── generic/ # Minimal inline discovery
Each type folder contains:
_type.yaml - Type configuration and description
overview.md - Overview template
discovery.md - Discovery questions/structure
plan.md - Plan template
steps.md - Steps template
Why Mandatory Router?
Why Discovery BEFORE Mental Models?
Why 8 Types?
Why Skills Invoked Normally?
Why Separate Sessions?
Integration:
Remember: This is a COLLABORATIVE DESIGN SESSION with proper discovery and mental model application. The router ensures every project gets the depth it deserves!