Initializes an AI development session by reading workflow guides, developer identity, git status, active tasks, and project guidelines from .trellis/. Classifies incoming tasks and routes to brainstorm, direct edit, or task workflow. Use when beginning a new coding session, resuming work, starting a new task, or re-establishing project context.
Initialize your AI development session and begin working on tasks.
| Marker | Meaning | Executor |
|---|---|---|
| [AI] | Bash scripts or Task calls executed by the AI | You (AI) |
| [USER] | Skills executed by the user | User |
[AI] First, read the workflow guide to understand the development process:
cat .trellis/workflow.md
Follow the instructions in workflow.md - it contains:
python3 ./.trellis/scripts/get_context.py
This shows: developer identity, git status, current task (if any), active tasks.
python3 ./.trellis/scripts/get_context.py --mode packages
This shows available packages and their spec layers. Read the relevant spec indexes:
cat .trellis/spec/<package>/<layer>/index.md # Package-specific guidelines
cat .trellis/spec/guides/index.md # Thinking guides (always read)
Important: The index files are navigation — they list the actual guideline files (e.g., error-handling.md, conventions.md, mock-strategies.md). At this step, just read the indexes to understand what's available. When you start actual development, you MUST go back and read the specific guideline files relevant to your task, as listed in the index's Pre-Development Checklist.
Report what you learned and ask: "What would you like to work on?"
When user describes a task, classify it:
| Type | Criteria | Workflow |
|---|---|---|
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change | Direct Edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Implement |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |
Trivial/Simple indicators: clear goal, well-defined scope, 1-2 files touched.
Complex indicators: vague goal, multiple files, architectural decisions required.
If in doubt, use Brainstorm + Task Workflow.
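Purely as an illustration, the routing in the table above could be sketched as a heuristic. The keyword triggers and file-count thresholds below are assumptions for the sketch, not part of the workflow:

```python
def classify_task(description: str, estimated_files: int) -> str:
    """Illustrative routing heuristic mirroring the classification table."""
    text = description.lower()
    # Question: user asks about code, architecture, or how something works.
    if text.rstrip().endswith("?") or text.startswith(("how", "why", "what")):
        return "answer-directly"
    # Trivial Fix: single-file, typo/comment-level change.
    if estimated_files <= 1 and any(k in text for k in ("typo", "comment", "rename")):
        return "direct-edit"
    # Simple Task: clear goal, 1-2 files, no architectural keywords.
    if estimated_files <= 2 and "refactor" not in text and "architecture" not in text:
        return "quick-confirm"
    # Complex Task (also the default when in doubt).
    return "brainstorm"
```

The final branch encodes the rule above: when in doubt, fall through to Brainstorm + Task Workflow.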
The Task Workflow ensures specs are injected into agent context, resulting in higher-quality code. The overhead is minimal, but the benefit is significant.
For questions or trivial fixes, work directly:
Make the change, then run $finish-work before committing.

For simple, well-defined tasks, briefly confirm your understanding, then enter the Task Workflow at Step 1.
For complex or vague tasks, automatically start the brainstorm process — do NOT skip directly to implementation.
See the $brainstorm skill for the full process. Summary:
The brainstorm ends with a confirmed prd.md in the task directory.

Subtask Decomposition: If the brainstorm reveals multiple independent work items, consider creating subtasks using the --parent flag or the add-subtask command. See the brainstorm skill's Step 8 for details.
Why this workflow?
From Brainstorm (Complex Task):
PRD confirmed → Research → Configure Context → Activate → Implement → Check → Complete
From Simple Task:
Confirm → Create Task → Write PRD → Research → Configure Context → Activate → Implement → Check → Complete
Key principle: Research happens AFTER requirements are clear (PRD exists).
PRD and task directory already exist from brainstorm. Skip Steps 1-3 and continue from Step 4.
Step 1: Confirm Understanding [AI]
Quick confirm:
Step 2: Create Task Directory [AI]
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title>" --slug <name>)
Step 3: Write PRD [AI]
Create prd.md in the task directory with:
# <Task Title>
## Goal
<What we're trying to achieve>
## Requirements
- <Requirement 1>
- <Requirement 2>
## Acceptance Criteria
- [ ] <Criterion 1>
- [ ] <Criterion 2>
## Technical Notes
<Any technical decisions or constraints>
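The template above can also be rendered programmatically. A minimal sketch, assuming the task directory from Step 2; the `write_prd` helper is hypothetical, not one of the .trellis scripts:

```python
from pathlib import Path

def write_prd(task_dir: str, title: str, goal: str,
              requirements: list[str], criteria: list[str], notes: str = "") -> Path:
    """Render the Step 3 prd.md template and write it into the task directory."""
    lines = [f"# {title}", "", "## Goal", goal, "", "## Requirements"]
    lines += [f"- {r}" for r in requirements]
    lines += ["", "## Acceptance Criteria"]
    lines += [f"- [ ] {c}" for c in criteria]   # unchecked acceptance boxes
    lines += ["", "## Technical Notes", notes or "None"]
    path = Path(task_dir) / "prd.md"
    path.write_text("\n".join(lines) + "\n")
    return path
```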
Both paths converge here. PRD and task directory must exist before proceeding.
Step 4: Code-Spec Depth Check [AI]
If the task touches infra or cross-layer contracts, do not start implementation until code-spec depth is defined.
Trigger this requirement when the change includes any of:
Must-have before proceeding:
Step 5: Research the Codebase [AI]
Based on the confirmed PRD, call Research Agent to find relevant specs and patterns:
Task(
subagent_type: "research",
prompt: "Analyze the codebase for this task:
Task: <goal from PRD>
Type: <frontend/backend/fullstack>
Please find:
1. Relevant spec files in .trellis/spec/
2. Existing code patterns to follow (find 2-3 examples)
3. Files that will likely need modification
Output:
## Relevant Specs
- <path>: <why it's relevant>
## Code Patterns Found
- <pattern>: <example file path>
## Files to Modify
- <path>: <what change>",
model: "opus"
)
Step 6: Configure Context [AI]
Initialize default context:
python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>
# type: backend | frontend | fullstack
Add specs found by Research Agent:
# For each relevant spec and code pattern:
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"
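A sketch of turning the Research Agent's findings into these invocations. The `[{"path": ..., "reason": ...}]` shape of the findings and the `add_context_commands` helper are assumptions for illustration, not the agent's documented output format:

```python
def add_context_commands(task_dir: str, findings: list[dict]) -> list[str]:
    """Build the Step 6 add-context command lines for each relevant spec/pattern.

    Each finding is added to both the implement and check contexts,
    matching the two commands shown above.
    """
    commands = []
    for finding in findings:
        for phase in ("implement", "check"):
            commands.append(
                'python3 ./.trellis/scripts/task.py add-context '
                f'"{task_dir}" {phase} "{finding["path"]}" "{finding["reason"]}"'
            )
    return commands
```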
Step 7: Activate Task [AI]
python3 ./.trellis/scripts/task.py start "$TASK_DIR"
This sets .current-task so hooks can inject context.
Step 8: Implement [AI]
Call Implement Agent (specs are auto-injected by hook):
Task(
subagent_type: "implement",
prompt: "Implement the task described in prd.md.
Follow all specs that have been injected into your context.
Run lint and typecheck before finishing.",
model: "opus"
)
Step 9: Check Quality [AI]
Call Check Agent (specs are auto-injected by hook):
Task(
subagent_type: "check",
prompt: "Review all code changes against the specs.
Fix any issues you find directly.
Ensure lint and typecheck pass.",
model: "opus"
)
Step 10: Complete [AI]
Run $record-session to record this session.

If get_context.py shows a current task:
- Read prd.md to understand the goal
- Check task.json for current status and phase
- Ask the user whether they want to resume it

If yes, resume from the appropriate step (usually Step 7 or 8).
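The resume decision can be sketched as follows. The `phase` field and its values are assumptions about task.json's schema, so check the actual file before relying on them; the step numbers correspond to the workflow above:

```python
import json
from pathlib import Path

# Assumed mapping from a task's recorded phase to the step to resume at.
RESUME_STEP = {
    "context-configured": 7,   # Activate Task
    "active": 8,               # Implement
    "implemented": 9,          # Check Quality
    "checked": 10,             # Complete
}

def resume_step(task_dir: str) -> int:
    """Read task.json and pick the workflow step to resume from."""
    data = json.loads((Path(task_dir) / "task.json").read_text())
    return RESUME_STEP.get(data.get("phase"), 8)  # default: re-run Implement
```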
[USER]
| Command | When to Use |
|---|---|
| $start | Begin a session (this skill) |
| $parallel | Complex tasks needing isolated worktree |
| $finish-work | Before committing changes |
| $record-session | After completing a task |
[AI]
| Script | Purpose |
|---|---|
| python3 ./.trellis/scripts/get_context.py | Get session context |
| python3 ./.trellis/scripts/task.py create | Create task directory |
| python3 ./.trellis/scripts/task.py init-context | Initialize jsonl files |
| python3 ./.trellis/scripts/task.py add-context | Add spec to jsonl |
| python3 ./.trellis/scripts/task.py start | Set current task |
| python3 ./.trellis/scripts/task.py finish | Clear current task |
| python3 ./.trellis/scripts/task.py archive | Archive completed task |
[AI]
| Agent | Purpose | Hook Injection |
|---|---|---|
| research | Analyze codebase | No (reads directly) |
| implement | Write code | Yes (implement.jsonl) |
| check | Review & fix | Yes (check.jsonl) |
| debug | Fix specific issues | Yes (debug.jsonl) |
Specs are injected, not remembered.
The Task Workflow ensures agents receive relevant specs automatically. This is more reliable than hoping the AI "remembers" conventions.
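To illustrate the principle, a hook's injection step might look roughly like this. The one-object-per-line jsonl shape with `path`/`reason` keys is an assumption inferred from the add-context arguments, not the actual hook implementation:

```python
import json
from pathlib import Path

def build_injected_context(jsonl_path: str) -> str:
    """Read an implement.jsonl/check.jsonl file and concatenate the
    referenced spec files into a context blob for an agent prompt.
    Illustrative only; the real hook may differ."""
    sections = []
    for line in Path(jsonl_path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        spec = Path(entry["path"])
        if spec.exists():
            # Label each spec with its path and the reason it was added.
            sections.append(f"## {entry['path']} ({entry['reason']})\n{spec.read_text()}")
    return "\n\n".join(sections)
```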