Write production-ready, A+ quality specifications for software releases. Use when planning a release, designing system architecture, or commissioning work to an autonomous agent. Trigger phrases: 'write a release spec', 'spec this feature', 'create a release specification', 'design the architecture', 'ground this in the codebase'.
OpenClaw Integration: This skill is invoked by the Dojo Genesis plugin via
`/dojo spec` or `/dojo run release-specification`. The agent receives project context automatically via the `before_agent_start` hook. Use `dojo_get_context` for full state, `dojo_save_artifact` to persist outputs, and `dojo_update_state` to record phase transitions and decisions.
**Version:** 2.1
**Created:** 2026-02-02
**Updated:** 2026-02-07
**Author:** Manus AI
**Purpose:** Write production-ready, A+ quality specifications for software releases
A specification is not documentation—it is a contract. It is a formal agreement between the architect and the builder about what will be created, how it will work, and what success looks like. A vague specification invites confusion, rework, and failure. A rigorous specification is an act of respect for the builder's time and an investment in quality.
This skill transforms specification writing from a creative exercise into a disciplined engineering practice. By following this structure, we create specifications that are complete, unambiguous, and directly implementable.
The standard: 111/100 (A+). Good enough is not good enough.
Use this skill when:
- Planning a new software release
- Designing system architecture for a feature
- Commissioning implementation work to an autonomous agent
Do not use this skill for:
Before starting, determine the right spec format:
Use the Full Template (Section IV) when:
Use the Lean Format when:
Lean Format structure: Route layouts, component tables, behavior lists. No preamble. "Sonnet level chunks" — direct, precise, implementable.
Rule: The receiving agent should not default to the full template. Match format to scope.
Before writing, immerse yourself in the problem space:
- Run `/repo-context-sync` to understand the current architecture
- Run `/strategic-scout` if you're choosing between multiple approaches

Output: A clear understanding of the problem, the current state, and the desired future state.
Before writing the spec, measure the codebase. Specs describe the delta from measured reality, not from assumptions.
Run these audits against the target codebase:
```bash
# Test coverage: number of test files
find . -name "*.test.*" | wc -l
# Accessibility: aria attribute / role usage
grep -r "aria-\|role=" --include="*.tsx" . | wc -l
# Error boundaries
grep -r "ErrorBoundary" --include="*.tsx" . | wc -l
# Memoization usage
grep -r "React.memo\|useMemo\|useCallback" --include="*.tsx" . | wc -l
# Code splitting
grep -r "React.lazy" --include="*.tsx" . | wc -l
# Source file count
find src -name "*.ts" -o -name "*.tsx" | wc -l
```

Include the results as a "Current State" section at the top of the spec. The spec then describes what changes FROM this measured baseline.
Key triggers: "codebase audit", "audit before spec", "current state", "ground the spec"
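The audit commands above can be wrapped in a small script that emits the "Current State" section directly. This is an illustrative sketch, not part of the skill itself: the patterns mirror the audit commands and assume a TypeScript/React repository, so adapt them to the target codebase's layout.

```shell
#!/usr/bin/env sh
# Sketch: aggregate audit counts into a "Current State" markdown snippet.
# Patterns mirror the audit commands above; adjust paths for your repo.
tests=$(find . -name "*.test.*" 2>/dev/null | wc -l)
aria=$(grep -r "aria-\|role=" --include="*.tsx" . 2>/dev/null | wc -l)
boundaries=$(grep -r "ErrorBoundary" --include="*.tsx" . 2>/dev/null | wc -l)
memo=$(grep -r "React.memo\|useMemo\|useCallback" --include="*.tsx" . 2>/dev/null | wc -l)
lazy=$(grep -r "React.lazy" --include="*.tsx" . 2>/dev/null | wc -l)
sources=$(find src -name "*.ts" -o -name "*.tsx" 2>/dev/null | wc -l)

# Emit the section in the template's format, ready to paste into the spec.
printf '## 1.5 Current State (Audit Results)\n'
printf '**Testing:** %s test files\n' "$tests"
printf '**Accessibility:** %s aria/role instances, %s error boundaries\n' "$aria" "$boundaries"
printf '**Performance:** %s memoization instances, %s code splitting instances\n' "$memo" "$lazy"
printf '**File Structure:** %s source files\n' "$sources"
```

Redirect the output into the draft spec (for example, append it to the working file) so the baseline is captured verbatim rather than retyped.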
Start with the "why" before the "what":
Output: A clear, inspiring vision that motivates the work and sets boundaries.
This is the heart of the specification:
Output: A complete technical design that a skilled developer could implement without asking questions.
Break the work into manageable phases:
Output: A realistic timeline with clear milestones and success criteria for each phase.
Anticipate what could go wrong:
Output: A comprehensive risk assessment and contingency plan.
Before finalizing:
- Save the final spec to `docs/vX.X.X/` with a clear filename

Output: A finalized, A+ quality specification ready for implementation.
## IV. The Full Template

# [Project Name] v[X.X.X]: [Memorable Tagline]
**Author:** [Your Name]
**Status:** [Draft | Final | Approved]
**Created:** [Date]
**Grounded In:** [What this builds on - previous versions, research, feedback]
---
## 1. Vision
> A single, compelling sentence that captures the essence of this release.
**The Core Insight:**
[2-3 paragraphs explaining WHY this release matters, what problem it solves, and how it advances the overall vision]
**What Makes This Different:**
[2-3 paragraphs explaining what makes this approach unique, innovative, or better than alternatives]
---
## 1.5 Current State (Audit Results)
> Include measured codebase metrics here. This grounds the spec in reality.
**Testing:** [X] test files, [framework], [coverage tool]
**Accessibility:** [X] aria/role instances, [X] error boundaries
**Performance:** [X] memoization instances, [X] code splitting instances
**Dependencies:** [list key deps from package.json]
**File Structure:** [X] source files, [X] routes, [X] shared components
---
## 2. Goals & Success Criteria
**Primary Goals:**
1. [Specific, measurable goal]
2. [Specific, measurable goal]
3. [Specific, measurable goal]
**Success Criteria:**
- ✅ [Concrete, testable criterion]
- ✅ [Concrete, testable criterion]
- ✅ [Concrete, testable criterion]
**Non-Goals (Out of Scope):**
- ❌ [What this release explicitly does NOT include]
- ❌ [What is deferred to future versions]
---
## 3. Technical Architecture
### 3.1 System Overview
[High-level diagram or description of how components fit together]
**Key Components:**
1. **[Component Name]** - [Purpose and responsibility]
2. **[Component Name]** - [Purpose and responsibility]
3. **[Component Name]** - [Purpose and responsibility]
### 3.2 [Feature/Component 1] - Detailed Design
**Purpose:** [What this component does and why it's needed]
**Architecture:**
[Detailed explanation with diagrams if helpful]
**Backend Implementation (Go):**
```go
// Complete, production-ready code example
package [package_name]

type [StructName] struct {
	Field1 string `json:"field1"`
	Field2 int    `json:"field2"`
}

func (s *[StructName]) [MethodName]() error {
	// Implementation with error handling
	return nil
}
```

**Frontend Implementation (React/TypeScript):**
```tsx
// Complete, production-ready code example
interface [InterfaceName] {
  field1: string;
  field2: number;
}

export const [ComponentName]: React.FC<Props> = ({ prop1, prop2 }) => {
  const [state, setState] = useState<StateType>(initialState);

  // Implementation
  return (
    <div className="...">
      {/* JSX */}
    </div>
  );
};
```
**API Endpoints:**
| Method | Endpoint | Request | Response | Purpose |
|---|---|---|---|---|
| POST | /api/v1/[resource] | { field: value } | { id: string } | [Description] |
| GET | /api/v1/[resource]/:id | - | { data: object } | [Description] |
**Database Schema (if applicable):**

```sql
CREATE TABLE [table_name] (
    id TEXT PRIMARY KEY,
    field1 TEXT NOT NULL,
    field2 INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_[field] ON [table_name]([field]);
```
**Integration Points:**
[How this component connects to existing systems: APIs, props, state, events]

**Performance Considerations:**
[Expected load, caching strategy, optimization notes]
[Repeat structure for each major component]
## 4. Implementation Plan

**Timeline:** [X] weeks
| Phase | Duration | Focus | Deliverables |
|---|---|---|---|
| 1 | Week 1-2 | [Focus area] | [Specific deliverables] |
| 2 | Week 3-4 | [Focus area] | [Specific deliverables] |
| 3 | Week 5-6 | [Focus area] | [Specific deliverables] |
**Week 1: [Focus]**
- [Specific, actionable tasks]

**Success Criteria:** [What "done" looks like for this week]

**Week 2: [Focus]** [Repeat structure]

[Continue for all weeks]
## 5. Dependencies

**Required Before Starting:**
- [Prerequisite work or decisions]

**Parallel Work:**
- [Work that can proceed alongside this release]

**Blocking Dependencies:**
- [External dependencies that could block progress]
## 6. Testing Strategy

**Unit Tests:**
- [Coverage targets and key cases]

**Integration Tests:**
- [Cross-component scenarios]

**E2E Tests:**
- [Critical user flows]

**Performance Tests:**
- [Load and latency targets]

**Manual QA:**
- [Scenarios requiring human judgment]
## 7. Risk Assessment & Mitigation

| Risk | Likelihood | Impact | Mitigation Strategy |
|---|---|---|---|
| [Risk description] | High/Med/Low | High/Med/Low | [Specific mitigation] |
| [Risk description] | High/Med/Low | High/Med/Low | [Specific mitigation] |
## 8. Rollout & Rollback

**Feature Flags:**
- `[flag_name]`: Controls [feature], default: `false`

**Rollback Procedure:**
- [Steps to revert if the release fails]

**Monitoring & Alerts:**
- [Metrics to watch and alert thresholds]
## 9. Documentation Requirements

**User-Facing Documentation:**
- [Guides or help content to create or update]

**Developer Documentation:**
- [API docs and architecture notes to update]

**Release Notes:**
- [Highlights for the changelog]
## 10. Future Considerations

**v[X+1] Candidates:**
- [Deferred features and follow-up work]
---
## V. Best Practices
### 1. Start with Vision, Not Features
**Why:** Features without vision are just a list of tasks. Vision provides meaning and direction.
**How:** Write the vision statement first. If you can't articulate why this release matters in one sentence, you're not ready to write the spec.
---
### 2. Write Production-Ready Code Examples
**Why:** Pseudocode leaves too much room for interpretation. Real code is a contract.
**How:** Write code examples that could be committed to the repository. Include imports, error handling, and types.
---
### 3. Use Realistic Timelines Based on Complexity
**Why:** Underestimating timelines leads to rushed work and technical debt.
**How:** Use past releases as benchmarks. A 1,000-line feature typically takes 1-2 weeks, not 2 days.
---
### 4. Document Integration Points Explicitly
**Why:** Most bugs happen at the boundaries between systems.
**How:** For every new component, explicitly document how it connects to existing systems (APIs, props, state, events).
---
### 5. Include Risk Mitigation from the Start
**Why:** Identifying risks after implementation is too late.
**How:** During the architecture phase, ask "What could go wrong?" and document mitigation strategies.
---
### 6. Make Success Criteria Binary and Testable
**Why:** Ambiguous success criteria lead to scope creep and endless iteration.
**How:** Every success criterion should be a yes/no question. "The user can create a new project" is testable. "The UI is intuitive" is not.
---
### 7. Reference Existing Patterns
**Why:** Consistency reduces cognitive load and makes the codebase easier to maintain.
**How:** When designing a new component, reference an existing component that follows the same pattern. "Follow the structure of `ComponentX`."
---
## VI. Quality Checklist
Before finalizing a specification, verify:
### Vision & Goals (3 questions)
- [ ] Is the vision statement a single, compelling sentence?
- [ ] Are the goals specific, measurable, and achievable?
- [ ] Are non-goals explicitly stated to prevent scope creep?
### Technical Architecture (4 questions)
- [ ] Does every major component have a detailed design with code examples?
- [ ] Are all API endpoints fully specified (method, path, request, response)?
- [ ] Are integration points with existing systems documented?
- [ ] Are performance considerations addressed?
### Implementation Plan (3 questions)
- [ ] Is the timeline realistic based on the complexity of the work?
- [ ] Does the week-by-week breakdown include specific, actionable tasks?
- [ ] Is the testing strategy comprehensive (unit, integration, E2E, performance)?
### Risk & Documentation (3 questions)
- [ ] Have major risks been identified with mitigation strategies?
- [ ] Is there a rollback procedure in case of failure?
- [ ] Are user and developer documentation needs documented?
**If you cannot answer "yes" to all 13 questions, the specification is not ready.**
---
## VII. Example: Dojo Genesis v0.0.23
**The Problem:** Users needed a way to calibrate the agent's behavior and communication style to match their preferences.
**The Vision:** "The Collaborative Calibration" - A release that transforms the agent from a fixed personality into an adaptive partner.
**What Made It A+:**
1. **Clear Vision:** The tagline "Collaborative Calibration" immediately communicated the essence
2. **Comprehensive Architecture:** Detailed design of the calibration UI, backend storage, and agent integration
3. **Production-Ready Code:** Complete Go and TypeScript examples that could be implemented directly
4. **Realistic Timeline:** 3-week phased approach with weekly milestones
5. **Risk Mitigation:** Identified the risk of "calibration drift" and defined a validation mechanism
**Key Decisions:**
- Store calibration preferences in SQLite for local-first architecture
- Use a multi-dimensional calibration model (tone, verbosity, formality)
- Implement real-time preview of calibration changes
**Outcome:** The specification was commissioned to Claude Code and implemented in 2.5 weeks with minimal rework.
---
## VIII. Common Pitfalls to Avoid
❌ **Vague Goals:** "Improve user experience" → ✅ "Reduce context loading time by 50%"
❌ **Missing Code Examples:** High-level description only → ✅ Complete, runnable code
❌ **Unrealistic Timelines:** "2 days for full backend" → ✅ "2 weeks with phased approach"
❌ **No Risk Assessment:** Assumes everything will work → ✅ Identifies risks and mitigations
❌ **Incomplete Testing:** "We'll test it" → ✅ Specific test cases and coverage targets
❌ **No Integration Points:** Treats feature as isolated → ✅ Documents how it connects to existing system
---
## IX. Related Skills
- **`strategic-to-tactical-workflow`** - The complete workflow from scouting to implementation (this skill is Phase 6)
- **`frontend-from-backend`** - For frontend specs that need deep backend grounding
- **`implementation-prompt`** - For converting this spec into implementation prompts
- **`parallel-tracks`** - For splitting large specs into parallel execution tracks
- **`repo-context-sync`** - For gathering codebase context before writing specs
- **`memory-garden`** - For documenting learnings from implementation
- **`seed-extraction`** - For extracting reusable patterns from specs
---
## X. Skill Metadata
**Token Savings:** ~10,000-15,000 tokens per specification (no need to re-read old specs for patterns)
**Quality Impact:** Ensures consistency across all specifications
**Maintenance:** Update when new patterns emerge from successful releases
**When to Update This Skill:**
- After completing 3-5 new specifications (to incorporate new patterns)
- When a specification leads to significant rework (to identify what was missing)
- When commissioning to a new type of agent (to adapt the template)
---
**Last Updated:** 2026-02-07
**Maintained By:** Manus AI
**Status:** Active
---
## OpenClaw Tool Integration
When running inside the Dojo Genesis plugin:
1. **Start** by calling `dojo_get_context` to retrieve full project state, history, and artifacts
2. **During** the skill execution, follow the workflow steps as documented above
3. **Save** all outputs using `dojo_save_artifact` with appropriate artifact types:
- `scout` → type: "scout-report"
- `spec` → type: "specification"
- `tracks` → type: "track-decomposition"
- `commission` → type: "implementation-prompt"
- `retro` → type: "retrospective"
4. **Update state** by calling `dojo_update_state` to:
- Record the skill execution in activity history
- Advance the project phase if appropriate
- Log any decisions made during the skill run