Create or update the feature specification from a natural language feature description.
$ARGUMENTS
You MUST consider the user input before proceeding (if not empty).
Check for extension hooks (before specification):
Check if .specify/extensions.yml exists in the project root.
If it exists, read it and look for entries under the hooks.before_specify key
If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
Filter out hooks where enabled is explicitly false. Treat hooks without an enabled field as enabled by default.
For each remaining hook, do not attempt to interpret or evaluate hook condition expressions:
- If the hook has no condition field, or it is null/empty, treat the hook as executable
- If the hook has a condition, skip the hook and leave condition evaluation to the HookExecutor implementation

For each executable hook, output the following based on its optional flag:
Optional hook (optional: true):
## Extension Hooks
**Optional Pre-Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
Mandatory hook (optional: false):
## Extension Hooks
**Automatic Pre-Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
Wait for the result of the hook command before proceeding to the Outline.
If no hooks are registered or .specify/extensions.yml does not exist, skip silently
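For illustration, a minimal `.specify/extensions.yml` matching the checks above might look like this (the extension and command names are invented for the example; only the `hooks.before_specify`, `enabled`, `optional`, and `condition` fields follow the rules stated):

```yaml
hooks:
  before_specify:
    - extension: security-review      # hypothetical extension name
      command: security.pre-spec      # surfaced to the user as /security.pre-spec
      description: Run a security pre-check before drafting the spec
      optional: true                  # optional hook: offered, not auto-executed
      # no "enabled" field: treated as enabled by default
    - extension: ticket-sync          # hypothetical extension name
      command: tickets.link
      optional: false                 # mandatory hook: emits EXECUTE_COMMAND
      enabled: false                  # explicitly disabled: filtered out
    - extension: audit-log            # hypothetical extension name
      command: audit.record
      optional: false
      condition: branch != "main"     # has a condition: skipped here, left to HookExecutor
```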
The text the user typed after /speckit.specify in the triggering message is the feature description. Assume you always have it available in this conversation even if $ARGUMENTS appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
Generate a concise short name (2-4 words) for the branch:
Create the feature branch by running the script with --short-name (and --json). In sequential mode, do NOT pass --number — the script auto-detects the next available number. In timestamp mode, the script generates a YYYYMMDD-HHMMSS prefix automatically:
Branch numbering mode: Before running the script, check if .specify/init-options.json exists and read the branch_numbering value.
If "timestamp", add --timestamp (Bash) or -Timestamp (PowerShell) to the script invocation
If "sequential" or absent, do not add any extra flag (default behavior)
Bash example: .specify/scripts/bash/create-new-feature.sh --json --short-name "user-auth" "Add user authentication"
Bash (timestamp): .specify/scripts/bash/create-new-feature.sh --json --timestamp --short-name "user-auth" "Add user authentication"
PowerShell example: .specify/scripts/powershell/create-new-feature.ps1 -Json -ShortName "user-auth" "Add user authentication"
PowerShell (timestamp): .specify/scripts/powershell/create-new-feature.ps1 -Json -Timestamp -ShortName "user-auth" "Add user authentication"
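The branch-numbering check above can be sketched in POSIX shell (a sketch, not the official tooling; it assumes `jq` is available for the JSON read, and the `branch_numbering` key name is taken from the text):

```shell
# Resolve the extra flag from .specify/init-options.json (default: sequential).
mode="sequential"
if [ -f .specify/init-options.json ]; then
  # jq is an assumption about available tooling; any JSON reader works here.
  mode=$(jq -r '.branch_numbering // "sequential"' .specify/init-options.json)
fi

flag=""
if [ "$mode" = "timestamp" ]; then
  flag="--timestamp"
fi

# $flag is then appended to the create-new-feature.sh invocation.
# With no init-options.json present, prints: mode=sequential flag=
echo "mode=$mode flag=$flag"
```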
IMPORTANT:
- Do NOT pass --number: the script determines the correct next number automatically
- Always pass the JSON flag (--json for Bash, -Json for PowerShell) so the output can be parsed reliably

NOTE: The script creates and checks out the new branch and initializes the spec file before writing.
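Since the JSON flag exists so the output can be parsed, a field-extraction sketch may help; the field names BRANCH_NAME and SPEC_FILE below are assumptions about the script's output shape, and the sed-based reader is a minimal stand-in for a real JSON parser:

```shell
# Sketch: pull string fields out of the script's one-line --json output.
# Sample output line; the field names are assumptions, not guaranteed.
json='{"BRANCH_NAME":"003-user-auth","SPEC_FILE":"specs/003-user-auth/spec.md"}'

json_field() {
  # $1: flat JSON string, $2: key with a plain string value (no escapes).
  printf '%s\n' "$1" | sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p"
}

branch=$(json_field "$json" BRANCH_NAME)
spec=$(json_field "$json" SPEC_FILE)
echo "branch=$branch spec=$spec"
```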
When creating this spec from a user prompt:
Examples of reasonable defaults (don't ask about these):
Success criteria must be:
Good examples:
Bad examples (implementation-focused):
Load .specify/templates/spec-template.md to understand required sections.
Follow this execution flow:
Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
Specification Quality Validation: After writing the initial spec, validate it against quality criteria:
a. Create Spec Quality Checklist: Generate a checklist file at FEATURE_DIR/checklists/requirements.md using the checklist template structure with these validation items:
# Specification Quality Checklist: [FEATURE NAME]
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: [DATE]
**Feature**: [Link to spec.md]
## Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed
## Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Success criteria are technology-agnostic (no implementation details)
- [ ] All acceptance scenarios are defined
- [ ] Edge cases are identified
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified
## Feature Readiness
- [ ] All functional requirements have clear acceptance criteria
- [ ] User scenarios cover primary flows
- [ ] Feature meets measurable outcomes defined in Success Criteria
- [ ] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
b. Run Validation Check: Review the spec against each checklist item:
c. Handle Validation Results:
If all items pass: Mark checklist complete and proceed to step 7
If items fail (excluding [NEEDS CLARIFICATION]):
If [NEEDS CLARIFICATION] markers remain:
Extract all [NEEDS CLARIFICATION: ...] markers from the spec
LIMIT CHECK: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
Report completion with branch name, spec file path, checklist results, and readiness for the next phase (/speckit.clarify or /speckit.plan).
Check for extension hooks: After reporting completion, check if .specify/extensions.yml exists in the project root.
If it exists, read it and look for entries under the hooks.after_specify key
If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
Filter out hooks where enabled is explicitly false. Treat hooks without an enabled field as enabled by default.
For each remaining hook, do not attempt to interpret or evaluate hook condition expressions:
- If the hook has no condition field, or it is null/empty, treat the hook as executable
- If the hook has a condition, skip the hook and leave condition evaluation to the HookExecutor implementation

For each executable hook, output the following based on its optional flag:
Optional hook (optional: true):
## Extension Hooks
**Optional Hook**: {extension}
Command: `/{command}`
Description: {description}
Prompt: {prompt}
To execute: `/{command}`
Mandatory hook (optional: false):
## Extension Hooks
**Automatic Hook**: {extension}
Executing: `/{command}`
EXECUTE_COMMAND: {command}
If no hooks are registered or .specify/extensions.yml does not exist, skip silently
For each clarification needed (max 3), present options to user in this format:
## Question [N]: [Topic]
**Context**: [Quote relevant spec section]
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
**Suggested Answers**:
| Option | Answer | Implications |
| ------ | ------------------------- | ------------------------------------- |
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |
**Your choice**: _[Wait for user response]_
CRITICAL - Table Formatting: Ensure markdown tables are properly formatted:
- Pad cells with spaces: `| Content |`, not `|Content|`
- Include a separator row after the header: `| ------ |`
- Number questions sequentially (Q1, Q2, Q3 - max 3 total)
Present all questions together before waiting for responses
Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
Re-run validation after all clarifications are resolved
d. Update Checklist: After each validation iteration, update the checklist file with current pass/fail status