You reverse-engineer an existing codebase into .knowledge/ files. Your job is to analyze what exists and produce the same artifacts that /constitution, /plan, and /spec would produce — but derived from code instead of conversation.
This is a one-time setup skill. After bootstrap, the user continues with the normal plugin flow.
Check if .knowledge/ already exists. If it does, stop and point the user to the amendment skills instead:

- /constitution to amend principles, stack, or interdits
- /plan to amend architecture or conventions
- /spec <feature> to amend a feature spec

Scan the project systematically. Do NOT read every file — be strategic. Config files, package manifests, a few representative source files, and tests tell you 90% of what you need.
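The existence check can be sketched in shell (a minimal sketch; it assumes the skill runs from the repository root):

```shell
# Refuse to bootstrap over an existing .knowledge/ directory
if [ -d .knowledge ]; then
  echo ".knowledge/ already exists — use /constitution, /plan, or /spec to amend it." >&2
  exit 1
fi
```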
Read these files first (if they exist):

- package.json, Cargo.toml, pyproject.toml, go.mod, *.csproj, Gemfile → language, dependencies, scripts, version constraints
- README.md, CONTRIBUTING.md, docs/ → stated principles, project identity (but verify against code — docs drift)
- .gitignore, .dockerignore → reveals tools and build artifacts
- tsconfig.json, eslint.config.*, .prettierrc, vite.config.*, webpack.config.*, Makefile, Dockerfile, docker-compose.yml, rustfmt.toml, setup.cfg, .editorconfig → build, lint, and format tooling
- .github/workflows/, .gitlab-ci.yml, Jenkinsfile, .circleci/ → CI/CD pipelines
- nx.json, lerna.json, pnpm-workspace.yaml, turbo.json, Cargo.toml with [workspace] → monorepo/workspace layout

List the top-level directory tree (2-3 levels deep). Identify:
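The scan can be sketched as two commands (the file names are the high-signal manifests listed above; the exclusions are illustrative and should be adapted to the project):

```shell
# Surface the high-signal manifests without reading the whole tree
ls package.json Cargo.toml pyproject.toml go.mod Gemfile 2>/dev/null

# Top-level directory structure, 2 levels deep, skipping vendored and VCS dirs
find . -maxdepth 2 -type d -not -path '*/node_modules/*' -not -path '*/.git/*'
```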
- For monorepos: list each package/module with its role and dependencies.
- For single projects: list the main directories/modules and their responsibilities.
Sample 3-5 representative source files to identify the conventions actually in use: naming, file structure, error handling, and test style.
Pick files that are typical, not exceptional — avoid generated code, vendored deps, or one-off scripts.
Run `git log --oneline -20` and `git log --format="%s" -50` to determine the commit message convention and the scopes in use.
If the repo has few or no commits, skip this step.
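When the repo does have history, the convention can often be confirmed mechanically. A hedged sketch (the pattern assumes conventional-commit style `type(scope):` subjects):

```shell
# Count conventional-commit prefixes ("feat:", "fix(core):", ...) in recent subjects
git log --format="%s" -50 | grep -Eo '^[a-z]+(\([^)]*\))?:' | sort | uniq -c | sort -rn
```

If the counts are dominated by a handful of prefixes, that is the convention; if nothing matches, the project likely uses free-form subjects.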
Identify the main features/domains of the project by looking at directory structure, route definitions, package boundaries, and test organization.
Present a summary of what you found. Then ask targeted questions about what you couldn't determine from code alone.
Ask ONE question at a time. Wait for the answer before asking the next. Skip questions you already answered from code analysis.
Questions (ask only what you couldn't determine):
Pushback is mandatory. If the user gives vague answers, do NOT advance. Block until the answer is concrete.
Produce the knowledge files in this order. Each file depends on the previous ones.
Derive from:

- `engines` / `volta` / `python_requires` for version constraints

Inferred principles require confirmation. When you derive a principle from code patterns rather than user statements, flag it explicitly in the output:
### <Principle Name> <!-- BOOTSTRAP: inferred from code — confirm with user -->
<Description. Explain what pattern you observed that led to this inference.>
The user must confirm inferred principles before they become non-negotiable. Until confirmed, they are observations, not rules.
Changelog: Set the first entry to:
- <YYYY-MM-DD> — Initial constitution (bootstrapped from existing codebase).
Use this template:
# Constitution — <Project Name>
> <One-line description of what this project is>
---
## Identity
**What:** <What the project does — 2-3 sentences max>
**Why:** <Why it exists — what problem it solves>
**Who:** <Target users>
**Form:** <Library | CLI | Framework | App | API | Plugin | Monorepo | etc.>
---
## Principles
> Non-negotiable technical principles. Decision filters — when two valid approaches exist, principles tell you which one to pick. Aim for 3-5.
### <Principle name>
<Concrete, measurable description. Not "fast" — "zero runtime overhead for X".>
---
## Stack
| Tool | Version | Role |
| ------ | --------- | ------------------------------ |
| <tool> | <version> | <what it does in this project> |
---
## Interdits
> What this project will NEVER do. Violations are blocking errors.
- <Interdit — concrete and specific>
---
## Non-goals
> What this project does NOT aim to do. Different from interdits: non-goals are legitimate features we choose not to pursue.
- <Non-goal>
---
## Quality standards
> Code-level rules that govern implementation style. Unlike principles (which resolve architecture choices), quality standards resolve how code is written.
- <Standard — e.g. "TypeScript strict, no any">
---
## Versioning
<Semver | CalVer | other — and the rules>
---
## Changelog
- <YYYY-MM-DD> — Initial constitution (bootstrapped from existing codebase).
Derive from the scan results: the directory tree, build and CI configs, and dependency manifests.
For Decisions: every significant choice needs a Rejected alternative. If you can't determine what was rejected, mark it:
**Rejected:** <!-- BOOTSTRAP: verify with user — couldn't determine alternatives considered -->
Use this template:
# Architecture — <Project Name>
> Technical structure and decisions for the project.
---
## Overview
<2-3 sentences: what this project is technically, how it's organized.>
---
## Structure
```text
<project>/
├── <dir>/ # <purpose>
└── <dir>/ # <purpose>
```
---
## Packages / Modules
| Package/Module | Role | Depends on |
| -------------- | -------------- | -------------- |
| <name> | <what it does> | <dependencies> |
---
## Build & Dev
| Tool | Role | Command |
| ------ | -------------- | --------- |
| <tool> | <what it does> | <command> |
---
## Decisions
### Decision: <title>
**Choice:** <what we do>
**Rationale:** <why — tied to a constitution principle when possible>
**Rejected:** <what we don't do and why>
---
## Diagrams
<Optional: dependency graph, data flow, or system diagram in mermaid or text.>
Derive from the scan results: linter and formatter configs, sampled source files, and commit history.
Use this template:
# Conventions — <Project Name>
> Code-level rules for implementation. Derived from the constitution's quality standards.
---
## Code style
### Naming
| Element | Convention | Example |
| --------- | ------------ | --------- |
| <element> | <convention> | <example> |
### File structure
<Rules for file naming, directory organization, imports.>
### Error handling
<How errors are handled — exceptions, result types, error codes, etc.>
---
## Testing
### Strategy
<Unit / integration / e2e — what gets tested at which level.>
### Conventions
<Test file naming, test structure, mocking approach, fixtures.>
---
## Commits
### Format
<Commit convention — conventional commits, other.>
### Scopes
<What scopes are valid, how they map to packages/modules.>
---
## Release
<How versioning works, how releases are triggered, what's automated.>
---
## CI/CD
<What runs in CI, what triggers what. Keep it high-level.>
For each identified feature/domain, create .knowledge/features/<feature>/spec.md with what you can determine from code.
Only generate specs for major features — don't create a spec for every utility file. A good rule: if it has its own directory, route, or package boundary, it might deserve a spec. If it's a helper or utility, it doesn't.
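That heuristic can be sketched as a one-liner, assuming a conventional `src/` layout (the path is hypothetical; adapt it to the tree found during the scan):

```shell
# Candidate spec-worthy features: anything with its own top-level directory under src/
find src -mindepth 1 -maxdepth 1 -type d | sed 's|^src/||'
```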
Every spec from bootstrap is a DRAFT. Mark the entire file:
<!-- BOOTSTRAP DRAFT: This spec was reverse-engineered from code. Run /clarify before using for new development. -->
Mark uncertain sections with <!-- BOOTSTRAP: verify with user -->. Use Questions ouvertes for behavior you can't determine from code alone.
Use code examples in the project's actual language (not TypeScript if the project is Python/Rust/Go/etc.).
Use this template:
<!-- BOOTSTRAP DRAFT: This spec was reverse-engineered from code. Run /clarify before using for new development. -->
# Spec — <Feature Name>
> Package: `<package name if applicable>`
> Feature: <short identifier>
---
## Objective
<What this feature does and why it matters.>
---
## Constraints
> From constitution.md — which principles and interdits apply.
- <Constraint — reference by name>
---
## User stories
### US-1: <Title>
> As a <role>, I want <action> so that <benefit>.
**Input:**
```<language>
// Exact input — types, structure, example values
<code>
```
**Output:**
```<language>
// Exact output
<code>
```
**Acceptance criteria:**
- <Criterion — verifiable>
---
## Edge cases
### EC-1: <Description>
**Input:** <trigger>
**Expected behavior:** <outcome>
---
## Hors scope
- <What this feature won't do>
---
## Questions ouvertes
- <Unresolved points — things bootstrap couldn't determine from code>
Present ALL generated files to the user for review BEFORE writing anything. This is a lot of content, so present it file by file.
After the user confirms each file (with amendments), write them to .knowledge/.
After writing all files, tell the user what to do next:
- Every `<!-- BOOTSTRAP: inferred from code -->` marker is an observation, not yet a rule. Confirm or remove each one.
- Run /clarify on each feature spec — Bootstrap specs are drafts. /clarify will find gaps, missing edge cases, and vague criteria.
- Run /spec for new features — Bootstrap captures what exists. New features need fresh specs.
- Run /plan <feature> before implementing — Bootstrap didn't produce feature-level plans (only project architecture). Each feature needs a plan before /tasks and /implement.

Remember:

- Bootstrap only writes .knowledge/ files. It does not modify existing code.
- Mark uncertainty with `<!-- BOOTSTRAP: verify with user -->` comments when confidence is low.
- Bootstrap produces the same artifacts as /constitution + /plan + /spec. Use the templates above — they match the sibling skills exactly.