Bootstrap a repository with expert skills and context files for productive Claude sessions. Analyzes codebase architecture, proposes domain-specific expert roles, creates SKILL.md files with references, and wires up feature/debug context loaders. Use when starting in a new repo ("/prime", "bootstrap this repo", "set up expert skills"), when skills are stale ("/prime --refresh"), or when the user wants to create expert roles for a codebase they'll work in repeatedly.
Analyze a codebase and generate expert skills + context files that make every future session productive. Run once to set up, --refresh to keep current.
/prime → first-time: analyze repo, create expert skills + contexts
/prime --refresh → update stale skills based on what changed since last prime
Archive root: Resolve `$SESSION_KIT_ROOT` (default: `~/.stoobz`). All `~/.stoobz/` paths below use this root.
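A minimal sketch of the root resolution, assuming standard POSIX parameter expansion (the `MANIFEST` variable name is illustrative, not part of the protocol):

```shell
# Resolve the archive root: honor $SESSION_KIT_ROOT, fall back to ~/.stoobz
ROOT="${SESSION_KIT_ROOT:-$HOME/.stoobz}"
# Manifest path used by the check-in protocol below
MANIFEST="$ROOT/manifest.json"
echo "archive root: $ROOT"
```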
On first invocation of any session-kit skill in this session, register the active session in the manifest. See session-checkin.md for the full protocol. Summary:
- Locate this session's `.jsonl` transcript under `~/.claude/projects/$(pwd | tr '/' '-')/`
- Read `$SESSION_KIT_ROOT/manifest.json` (create if missing)
- If no entry for this `session_id` exists → create an active registration (status: "active", session_id, return_to, started_at, last_activity, last_exchange, skills_used, nulls for label/summary/archive_path)
- Otherwise update `last_activity` and `last_exchange`, and append this skill to `skills_used`

First time: /prime → /pickup → [work] → /park
Returning: /pickup → [work] → /park
Stale repo: /prime --refresh → /pickup → [work] → /park
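The check-in registration above can be sketched in shell. Field names follow the summary; the top-level `"sessions"` array and the example `session_id` value are assumptions, not the real manifest schema:

```shell
# Sketch: register an active session in the manifest (create if missing)
ROOT="${SESSION_KIT_ROOT:-$HOME/.stoobz}"
mkdir -p "$ROOT"
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
if [ ! -f "$ROOT/manifest.json" ]; then
  cat > "$ROOT/manifest.json" <<EOF
{
  "sessions": [
    {
      "status": "active",
      "session_id": "example-session-id",
      "return_to": "$(pwd)",
      "started_at": "$NOW",
      "last_activity": "$NOW",
      "last_exchange": null,
      "skills_used": ["prime"],
      "label": null,
      "summary": null,
      "archive_path": null
    }
  ]
}
EOF
fi
```

On subsequent check-ins the entry would be updated in place rather than recreated.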
/prime creates the permanent knowledge layer (expert skills). /pickup loads session-specific context. Different layers of the same system.
Establish the current state of the repo and any existing skills.
CLAUDE.md exists?
├── No → Run /init to generate it. Read the result as baseline.
├── Yes → Check staleness:
│ ├── Count commits since CLAUDE.md last modified
│ ├── If >10 commits behind → recommend update:
│ │ "CLAUDE.md is N commits stale. Quick refresh before proceeding?"
│ │ If user agrees → re-run /init to update, read result
│ │ If user declines → read existing CLAUDE.md as-is
│ └── If fresh → read as baseline
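The staleness count can be sketched with git. The 10-commit threshold is the one from the tree above; note that `--since` also counts the commit that last touched CLAUDE.md, so this is an approximation:

```shell
# Approximate staleness: commits since CLAUDE.md's last commit
behind=0
if [ -f CLAUDE.md ] && git rev-parse --git-dir >/dev/null 2>&1; then
  last=$(git log -1 --format=%cI -- CLAUDE.md)
  if [ -n "$last" ]; then
    behind=$(git log --oneline --since="$last" | wc -l)
  fi
fi
if [ "$behind" -gt 10 ]; then
  echo "CLAUDE.md is $behind commits stale. Quick refresh before proceeding?"
fi
```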
Scan .claude/skills/ for */SKILL.md files
├── No skills found → fresh setup, proceed to Phase 2
├── Skills found → report what exists:
│ "Found N existing expert skills: {list with descriptions}"
│ For each skill, check staleness:
│ git log --oneline --since="skill-mtime" -- <relevant-paths> | wc -l
│ "{skill-name} covers {paths} — {N} commits since last update"
│ Ask: "Create fresh skills or update existing ones?"
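One way to produce the per-skill staleness report sketched above. This counts repo-wide commits since each skill's last commit; for a tighter count, substitute the paths the skill actually covers (as the `git log -- <relevant-paths>` form does):

```shell
# For each expert skill, count commits since its SKILL.md last changed
checked=0
for skill in .claude/skills/*/SKILL.md; do
  [ -f "$skill" ] || continue
  mtime=$(git log -1 --format=%cI -- "$skill" 2>/dev/null)
  [ -n "$mtime" ] || continue
  n=$(git log --oneline --since="$mtime" | wc -l)
  echo "$(basename "$(dirname "$skill")"): $n commits since last update"
  checked=$((checked + 1))
done
```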
Identify the technology stack from project files:
| Marker | Stack |
|---|---|
| package.json | Node/React/Vue/Angular (check dependencies) |
| *.csproj / *.sln | .NET (check TargetFramework for version) |
| mix.exs | Elixir/Phoenix |
| Cargo.toml | Rust |
| go.mod | Go |
| Gemfile | Ruby/Rails |
| pyproject.toml / requirements.txt | Python |
| pom.xml / build.gradle | Java/Kotlin |
Note hybrid stacks (e.g., .NET backend + React frontend).
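A sketch of the marker scan (the `detect_stacks` helper is hypothetical; labels mirror the table, and more than one line of output is the hybrid case noted above):

```shell
# Print one stack label per marker file found in the current directory
detect_stacks() {
  [ -f package.json ] && echo "Node (check dependencies for React/Vue/Angular)"
  ls ./*.csproj ./*.sln >/dev/null 2>&1 && echo ".NET (check TargetFramework)"
  [ -f mix.exs ] && echo "Elixir/Phoenix"
  [ -f Cargo.toml ] && echo "Rust"
  [ -f go.mod ] && echo "Go"
  [ -f Gemfile ] && echo "Ruby/Rails"
  { [ -f pyproject.toml ] || [ -f requirements.txt ]; } && echo "Python"
  { [ -f pom.xml ] || [ -f build.gradle ]; } && echo "Java/Kotlin"
  return 0
}
stacks=$(detect_stacks)
```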
Launch parallel background agents to analyze each architectural layer. Use the Task tool with subagent_type=Explore and run_in_background=true.
Agent assignment based on detected stack. Examples:
| Stack | Agents to Launch |
|---|---|
| .NET + React | Backend (.NET patterns, controllers, services, data), Frontend (React patterns, components, state), Auth/Config (middleware, env config) |
| Elixir/Phoenix | Domain (contexts, schemas, queries), Web (controllers, views, channels), Infrastructure (config, deployment, telemetry) |
| React SPA | Components (patterns, state mgmt), API layer (services, hooks), Build/Config (webpack, env, CI) |
| Go microservice | Handlers (HTTP, gRPC), Domain (models, services), Infrastructure (config, deployment) |
Each agent should analyze the patterns, conventions, and key files of its assigned layer.
Launch 2-4 agents — enough for coverage, not so many they're redundant. While agents run, read key files yourself to build understanding. Synthesize agent results when they complete.
Based on analysis, propose expert roles to the user. Present:
Based on the analysis, I recommend N expert skills:
1. {repo}-expert — {what it covers: architecture, domain vocab, key patterns}
2. {repo}-{layer}-expert — {what it covers: specific layer patterns}
3. {repo}-auth-expert — {if auth is complex enough to warrant its own skill}
And 2 context files:
- {repo}-feature-context — loads all experts for feature development
- {repo}-debug-context — loads relevant experts + debugging decision tree
Does this look right, or should I adjust the boundaries?
Guidelines for proposing skills:
- Naming: {repo}-expert for the main skill, {repo}-{layer}-expert for specialized layers.

Wait for user approval before proceeding.
Write the skill files following progressive disclosure:
---