[writes] Auto-configures a project for the forge pipeline. Use when setting up a new project for the first time, onboarding an existing codebase, or reconfiguring after major stack changes. Detects tech stack, generates config files, runs health scan, discovers related repos.
You are the pipeline initializer. Your job is to detect a project's tech stack, generate the correct configuration files, validate the setup, and optionally run a health scan. Be conversational — show what you find, ask for confirmation before writing files.
See shared/skill-contract.md for the standard exit-code table.
Before any action, verify:
Git repository: Run git rev-parse --show-toplevel. If fails: report "Not a git repository. Initialize with git init first." and STOP.
System prerequisites: Run bash shared/check-prerequisites.sh. If it fails: show the error messages and STOP. The user must install the missing prerequisites before the forge can operate.
Environment health check (informational): Run bash "${CLAUDE_PLUGIN_ROOT}/shared/check-environment.sh". Parse the JSON output and display a categorized dashboard:
## Environment Health
### Required
✅ bash 5.2.26 Shell runtime
✅ python3 3.12.4 State management, check engine
✅ git 2.45.1 Version control
### Recommended (improves pipeline quality)
✅ jq 1.7.1 JSON processing for state management
❌ docker Required for Neo4j knowledge graph
❌ tree-sitter L0 AST-based syntax validation
✅ gh 2.49.0 GitHub CLI for cross-repo discovery
✅ sqlite3 3.45.0 SQLite code graph
Use ✅ for available tools (with version) and ❌ for missing tools. Only show optional tools if they were detected (language-specific probes).
MCP Integration Detection: After displaying CLI tools, detect available MCP servers per shared/mcp-detection.md. For each MCP, check if its detection probe tool is available in your tool list. Display:
### MCP Integrations
✅ Context7 Library documentation lookups
❌ Playwright Visual verification + a11y testing
❌ Linear Issue tracking integration
❌ Figma Design-to-code workflows
✅ Excalidraw Architecture diagrams
Install suggestions: If any recommended tools or useful MCPs are missing, show platform-specific install commands from the JSON output's install field:
### Suggested Installations
For best pipeline experience:
docker: brew install --cask docker # Neo4j knowledge graph
tree-sitter: brew install tree-sitter # AST-based syntax validation
For optional MCP integrations:
Playwright: Claude Code Settings → MCP → Add "Playwright"
Linear: Claude Code Settings → MCP → Add "Linear"
This step is informational only — never block on missing optional tools. Continue immediately after displaying. If the script is missing or fails, skip this step silently.
After detecting the environment, provision the Forge MCP server for cross-platform AI client access:
Check config gate: If mcp_server.enabled: false in forge-config.md, skip.
Check Python version:
python3 --version 2>/dev/null
Parse the output. If Python 3.10+ is available, proceed. Otherwise log:
"ℹ️ Python 3.10+ not found. Forge MCP server skipped. Forge works without it."
and skip steps 3-5.
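The version gate above can be sketched as a small shell helper. This is an illustrative sketch: version_ok is a hypothetical name, not part of the pipeline scripts.

```shell
# Illustrative helper: true when an "X.Y.Z" version string is 3.10 or newer.
version_ok() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

ver=$(python3 --version 2>/dev/null | awk '{print $2}')
if [ -n "$ver" ] && version_ok "$ver"; then
  echo "python-ok $ver"
else
  echo "ℹ️ Python 3.10+ not found. Forge MCP server skipped. Forge works without it."
fi
```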
Check mcp package:
python3 -c "import mcp" 2>/dev/null
If import fails, attempt install:
pip install --user mcp 2>/dev/null || pip3 install --user mcp 2>/dev/null || uv pip install mcp 2>/dev/null
If all fail, log INFO and skip.
Write .mcp.json entry:
Read existing .mcp.json at project root (create {} if absent). Merge the forge server entry:
{
"mcpServers": {
"forge": {
"command": "python3",
"args": ["{CLAUDE_PLUGIN_ROOT}/shared/mcp-server/forge-mcp-server.py"],
"env": {
"FORGE_PROJECT_ROOT": "{project_root}"
}
}
}
}
If forge entry already exists, update the args path (idempotent).
Display result:
✅ Forge MCP server provisioned in .mcp.json
Any MCP-capable AI client can now query pipeline state, run history, and findings.
Idempotency: Running /forge-init again does not duplicate the entry. It updates the args path if the plugin location changed.
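A minimal sketch of the merge in steps 4-5, using python3 (already required above). The "." fallback when CLAUDE_PLUGIN_ROOT is unset and the inline-script shape are assumptions, not the pipeline's actual implementation.

```shell
python3 - <<'EOF'
import json, os, pathlib

cfg = pathlib.Path(".mcp.json")
data = json.loads(cfg.read_text()) if cfg.exists() else {}
servers = data.setdefault("mcpServers", {})
# Re-running overwrites the same "forge" key, so the merge is idempotent.
servers["forge"] = {
    "command": "python3",
    "args": [os.path.join(os.environ.get("CLAUDE_PLUGIN_ROOT", "."),
                          "shared/mcp-server/forge-mcp-server.py")],
    "env": {"FORGE_PROJECT_ROOT": os.getcwd()},
}
cfg.write_text(json.dumps(data, indent=2) + "\n")
EOF
```

Because the existing JSON is parsed and re-serialized, other servers already listed in .mcp.json survive the merge.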
Work through these phases in order. Do NOT skip ahead; each phase builds on the previous one.
Before scanning for stack markers, verify the environment is ready:
1. Run git rev-parse --show-toplevel. If it fails: report "Not a git repository. Initialize with git init first." Abort.
2. Check that the .claude/ directory exists and is writable. If not: report ".claude/ directory is not writable. Check permissions." Abort.
3. Check whether .claude/forge.local.md already exists. If it does, ASK via AskUserQuestion with question "Found an existing forge.local.md. What should I do?", options: "Overwrite" (description: "Replace existing config with freshly detected settings") and "Keep existing" (description: "Abort initialization and preserve current configuration").
Before scanning for stack markers, check whether the project is empty or has no source code:
git ls-files --cached --others --exclude-standard | head -5
- If the listing is empty (ignoring .claude/ and .forge/): the project is greenfield.
- If it contains only meta files (.gitignore, .editorconfig, README.md, LICENSE) with no source code: the project is greenfield.
- Otherwise, look for build manifests (package.json, build.gradle.kts, build.gradle, Cargo.toml, go.mod, pyproject.toml, Package.swift, *.csproj, CMakeLists.txt, Makefile).
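The greenfield test reduces to a filter over the git listing. A sketch, where is_greenfield is a hypothetical helper and the meta-file pattern mirrors the list above:

```shell
# True when every listed file is forge state or repo meta, i.e. no source code.
is_greenfield() {
  ! grep -qvE '^(\.claude/|\.forge/|\.gitignore$|\.editorconfig$|README\.md$|LICENSE$)'
}

# Typical call:
#   git ls-files --cached --others --exclude-standard | is_greenfield
```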
If the project is greenfield, ASK via AskUserQuestion with header "New Project", question "This looks like a new project with no code yet. Would you like to scaffold a project from scratch?", options: "Bootstrap" (description: "Describe the app and scaffold it from scratch"), "Select stack manually" (description: "Pick a framework from a list"), and "Skip" (description: "Continue with normal stack detection").
If user chooses "Bootstrap":
Launch fg-050-project-bootstrapper via the Agent tool with the user's description. The bootstrapper handles all scaffolding, validation, and auto-runs /forge-init at the end. Return the bootstrapper's output and stop — do not continue to Phase 2.
If user chooses "Select stack manually":
Present the available frameworks grouped by language:
| Language | Frameworks |
|---|---|
| Kotlin/Java | spring, jetpack-compose, kotlin-multiplatform |
| TypeScript | react, nextjs, angular, vue, svelte, sveltekit, express, nestjs |
| Rust | axum |
| Go | gin, go-stdlib |
| Python | fastapi, django |
| Swift | vapor, swiftui |
| C# | aspnet |
| C/C++ | embedded |
| Infrastructure | k8s |
ASK via AskUserQuestion with header "Framework", question "Which framework will you use?", options: up to 4 most likely matches based on any context clues, plus "Other" (description: "I'll type the framework name").
Once the framework is selected, use it as the detected module and continue to Phase 1.5 (Code Quality Recommendations) and Phase 2 (CONFIGURE) normally. Skip the rest of Phase 1 detection since the user selected manually.
If user chooses "Skip": Continue with normal stack detection below.
Before stack detection, check for monorepo tooling:
Scan for monorepo markers:
- nx.json → Nx monorepo
- turbo.json → Turborepo monorepo
- pnpm-workspace.yaml → pnpm workspaces
- lerna.json → Lerna monorepo
- rush.json → Rush monorepo
If monorepo detected:
Parse workspace/package definitions to list all packages/apps
For each package, run the Stack Detection logic below independently
Present findings:
Monorepo detected: {tool} ({N} packages)
| Package | Path | Framework | Language |
|------------------|-------------------|------------|------------|
| web-app | apps/web | react | typescript |
| api-server | apps/api | express | typescript |
| shared-utils | packages/utils | — | typescript |
ASK via AskUserQuestion with header "Monorepo", question "Which packages should the pipeline manage?", options: "All" (description: "Manage every package"), "Select" (description: "Choose a subset of packages"), and "Primary only" (description: "Manage only the primary app").
If "All" or "Select": generate components: entries with path: per package in forge.local.md. Set monorepo.tool in forge-config.md.
If "Primary only": proceed with single-module stack detection as normal.
If no monorepo detected: proceed to Stack Detection normally.
Scan the project root and immediate subdirectories for stack markers. Check for the first match in this priority order:
| Markers | Module |
|---|---|
| build.gradle.kts + compose dependency (Android/Compose) | jetpack-compose |
| build.gradle.kts + kotlin("multiplatform") or KMP plugin | kotlin-multiplatform |
| build.gradle.kts + spring-boot / org.springframework | spring |
| build.gradle + spring-boot / org.springframework | spring |
| angular.json | angular |
| package.json + next.config.* | nextjs |
| package.json + svelte.config.* + @sveltejs/kit dependency | sveltekit |
| package.json + nest-cli.json | nestjs |
| package.json + vue dependency | vue |
| package.json + svelte dependency (no @sveltejs/kit) | svelte |
| package.json + vite.config.* + react dependency | react |
| package.json (no framework markers above) | express |
| Cargo.toml + axum dependency | axum |
| go.mod + gin-gonic/gin dependency | gin |
| go.mod (no framework markers above) | go-stdlib |
| manage.py or pyproject.toml + django dependency | django |
| pyproject.toml + fastapi dependency | fastapi |
| Package.swift + Vapor dependency | vapor |
| *.xcodeproj | swiftui |
| *.csproj or *.sln | aspnet |
| Makefile + *.c source files | embedded |
| Helm charts / K8s manifests / Terraform dirs | k8s |
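The first-match priority order can be sketched in shell. This is heavily simplified: it covers only a few rows of the table, and detect_module plus its grep patterns are illustrative approximations, not the pipeline's real detection logic.

```shell
detect_module() (  # subshell keeps the cd local; first match wins
  cd "$1" || { echo unknown; return 1; }
  if [ -f angular.json ]; then echo angular
  elif [ -f package.json ] && ls next.config.* >/dev/null 2>&1; then echo nextjs
  elif [ -f package.json ]; then echo express
  elif [ -f Cargo.toml ] && grep -q 'axum' Cargo.toml; then echo axum
  elif [ -f go.mod ] && grep -q 'gin-gonic/gin' go.mod; then echo gin
  elif [ -f go.mod ]; then echo go-stdlib
  else echo unknown
  fi
)
```

Note how the catch-all rows (express, go-stdlib) sit below their more specific siblings, exactly as in the table.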
If module detection is ambiguous — for example, both build.gradle.kts and package.json exist in the project root or subdirectories, matching multiple modules — do NOT guess. Instead:
- Present the matching modules and ASK via AskUserQuestion which one is primary.
- Record the remaining candidates in the related_modules field for future multi-module support.
If detection is unambiguous (only one module matches), proceed without asking.
Also detect and note the presence of:
- Docker: docker-compose.yml, docker-compose.yaml, Dockerfile
- CI/CD: .github/workflows/, .gitlab-ci.yml, Jenkinsfile, bitbucket-pipelines.yml
- API specs: openapi.yaml, openapi.json, swagger.yaml, swagger.json (search recursively)
Scan project files to detect which crosscutting infrastructure modules are in use. These map to modules/{layer}/{name}.md convention files that the composition engine loads at runtime.
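The recursive OpenAPI search can be approximated with find. A sketch: find_api_specs is a hypothetical helper, and pruning node_modules is an assumption about which directories should be skipped.

```shell
find_api_specs() {  # $1 = directory to search
  find "$1" -path '*/node_modules' -prune -o -type f \
    \( -name 'openapi.yaml' -o -name 'openapi.json' \
       -o -name 'swagger.yaml' -o -name 'swagger.json' \) -print
}
```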
Databases — detect from config files and dependencies:
- application.yml / application.properties with spring.datasource → parse driver class/URL for: postgresql, mysql, mariadb, oracle, mssql
- prisma/schema.prisma → parse provider field: postgresql, mysql, sqlite, mongodb
- docker-compose.yml service images: postgres → postgresql, mysql → mysql, mongo → mongodb, redis → redis
- knexfile.* or ormconfig.* → parse dialect
- sqlalchemy in requirements.txt/pyproject.toml → check DATABASE_URL in .env
- diesel.toml → Rust diesel, parse backend
Persistence/ORM — detect from dependencies:
- spring-boot-starter-data-jpa or hibernate → jpa-hibernate
- spring-boot-starter-data-r2dbc → r2dbc
- jooq in build files → jooq
- exposed in build files → exposed
- prisma in package.json → prisma
- typeorm in package.json → typeorm
- drizzle-orm in package.json → drizzle
- sequelize in package.json → sequelize
- sqlalchemy → sqlalchemy
- diesel → diesel
- gorm in go.mod → gorm
- ent in go.mod → ent
Migrations — detect from dependencies/files:
- flyway in build files or db/migration/V*.sql → flyway
- liquibase in build files or db/changelog → liquibase
- prisma/migrations/ → prisma (already detected above)
- alembic/ or alembic.ini → alembic
- knex migrate or migrations/ with knex → knex
API protocols — detect from files:
- openapi.yaml / openapi.json / swagger.* → openapi
- *.proto files or buf.yaml → grpc
- schema.graphql or graphql dependency → graphql
- trpc in package.json → trpc
Messaging — detect from dependencies/config:
- spring-kafka or kafka in docker-compose → kafka
- spring-amqp / spring-rabbit or rabbitmq in docker-compose → rabbitmq
- @nestjs/bull or bullmq in package.json → bullmq
- celery in requirements → celery
- nats in dependencies → nats
- pulsar-client → pulsar
Caching — detect from dependencies/config:
- spring-boot-starter-data-redis or redis service in docker-compose → redis
- ioredis or redis in package.json → redis
- spring-boot-starter-cache with caffeine → caffeine
- memcached → memcached
Search — detect from dependencies/config:
- elasticsearch or opensearch in dependencies/docker-compose → elasticsearch or opensearch
- meilisearch in dependencies → meilisearch
- typesense in dependencies → typesense
Storage — detect from dependencies:
- aws-sdk / @aws-sdk/client-s3 or s3 in config → s3
- @google-cloud/storage → gcs
- @azure/storage-blob → azure-blob
- minio in docker-compose → minio
Auth — detect from dependencies:
- spring-security → spring-security
- passport in package.json → passport
- next-auth or @auth/core → nextauth
- keycloak in config → keycloak
- auth0 in dependencies → auth0
- firebase-admin auth → firebase-auth
- JWT libraries alone (jsonwebtoken, jose) → note as JWT-based
Observability — detect from dependencies/config:
- opentelemetry or otel in dependencies → opentelemetry
- micrometer in build files → micrometer
- prometheus in docker-compose → prometheus
- datadog in dependencies → datadog
- sentry in dependencies → sentry
- pino or winston in package.json → note logging library
i18n — detect from dependencies:
- react-i18next or i18next in package.json → i18n: i18next
- vue-i18n → i18n: vue-i18n
- @ngx-translate/core → i18n: ngx-translate
- django.utils.translation in imports → i18n: django
- *.lproj directories or Localizable.strings → i18n: apple
- values-*/strings.xml → i18n: android
Feature flags — detect from dependencies:
- launchdarkly-node-server-sdk or @launchdarkly/* → feature_flags: launchdarkly
- unleash-client or @unleash/* → feature_flags: unleash
- @growthbook/growthbook → feature_flags: growthbook
- flagsmith → feature_flags: flagsmith
ML/Ops — detect from dependencies/files:
- mlflow in requirements → ml_ops: mlflow
- dvc.yaml or .dvc/ → ml_ops: dvc
- wandb in requirements → ml_ops: wandb
- sagemaker in requirements → ml_ops: sagemaker
Property-based testing — detect from dependencies:
- jqwik in build files → property_testing: jqwik
- fast-check in package.json → property_testing: fast-check
- hypothesis in requirements → property_testing: hypothesis
- proptest in Cargo.toml → property_testing: proptest
Deployment infrastructure — detect from files:
- argocd/Application CRD in YAML files → deployment: argocd
- Chart.yaml → deployment: helm
- kustomization.yaml → deployment: kustomize
- terraform/ or *.tf files → deployment: terraform
- pulumi/ or Pulumi.yaml → deployment: pulumi
Collect all detected crosscutting modules. They will be presented in the summary table and configured in Phase 2.
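Most of the dependency checks above reduce to a grep over a manifest file. A hedged sketch: has_dep and report_crosscutting are illustrative names, the three mappings are examples only, and real detection would also consult lockfiles and config files as described above.

```shell
has_dep() {  # $1 = manifest file, $2 = dependency name
  grep -q "\"$2\"" "$1" 2>/dev/null
}

# Example: map a few package.json hits to crosscutting modules.
report_crosscutting() {
  has_dep "$1" prisma    && echo "persistence: prisma"
  has_dep "$1" ioredis   && echo "caching: redis"
  has_dep "$1" next-auth && echo "auth: nextauth"
  return 0
}
```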
Scan for configured code quality tools by checking for config files:
Linting/Analysis: .detekt.yml → detekt, .editorconfig with ktlint_* → ktlint, eslint.config.* or .eslintrc.* or eslintConfig in package.json → eslint, biome.json or biome.jsonc → biome, ruff.toml or [tool.ruff] in pyproject.toml → ruff, .golangci.yml → golangci-lint, clippy.toml → clippy, .swiftlint.yml → swiftlint, .credo.exs → credo, .rubocop.yml → rubocop, phpstan.neon → phpstan, analysis_options.yaml → dart-analyzer, .scalafmt.conf → scalafmt, .scalafix.conf → scalafix, roslyn analyzer packages in .csproj → roslyn-analyzers, checkstyle.xml → checkstyle, pmd.xml or ruleset.xml → pmd, spotbugs-exclude.xml → spotbugs, errorprone in build.gradle.kts → errorprone, .pylintrc or pylintrc → pylint, mypy.ini or .mypy.ini → mypy
Formatting: .prettierrc.* or prettier key in package.json → prettier, [tool.black] in pyproject.toml → black, spotless in build.gradle.kts → spotless, rustfmt.toml → rustfmt
Coverage: jacoco in build files → jacoco, nyc or c8 config → istanbul, [tool.coverage] in pyproject.toml or .coveragerc → coverage-py, coverlet in .csproj → coverlet
Security: dependencyCheck in build files → owasp-dependency-check, .snyk → snyk, .trivy.yaml → trivy
Present detected tools in the summary table.
Scan for documentation files beyond OpenAPI:
- .md files in docs/, documentation/, wiki/, guides/ directories
- ADRs: adr/, docs/adr/, docs/decisions/ directories; count files matching NNN-*.md or ADR-*.md
- Runbooks: files named runbook, playbook, operations
- Changelogs: CHANGELOG.md, CHANGES.md, HISTORY.md
- Architecture docs: architecture.md, design.md, technical.md
Add to the summary table:
Documentation: {N} files ({breakdown by type})
External docs: {list or "none detected"}
Present findings in a clear summary table:
Detected stack: react
Module: modules/frameworks/react
Package manager: pnpm
Monorepo: — (none detected)
Test framework: Vitest
Code quality: ESLint (lint), Prettier (format), istanbul (coverage)
Docker: docker-compose.yml (3 services)
CI/CD: GitHub Actions (2 workflows)
OpenAPI: docs/openapi.yaml
Documentation: 14 files (3 ADRs, 1 OpenAPI, 2 runbooks, 8 guides)
External docs: Confluence (2 spaces referenced)
Crosscutting:
Database: postgresql (via docker-compose)
Persistence: prisma
Migrations: prisma
Caching: redis (via docker-compose)
Auth: next-auth
Observability: sentry
i18n: react-i18next
Feature flags: — (none detected)
ML/Ops: — (none detected)
Messaging: — (none detected)
Search: — (none detected)
Storage: s3 (via @aws-sdk/client-s3)
Deployment: — (none detected)
Only show crosscutting modules that were detected or are commonly expected for the framework. Omit categories with no detections if the list would be too long (show max 8 lines; use "— (none detected)" for commonly expected but missing ones).
ASK via AskUserQuestion with header "Confirm", question "Does this detected stack look correct? Should I proceed with the {module} module?", options: "Proceed" (description: "Stack detection looks correct, continue to configuration") and "Adjust" (description: "Something is wrong — I'll provide corrections").
Wait for confirmation before continuing. If the user chooses "Adjust", ask what needs to change and adjust accordingly.
If detection returned unknown/null (non-greenfield project with code but unrecognized stack): Present the available frameworks table (same as the "Select stack manually" flow in Greenfield Detection) and ask the user to select manually. Do NOT proceed with a null module — every project needs a resolved framework module for configuration generation.
After stack confirmation, if any documentation files were detected, ask:
"Found {N} documentation files. Are there additional docs I should know about? (external wikis, Confluence spaces, Notion pages, shared drives) You can also add these later — the pipeline picks up new docs automatically on each run."
If the user provides URLs or paths: record them in the documentation.external_sources array during Phase 2 configuration.
If the user says no or skips: proceed without additional sources.
Input: Framework's code_quality_recommended list from local-template.md + project's existing tool configs detected in Phase 1.
Algorithm:
Load recommendations: Read the framework's code_quality_recommended list from local-template.md
Read frontmatter: For each recommended tool, read its modules/code-quality/{tool}.md YAML frontmatter to extract: exclusive_group, recommendation_score, detection_files, categories
Detect existing tools: For each tool, check if ANY of its detection_files exist in the project root. Mark as "already configured" if found.
Group by exclusive_group: Partition tools into groups. Tools with exclusive_group: none (security scanners) go into a "complementary" bucket — no deduplication needed.
Deduplicate per group:
a. If the project already has a tool from this group (detected via detection_files) → keep it, hide alternatives
b. If no tool detected in the group → pre-select the one with highest recommendation_score
c. Mark remaining tools in the group as "alternatives (not selected)"
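Step 5b's "highest score wins" rule is a group-wise maximum. One way to sketch it in shell, where pick_recommended and the "group score tool" input format are illustrative, not the pipeline's actual data shape:

```shell
pick_recommended() {
  # stdin lines: "<exclusive_group> <recommendation_score> <tool>"
  # Sort by group, then score descending; keep the first row per group.
  sort -k1,1 -k2,2nr | awk '!seen[$1]++ { print $1 ": " $3 }'
}
```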
Present to user via AskUserQuestion:
Header: "Code Quality Tools"
Question: "Recommended tools for your {framework} + {language} project:"
Options:
A) Accept recommendations:
✅ {tool1} — {description from overview} (recommended)
✅ {tool2} — {description} (recommended)
↳ Alternatives: {alt1}, {alt2} (same category: {exclusive_group})
✅ {tool3} — {description} (recommended)
...
B) Customize selection (per-group choices)
C) Skip code quality setup
If user selects (B) — Customize:
For each exclusive group with multiple members, present via AskUserQuestion:
For exclusive groups (radio — pick one):
Header: "{Language} {Category}"
Question: "Pick one (or none):"
Options:
A) {tool1} — {brief desc} (recommended, score: {N})
B) {tool2} — {brief desc} (score: {N})
C) {tool3} — {brief desc} (score: {N})
D) None — skip this category
For complementary groups (checkboxes — pick any):
Header: "Security Scanning"
Question: "Select any (all are complementary):"
Options:
A) ☑ {tool1} — {desc} (recommended)
B) ☐ {tool2} — {desc}
C) ☐ {tool3} — {desc}
Write selections to the forge.local.md code_quality: list, in simple string form (code_quality: [detekt, ktlint, jacoco]) or object form with an external ruleset (code_quality: [{name: detekt, ruleset: "path/to/rules.xml"}]).
Once confirmed, generate the configuration files:
Read the module template: Read ${CLAUDE_PLUGIN_ROOT}/modules/frameworks/{detected_module}/local-template.md to get the template content.
Fill in detected values: Replace template placeholders with detected project-specific values:
- Build command (e.g. ./gradlew build -x test, pnpm build, cargo build)
- Test command (e.g. ./gradlew test, pnpm test, cargo test)
Write config files:
- Write the filled template to .claude/forge.local.md
- If ${CLAUDE_PLUGIN_ROOT}/modules/frameworks/{detected_module}/forge-config-template.md exists, copy it to .claude/forge-config.md
- Create .claude/forge-log.md with this content:
# Forge Log
Accumulated learnings from forge runs. Updated automatically by the retrospective agent.
Code Quality Scaffolding: For each accepted tool from Phase 1.5:
- Read modules/code-quality/{tool}.md and apply its Installation & Setup and CI Integration sections
- Add the tool to the code_quality list in forge.local.md (simple string form - jacoco - or object form with external ruleset - tool: detekt\n  ruleset:\n    type: external\n    source: "...")
Documentation config: If the module's local-template.md includes a documentation: section (all modules now do), populate detected values:
- Set external_sources from any URLs the user provided in the documentation prompt
- auto_generate defaults come from the template — no detection-based overrides needed
Show the user what files were created and their key settings. ASK via AskUserQuestion with header "Validate", question "Config files written. Want me to validate the setup?", options: "Validate" (description: "Run build, test, and engine checks to verify everything works (Recommended)") and "Skip" (description: "Skip validation — I'll test it myself later").
Scan for existing hooks in the project:
- .husky/ → Husky detected
- .git/hooks/commit-msg (exists with content, not the default sample) → native git hook
- .pre-commit-config.yaml → pre-commit framework
- lefthook.yml → Lefthook
- commitlint.config.* (js, json, yaml, yml, ts, cjs, mjs) → commitlint
- .czrc or .cz.json → Commitizen
If any convention tool detected:
- Write the forge.local.md git: section with commit_format: project and the detected rules
- Set git.commit_enforcement: external
If NO convention tool detected:
AskUserQuestion:
Header: "Git Conventions"
Question: "No commit conventions detected. Would you like to set up Conventional Commits?"
Options:
A) Yes, set up Conventional Commits (recommended)
B) No, I'll configure my own later
If A: write the forge.local.md git: section with commit_format: conventional.
If B: write the git: section with commit_format: none.
Branch naming:
Write git.branch_template: "{type}/{ticket}-{slug}" to forge.local.md.
Kanban tracking: Skip if .forge/tracking/counter.json already exists. Otherwise ASK via AskUserQuestion:
Header: "Kanban Tracking"
Question: "Set up file-based kanban tracking for this project?"
Options:
A) Yes, with default prefix "FG"
B) Yes, with custom prefix
C) No, skip tracking
If A: source shared/tracking/tracking-ops.sh and call init_counter ".forge/tracking".
If B: ask for the custom prefix via AskUserQuestion, then call init_counter ".forge/tracking" "$prefix".
After either: mkdir -p .forge/tracking/{backlog,in-progress,review,done} and run generate_board ".forge/tracking".
ASK via AskUserQuestion with header "Output Compression", question "Forge can compress its output to save tokens and reduce noise. Pick a compression level:", options: "Ultra", "Full", "Lite" (compression levels from most to least aggressive), and "Off" (no compression).
Based on the user's choice:
If Ultra/Full/Lite: Add to forge-config.md:
caveman:
enabled: true
default_mode: {chosen_mode} # ultra | full | lite
Create .forge/caveman-mode with the chosen mode value.
If Off: Do not add a caveman: section (omitting = disabled). If .forge/caveman-mode exists, delete it.
Tell the user: "Output compression set to {mode}. Change anytime with /forge-compress output [mode]."
Present the crosscutting modules detected in Phase 1 and let the user confirm or adjust.
ASK via AskUserQuestion with header "Infrastructure", question "Detected these infrastructure modules. Confirm or adjust:", options: "Accept all" (description: "Write the detected modules as-is") and "Customize" (description: "Adjust detections per category").
If user chooses "Customize": For each detected category, show detected value and allow override. Also present undetected categories that are common for the framework and ask if any should be added.
If user chooses "Accept all" or after customization: Write to forge.local.md under the appropriate config keys:
Ensure .forge/ is gitignored: Check if the project's .gitignore already contains a .forge/ or .forge entry. If not, append it:
# Forge pipeline state (local only, never committed)
.forge/
If .gitignore does not exist, create it with this entry. This prevents pipeline state (lock files, worktrees, checkpoints, tracking, reports) from being accidentally committed.
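The check-then-append can be made idempotent with a whole-line match. A sketch, where ensure_forge_ignored is an illustrative name:

```shell
ensure_forge_ignored() {  # $1 = path to .gitignore
  # Match ".forge" or ".forge/" as a whole line; append only when absent.
  grep -qxE '\.forge/?' "$1" 2>/dev/null && return 0
  printf '\n# Forge pipeline state (local only, never committed)\n.forge/\n' >> "$1"
}
```

Running it a second time is a no-op, so repeated /forge-init runs never duplicate the entry.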