Use this skill whenever the user provides a plan file (markdown, spec, or design doc) and asks you to implement it. Triggers on phrases like "implement this", "build this from the plan", "let's do this", "implement @<file>", or any request that combines a reference to a plan/spec file with an instruction to execute it. The skill enforces disciplined implementation: thorough code review before touching anything, structured doc updates, per-chunk test split approval, frequent check-ins, GitHub pushes with CI awareness, and hard stops when things are ambiguous or broken. Use it even if the user's request is casual — "lets do so" paired with a plan file reference is a clear trigger.
You are implementing a software project from a plan file. Your job is to execute faithfully, test constantly, communicate clearly, and never go silent for long.
Complete all substeps fully before writing a single line of code.
Read the entire plan file before doing anything else. Do not skim.
Follow ~/.claude/skills/phase-code-review.md exactly. That skill owns the review format,
the cache-check logic (SHA-keyed against code.md), and when to skip vs. re-run. Do not
inline or duplicate its logic here.
After the code review, explicitly note:
Add this cross-reference section to code.md.
If anything in the plan is unclear, underspecified, or contradictory — stop and ask the user before writing a single line of code. Do not make assumptions on ambiguous specs. This is a hard rule. Only proceed when you have clarity.
Examples of things that require a stop:
Examples of things you can resolve yourself (no need to ask):
Read .github/workflows/ carefully — understand what each workflow triggers on, what it tests or builds, and how to read pass/fail with gh run list / gh run view <run-id>.
Update the following files based on your understanding of the plan and codebase. Create any that do not exist.
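The CI-reading step above can be sketched as a short command sequence. This is only a sketch: run ids and workflow names differ per repo, and it assumes gh is installed and authenticated.

```shell
# List the most recent workflow runs for the current branch
gh run list --branch "$(git branch --show-current)" --limit 5

# Inspect one run; --exit-status makes the command exit non-zero if the run failed
gh run view <run-id> --exit-status

# Show only the logs of the failing steps
gh run view <run-id> --log-failed
```

The <run-id> placeholders come from the first column of the gh run list output.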
CLAUDE.md — Add or update:
ARCH.md — Add or update:
TODO.md — Replace or update with:
After updating all three files, hard stop and ask the user:
"I've completed the code review (written to
code.md) and updated CLAUDE.md, ARCH.md, and TODO.md based on the plan. Do you want to add any context, correct anything, or adjust scope before I begin implementation?"
Wait for the user's response before proceeding to Step 2.
Decompose the plan into ordered, testable implementation chunks. Each chunk should be small enough that you can implement it, test it, and describe what you did in a few sentences.
For each chunk, before implementing it, present all of the following and wait for explicit user approval before writing any code for that chunk:
What code will be written: List every file you intend to create or modify and describe exactly what changes you will make. Call out any risky or irreversible changes.
Recommended test split: State clearly which tests you recommend writing as local unit tests vs. CI workflow checks, and why. Use this exact format:
"I recommend writing X as a local unit test because [reason — e.g., fast, no external deps, tests logic in isolation]. I recommend writing Y as a CI workflow check because [reason — e.g., requires secrets, cross-platform, or duplicates local coverage if run locally]. Approve this split or redirect me."
The goal is to keep local tests and CI complementary, not overlapping. Local tests cover logic fast; CI handles environment-sensitive checks, cross-platform builds, publishing steps, or anything requiring infra not available locally. Never duplicate the same test in both layers.
Do not present all chunks at once and ask for a single bulk approval. Present one chunk, get approval, implement it, check in, then present the next chunk. This keeps each approval decision concrete and grounded.
Once the user approves a chunk's code plan and test split, implement it. Follow the project's existing code conventions (from CLAUDE.md). Work through TODO.md top to bottom, marking items complete as you go.
After implementing each chunk, deliver a check-in using the format below, push to GitHub if tests pass, check CI status, then present the next chunk for approval (back to Step 2).
Every check-in must follow this exact structure. No freeform updates.
## Check-in [N]
**Just implemented:** <1–3 sentences on what changed>
**Test / CI status:**
- Local tests: PASS / FAIL / SKIPPED (with reason)
- CI: <workflow name> — PASS / FAIL / PENDING / NOT YET PUSHED
- If anything is FAIL: see Blockers below
**Blockers / open questions:**
- <list any blockers, or "None">
**Up next:** <what the next chunk is>
Never skip any field. If tests haven't run yet, say so explicitly.
The rule of thumb: if you've been implementing without talking to the user for what feels like "a while", you've waited too long. Err heavily on the side of more check-ins, not fewer.
Push after each chunk that passes tests. Don't batch up multiple chunks before pushing. Use descriptive commit messages that reference the plan section being implemented.
Commit message format:
[plan: <section name>] implement <what you did>
- bullet of key change
- bullet of key change
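As a concrete sketch, committing one chunk in this format might look like the following. The plan section "auth middleware" and the file name are hypothetical, and the throwaway repo exists only to keep the demo self-contained.

```shell
set -e
repo="$(mktemp -d)"               # throwaway repo so the demo is self-contained
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"

echo "stub" > auth.py             # stand-in for the chunk's actual changes
git add auth.py

# Subject references the plan section; each extra -m becomes a body bullet
git commit -q \
  -m "[plan: auth middleware] implement token validation" \
  -m "- add JWT signature check" \
  -m "- reject expired tokens with 401"

git log -1 --pretty=%B            # show the resulting commit message
```

Each additional -m flag becomes its own paragraph in the commit body, which keeps the bullets readable in git log and on GitHub.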
Never push with failing local tests. Never push without first checking CI status on the previous push.
Check CI status before starting the next chunk. If CI is running, note it as PENDING in your check-in. If CI fails, stop, report it in your check-in, and wait for the user, per the stop conditions below.
Use CI as a signal, not a formality. If the workflow tests something your local run doesn't cover, CI failure is a real blocker.
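One way to read these rules together is the push gate sketched below. This is a sketch, not a prescribed script: it assumes pytest as the local test runner, an authenticated gh, and jq installed; substitute your project's actual commands.

```shell
# Sketch of the push gate: local tests must pass, and the previous push's CI
# must not be failing, before anything new goes to GitHub.
if ! pytest -q; then
  echo "Local tests failing — fix before pushing" >&2
  exit 1
fi

last_conclusion="$(gh run list --limit 1 --json conclusion --jq '.[0].conclusion')"
if [ "$last_conclusion" = "failure" ]; then
  echo "Previous CI run failed — resolve before pushing new chunks" >&2
  exit 1
fi

git push origin HEAD
```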
Stop and ask the user in these situations. Do not attempt to work around them:
| Situation | Action |
|---|---|
| Ambiguous spec | Ask before implementing |
| Test failure (local or CI) | Stop, report, wait |
| Conflict between plan and existing code | Ask which wins |
| A step would require deleting or restructuring existing files | Ask first |
| You're unsure if a dependency is safe to add | Ask |
| Plan references something that doesn't exist in the repo | Ask |
| About to write tests for a chunk | Present test split recommendation and wait for approval |
When the full plan is implemented, write a LOG.md entry in the same directory as the plan file (create it if it does not exist). Append one entry per session using the format below. Do not declare "done" if any tests are failing or CI is red.
## [YYYY-MM-DD] <Plan title or short description>
### Features Implemented
- <bullet per feature>
### Files Changed
| File | What changed |
|------|-------------|
| path/to/file | Description of change |
### Functions Written
| Function | File | Description |
|----------|------|-------------|
| function_name | path/to/file | What it does |
### Data Structures Created
| Name | File | Description |
|------|------|-------------|
| StructName | path/to/file | Fields and purpose |
### Notes
- Any caveats, known issues, or follow-up items
Be specific. The LOG.md is used for context compaction — it must be self-contained enough that a future session can understand what was done without reading the code.
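Appending an entry can be as simple as a heredoc next to the plan file. A minimal sketch, with an invented date and plan title, and a temp directory standing in for the plan file's directory:

```shell
plan_dir="$(mktemp -d)"    # stand-in for the directory containing the plan file
# >> creates LOG.md if it does not exist, and appends otherwise
cat >> "$plan_dir/LOG.md" <<'EOF'
## [2024-05-01] Auth middleware plan
### Features Implemented
- JWT validation on protected routes
### Notes
- None
EOF
grep -c '^## \[' "$plan_dir/LOG.md"    # count of session entries so far
```

Because each session header starts with "## [", the grep count doubles as a quick check of how many sessions the log records.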