Generates three context layer artifacts for a code module: MODULE_MANIFEST.md (structural map — where things connect), BEHAVIORAL_CONTRACTS.md (semantic contracts — what each interface guarantees), and DECISION_LOG.md (philosophical record — why decisions were made, with explicit warnings about what breaks if reversed). Use this skill whenever working on a module that lacks documentation, when the original author has left, before an AI agent modifies an unfamiliar module, when documenting a module after onboarding, when a codebase audit flagged missing context layers, or any time you hear phrases like "document this module", "make this self-describing", "build context layers", "preserve knowledge before the author leaves", or "what does this module do". This skill is especially important for AI-generated code that was never explained by anyone.
You are generating three durable context artifacts for a code module. These artifacts serve a specific purpose: they make knowledge that currently exists only in the original author's head (or nowhere, if they've left) permanently available to the next engineer — human or AI — who works on this module.
The three artifacts address three distinct questions: where does this module connect (MODULE_MANIFEST.md), what does each interface guarantee (BEHAVIORAL_CONTRACTS.md), and why does it look the way it does (DECISION_LOG.md).
The Decision Log's Warning field is the most important thing you'll write. It is what prevents an AI agent from confidently reversing a deliberate architectural decision because the code "looks like it could be simpler."
Before asking the user a single question, gather everything the codebase can tell you automatically. This context will make the interviews faster and more specific.
Run these in parallel:
List the module's files — Glob for all source files in the target directory. Note the entry points (index.ts, main.py, etc.), exported symbols, config files, and test files.
Find what this module imports — Grep for import, require, from statements within the module files. Extract the imported modules/packages. Distinguish: internal project imports vs. external dependencies vs. shared infrastructure (DB clients, cache clients, queue clients).
Find what imports THIS module — Grep the broader codebase for imports of this module's path or package name. These are the dependents — the services and code that break if you change this module's interface.
Find shared resource patterns — Grep for: Redis/cache key patterns, database table names, queue/topic names, S3 bucket references, shared config keys. These are cross-service data flows that won't appear in import graphs.
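As a sketch, the shared-resource greps might look like the following. Everything here is illustrative — the module path, file contents, and patterns like cache: are assumed naming conventions, not a fixed list; tune them to your stack.

```shell
# Self-contained sketch: create a throwaway module, then run the greps.
MOD=$(mktemp -d)/payments
mkdir -p "$MOD"
cat > "$MOD/store.ts" <<'EOF'
const key = `cache:payment:${id}`;          // Redis key pattern
await db.query("SELECT * FROM payments");   // shared table
sqs.sendMessage({ QueueUrl: PAYMENT_QUEUE });
EOF

grep -rnE 'cache:[a-z]+' "$MOD"             # cache key patterns
grep -rnoE 'FROM [a-z_]+' "$MOD"            # raw SQL table names
grep -rniE 'queue|topic|sqs|kafka' "$MOD"   # queue/topic references
grep -rnE 's3://|Bucket' "$MOD"             # object storage references
```

Anything these greps surface is a candidate cross-service data flow to confirm in the interview.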
Read existing documentation — Check for: README.md, existing DECISION_LOG.md, ADRs in docs/adr/ or similar, inline comments explaining "why" (look for // NOTE:, // IMPORTANT:, # TODO:, # HACK:).
Scan git history for warnings — If git is available, run git log --oneline --follow <key-file> to see if there are commit messages mentioning incidents, regressions, or "do not change" notes.
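The import/dependent scans above can be sketched as shell commands. The directory layout and module name (payments) are hypothetical; the point is the shape of each search, not the specific paths.

```shell
# Self-contained sketch of the discovery greps (paths are invented).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/src/payments" "$ROOT/src/checkout"
cat > "$ROOT/src/payments/index.ts" <<'EOF'
import Redis from "ioredis";          // external dependency
import { db } from "../infra/db";     // shared infrastructure client
EOF
cat > "$ROOT/src/checkout/cart.ts" <<'EOF'
import { charge } from "../payments"; // checkout depends on payments
EOF

# 1. What the module imports
grep -rhE '^[[:space:]]*(import|from|const .*require)' "$ROOT/src/payments"

# 2. Who imports THIS module (search the rest of the codebase, exclude the module itself)
grep -rl 'payments' "$ROOT/src" --include='*.ts' | grep -v '/payments/'

# 3. Warning-worthy history (run inside a real repo)
# git log --oneline --follow src/payments/index.ts | grep -iE 'incident|revert|do not'
```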
Present a discovery summary to the user before starting interviews:
Auto-discovered:
• Files: [list]
• Imports from: [internal modules], [external packages], [infrastructure clients]
• Imported by: [list of dependents found]
• Shared resources: [Redis keys / DB tables / queues found]
• Existing docs: [found / not found]
• Notable git history: [any warning-worthy commits]
Starting structural interview. Correct anything that looks wrong.
Goal: Complete and accurate MODULE_MANIFEST.md. You already have the skeleton from auto-discovery. The interview fills in what grep can't find.
Ask these questions in a single message (not one at a time):
Deployment & ownership:
Dependencies (verify and extend auto-discovery): For each dependency you discovered, ask: "Is this a synchronous API call, an async event, a shared DB read, or something else?" The nature of the connection matters — a sync call creates a latency dependency; a shared DB read creates a schema coupling.
If auto-discovery found infrastructure clients (Redis, Postgres, SQS, etc.), ask:
Data sensitivity:
Unknown context rule: If the original author has left and certain connections are uncertain, write it explicitly in the manifest: "Reasoning unknown — original author departed. Treat as load-bearing; do not modify without investigation." Never leave a gap silently.
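A manifest entry written under this rule might look like the sketch below; the table name and connection type are invented for illustration.

```markdown
### Dependency: `billing_ledger` table (shared Postgres)
- Connection type: direct read, bypasses the billing service API
- Reasoning unknown — original author departed.
  Treat as load-bearing; do not modify without investigation.
```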
Goal: Complete BEHAVIORAL_CONTRACTS.md. For each exported interface (function, endpoint, event, job), you need precise behavioral guarantees.
From auto-discovery, you know the interface names. For each one, ask:
Idempotency: Can this be called twice with the same inputs without side effects? (Critical for retry logic and at-least-once delivery)
Failure modes: What are the distinct failure scenarios? For each: does the module return an error, throw, emit an event, or silently succeed? What is the caller's responsibility on failure?
Performance envelope: Typical latency? What degrades it (load, data size, downstream latency)? Are there known slow paths?
Side effects: What does this call change beyond its return value? (DB writes, cache invalidations, events emitted, files written, downstream API calls triggered)
Retry semantics: Safe to retry? With what backoff? Any retry limits enforced by this module?
Data classification: What sensitivity level is the input/output? Does any PII flow through this interface?
Push back on vague answers. If the answer is "it just works normally," ask: "What does 'normally' mean for a caller who sees a 500 — retry once? retry with backoff? dead-letter? alert?" The goal is that a caller who has never spoken to this module's author can implement correct retry and error handling from the contract alone.
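A filled-in contract entry, at the level of precision this pushback is aiming for, might look like the following. The function, error types, event names, and numbers are all invented for illustration.

```markdown
## `chargePayment(orderId, amount)`
- Idempotency: yes — keyed on `orderId`; a duplicate call returns the
  original result and performs no second write.
- Failure modes: gateway timeout → throws `GatewayTimeoutError` (safe to
  retry); card declined → returns `{ declined: true }` (do NOT retry).
- Retry semantics: retry with exponential backoff, max 3 attempts; the
  module enforces no limit itself, so callers must.
- Side effects: writes a `charges` row, emits `payment.charged` event.
- Performance envelope: p50 ~120 ms; degrades with gateway latency.
- Data classification: input contains PII (card token) — never log it.
```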
Goal: DECISION_LOG.md entries that prevent future engineers — especially AI agents — from reversing deliberate decisions.
The most valuable entries are the ones that answer: "Why doesn't this look the way you'd expect?"
Ask:
Alternatives rejected:
Non-obvious constraints:
The danger zones:
The Kiro pattern — specifically ask: "Is there any state in this module (database rows, cache entries, external resources) that must be treated as persistent and must not be deleted or recreated?" If yes, this gets a Warning entry in the Decision Log, bolded.
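A Warning entry produced by this question might look like the sketch below; the IAM role and the ARN coupling are hypothetical examples of persistent state.

```markdown
## Decision: IAM role `app-runtime-role` is created once, then imported
- Warning: **Do not delete or recreate this role.** External accounts
  reference it by ARN; recreating it generates a new ARN and silently
  breaks every cross-account trust policy. Treat the role as persistent
  state, not as reproducible infrastructure output.
```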
For each significant decision, the Warning field is mandatory if any of these are true:
Write all three files at the module root (same directory as the module's source files). If a file already exists, read it first and extend rather than overwrite.
Use the templates in:
• references/module-manifest-template.md
• references/behavioral-contract-template.md
• references/decision-log-template.md
Key rules:
If invoked with a path argument (/context-layer-generator path/to/module), target that directory. If invoked without arguments, ask the user which module to document before proceeding.
Report what was created:
Context layer written:
✓ MODULE_MANIFEST.md — [N] dependencies, [N] dependents, [N] shared resources
✓ BEHAVIORAL_CONTRACTS.md — [N] interfaces documented
✓ DECISION_LOG.md — [N] decisions, [N] with Warning fields
Run /dark-code-audit to see where other modules in this codebase still need context layers.
If any interview questions were left unanswered (the interviewee didn't know, or the original author has left), list them explicitly as open questions at the end of the relevant artifact, marked <!-- OPEN QUESTION: ... -->. These are signals for future investigation, not permission to leave the field blank.
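An open-question marker at the end of an artifact might look like this (the retry discrepancy is an invented example):

```markdown
<!-- OPEN QUESTION: Why does the retry limit differ between the HTTP path
     (3 attempts) and the queue consumer (unlimited)? Original author
     departed before this could be confirmed. -->
```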