Use when crafting prompts for LLMs, implementing chain-of-thought reasoning, structured output generation, or debugging unreliable AI outputs. Meta-protocols that other skills should reference for reasoning and self-correction patterns.
This skill provides meta-protocols that other skills should import or emulate to improve reasoning quality and output reliability.
Use when: Solving complex logic, debugging, or planning architecture.
Instruction: "Think step-by-step. First, analyze the constraints and dependencies. Then, outline your approach. Finally, generate the solution."
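As a minimal sketch (the wrapper function and its name are illustrative, not a fixed API), the instruction above can be applied programmatically by prepending the reasoning scaffold to any task:

```python
def with_chain_of_thought(task: str) -> str:
    """Wrap a task in the step-by-step reasoning scaffold described above."""
    return (
        "Think step-by-step.\n"
        "First, analyze the constraints and dependencies.\n"
        "Then, outline your approach.\n"
        "Finally, generate the solution.\n\n"
        f"Task: {task}"
    )

prompt = with_chain_of_thought("Design a rate limiter for a public API.")
```

Keeping the scaffold in one helper ensures every call site uses identical wording, which makes the pattern easier to A/B test later.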
Use when: Generating critical code, security configs, or valid JSON.
<!-- 🔧 MAINTAINER NOTE: When a second architecture-level section is needed (e.g., tool design, context window management, model selection strategy), extract this section into its own standalone skill (e.g., `ai-app-architecture`). -->
Instruction: "After generating the code, review it against the user's requirements. If you find errors or missing edge cases, correct them before outputting the final block."
Use when: You need deep domain expertise (e.g., Security, Legal, DevOps).
Instruction: "Act as a [Role Name]. You are an expert in [Domain]. You prioritize [Value X] over [Value Y]." Example: "Act as a Principal Site Reliability Engineer. Prioritize system stability and observability over feature velocity."
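The bracketed template lends itself to a small builder function. A sketch (the function name and parameter names are hypothetical):

```python
def role_prompt(role: str, domain: str, prioritize: str, over: str) -> str:
    """Fill the persona template: role, domain, and an explicit value trade-off."""
    return (
        f"Act as a {role}. You are an expert in {domain}. "
        f"You prioritize {prioritize} over {over}."
    )

sre = role_prompt(
    "Principal Site Reliability Engineer",
    "distributed systems operations",
    "system stability and observability",
    "feature velocity",
)
```

Stating the trade-off explicitly ("X over Y") matters more than the title itself: it tells the model how to resolve conflicts between competing goals.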
Use when: The output must be parsed by a script or tool. See patterns/structured-output.md for implementation details.
Instruction: "Output ONLY valid JSON. Do not include markdown formatting (```json) or conversational text. The schema must be: {"key": "value"}."
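Even with this instruction, models sometimes wrap output in markdown fences anyway, so the consuming script should parse defensively. A sketch of such a parser (the function name is illustrative; only the standard-library `json` module is assumed):

```python
import json

def parse_json_output(raw: str) -> dict:
    """Strip markdown fences the model may add despite instructions, then parse."""
    text = raw.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # Drop the opening fence (with its optional "json" tag)...
        lines = lines[1:]
        # ...and the closing fence, if present.
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines)
    return json.loads(text)

parse_json_output('```json\n{"key": "value"}\n```')
```

`json.loads` raises `json.JSONDecodeError` on invalid input, which feeds directly into the error-recovery protocol described later in this skill.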
Use when: Orchestrating complex tasks where simple prompts fail but full context is expensive.
Instruction: "Start with the simplest prompt. If the output fails validation or lacks depth, escalate to a constrained prompt, then a reasoning prompt, and finally a few-shot prompt with examples." Reference: patterns/progressive-disclosure.md
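The escalation ladder can be sketched as a loop over prompts ordered from cheapest to most expensive. Here `generate` and `validate` are caller-supplied callables, stand-ins for a real model call and a real output check:

```python
def run_with_escalation(prompts, generate, validate):
    """Try prompts from simplest to most expensive; return the first valid output."""
    last = None
    for prompt in prompts:
        last = generate(prompt)
        if validate(last):
            return last
    return last  # every rung failed; the caller decides how to handle it

ladder = ["simple prompt", "constrained prompt", "reasoning prompt", "few-shot prompt"]
```

The point of the ladder is cost control: most requests succeed on the first rung, so the expensive few-shot context is paid for only when it is actually needed.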
Use when: Reliability is paramount and you need to ensure specific criteria are met.
Instruction: "After generating the response, verify it meets ALL these criteria:
- Directly addresses the original request
- Contains no factual errors
- Uses proper formatting
If verification fails, revise before outputting."
Use when: Handling potentially malformed outputs or ambiguous requests.
Instruction: "If the output is invalid (e.g., malformed JSON), catch the error and retry with a simplified prompt or an explicit correction instruction citing the error."
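A sketch of this retry loop for the malformed-JSON case, where `generate` is again a caller-supplied stand-in for the model call and the correction prompt cites the exact parse error back to the model:

```python
import json

def generate_json_with_retry(prompt, generate, max_retries=2):
    """Parse model output as JSON; on failure, retry with an explicit correction."""
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        raw = generate(attempt_prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            attempt_prompt = (
                f"{prompt}\n\nYour previous output was invalid JSON ({err}). "
                "Output ONLY corrected, valid JSON."
            )
    raise ValueError("model never produced valid JSON")
```

Citing the concrete error (rather than just saying "try again") gives the model something specific to fix, which is why this usually converges in one retry.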
Use when: Designing how an AI-powered application structures its instructions across layers.
AI applications have multiple places to put instructions. Choosing the wrong layer leads to bloated contexts, inconsistent behavior, or lost knowledge. Use this protocol to decide where each instruction belongs.
| Layer | Loaded | Purpose | Context Cost |
|---|---|---|---|
| System Prompt | Every conversation | Identity, universal rules, persona | Always consuming tokens |
| SKILL.md | On-demand, when relevant | Specialized procedures & domain expertise | Only when activated |
| Runtime Context | Per-request, dynamically | User data, session state, tool outputs | Varies per request |
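The table above can be read as an assembly order. A sketch of how the three layers compose into one request context (function and argument names are illustrative):

```python
def assemble_context(system_prompt, active_skills, runtime_context):
    """Compose the layers: system prompt always, skills on-demand, data per-request."""
    parts = [system_prompt]
    parts.extend(active_skills)  # only skills activated for this task cost tokens
    if runtime_context:
        parts.append(f"Context:\n{runtime_context}")  # varies per request
    return "\n\n".join(parts)

ctx = assemble_context(
    "You are a concise coding assistant.",   # every conversation
    ["SKILL: git-wizard procedures"],        # activated because the task is git-related
    "user branch: feature/login",            # session state
)
```

Note the asymmetry: the system prompt is a fixed cost on every request, so everything that can move down a layer should.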
Put it in the System Prompt when: it is identity, persona, or a universal rule that applies to every conversation.
Put it in a SKILL.md when: it is a specialized procedure or domain expertise needed only for certain tasks.
Put it in Runtime Context when: it is user data, session state, or a tool output that varies per request.
Stuffing everything into the system prompt is the most common mistake: every request then pays tokens for specialized instructions that most conversations never use.
Fix: Extract specialized instructions into skills. Keep the system prompt lean: identity, universal rules, and just enough context for routing.
See patterns/few-shot-selection.md for choosing few-shot examples. When building new skills (like git-wizard or system-design-architect), explicitly reference these protocols in their instructions.