Control AI output through structured specifications: define roles, scope, format, decision rules, and abstraction levels. Prompts become contracts, not requests.
AI Instruction Design is Layer 3 of AI fluency—the ability to control output quality through structured instruction rather than trial-and-error prompting. This isn't about "prompt hacks" but about treating prompts as specifications.
Core Principle: Your prompts should look like specifications, not requests.
Fluency Signal: Produce stable outputs across multiple runs and models.
Structured instructions have five components:
**1. Role**

What it does: Establishes the perspective, expertise, and behavior the AI should adopt.

Examples:

```
Role: You are a senior technical writer with expertise in API documentation.
Role: Act as a skeptical reviewer looking for weaknesses in arguments.
Role: You are a data analyst explaining findings to non-technical stakeholders.
```

Common mistake: Generic roles ("You are a helpful assistant") add no value.
**2. Scope**

What it does: Defines what is in and out of bounds.

Example:

```
Scope:
- Focus only on the authentication module
- Do not suggest architectural changes
- Assume the existing API contracts cannot change
- Ask if you need clarification on any requirement
```

Common mistake: Unbounded scope leads to unfocused output.
**3. Format**

What it does: Specifies the output structure.

Example:

```
Format your response as:

## Summary (2-3 sentences)
## Key Findings (bulleted list, max 5 items)
## Recommendations (numbered, with rationale for each)
## Risks (table with columns: Risk | Likelihood | Impact | Mitigation)
```

Common mistake: Accepting prose when structure would be more useful.
**4. Decision Rules**

What it does: Specifies how the AI should handle judgment calls.

Example:

```
Decision rules:
- Prioritize accuracy over comprehensiveness
- Only include findings with >80% confidence
- If uncertain, state the uncertainty rather than guessing
- Flag any recommendation that requires >$10K investment
```

Common mistake: Leaving judgment calls implicit.
**5. Abstraction Level**

What it does: Sets the level of detail and technical depth.

Example:

```
Abstraction level:
- Write for a technical audience familiar with Python but new to async programming
- Include code examples for each concept
- Skip basic syntax explanations
- Explain non-obvious design decisions
```

Common mistake: A mismatched abstraction level wastes time on basics or confuses the reader with unexplained complexity.
Specification template:

```
## Objective
[What you want to achieve]

## Role
[Who the AI is acting as]

## Context
[Background information needed]

## Scope
- In scope: [what to address]
- Out of scope: [what to exclude]

## Format
[Output structure requirements]

## Decision Rules
[How to handle judgment calls]

## Quality Criteria
[What makes the output acceptable]

## Examples
[If helpful, show desired output style]
```
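A specification template like this can also be assembled programmatically. A minimal sketch, assuming a helper of your own (`build_spec` and the section names below are illustrative, not a standard API):

```python
def build_spec(sections: dict[str, str]) -> str:
    """Render named specification sections into a markdown prompt."""
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections.items())

spec = build_spec({
    "Objective": "Summarize the attached incident report for executives.",
    "Role": "You are a site-reliability engineer writing for non-technical leadership.",
    "Scope": "- In scope: root cause, customer impact\n- Out of scope: internal blame",
    "Format": "Three bullet points, each one sentence.",
})
```

Keeping the sections in a dict makes each one easy to review and revise independently, which matches the spec-not-request mindset.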
Write prompts as if they're contracts:
```
GIVEN:
- [Input/context you're providing]
- [Assumptions that apply]

WHEN:
- [The task to perform]

THEN:
- [Expected output format]
- [Quality standards]
- [Constraints to respect]
```
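The GIVEN/WHEN/THEN structure is easy to generate from code. A sketch under the assumption that you keep the three parts as plain lists (the function name is hypothetical):

```python
def contract_prompt(given: list[str], when: str, then: list[str]) -> str:
    """Assemble a GIVEN/WHEN/THEN contract-style prompt."""
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)
    return (f"GIVEN:\n{bullets(given)}\n\n"
            f"WHEN:\n- {when}\n\n"
            f"THEN:\n{bullets(then)}")

prompt = contract_prompt(
    given=["A CSV of monthly sales", "Figures are in USD"],
    when="Identify the three largest month-over-month changes",
    then=["A markdown table", "One sentence of interpretation per row"],
)
```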
For repeatable tasks, create fill-in templates:
```
Analyze [DOCUMENT_TYPE] for [PURPOSE].

Focus on:
- [FOCUS_AREA_1]
- [FOCUS_AREA_2]
- [FOCUS_AREA_3]

Output format: [FORMAT_SPECIFICATION]

Constraints:
- [CONSTRAINT_1]
- [CONSTRAINT_2]
```
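Fill-in templates map directly onto Python's `string.Template`, whose `substitute()` raises `KeyError` when a placeholder is left unfilled, so an incomplete prompt never reaches the model. The template text below is an illustrative sketch:

```python
from string import Template

ANALYZE = Template(
    "Analyze $doc_type for $purpose.\n"
    "Focus on:\n- $focus_1\n- $focus_2\n"
    "Output format: $output_format"
)

prompt = ANALYZE.substitute(
    doc_type="a vendor contract",
    purpose="renewal risk",
    focus_1="auto-renewal clauses",
    focus_2="termination penalties",
    output_format="bulleted list with clause references",
)
```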
Take a vague prompt and transform it:
Before:

```
Review this code and tell me what's wrong
```

After:

```
Review this Python code as a senior developer focused on:
- Security vulnerabilities (SQL injection, input validation)
- Performance issues (O(n²) or worse operations)
- Maintainability concerns

For each issue found:
- State the problem in one sentence
- Show the problematic code snippet
- Provide a corrected version
- Rate severity: Critical/High/Medium/Low

If no issues found in a category, explicitly state that.
```
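A structured review prompt like this can live as a constant and be combined with whatever code is under review. A sketch, assuming a simple wrapper of your own (`review_prompt` is hypothetical):

```python
# The fixed review specification; only the code under review varies.
REVIEW_SPEC = """Review this Python code as a senior developer focused on:
- Security vulnerabilities (SQL injection, input validation)
- Performance issues (O(n²) or worse operations)
- Maintainability concerns

For each issue found:
- State the problem in one sentence
- Show the problematic code snippet
- Provide a corrected version
- Rate severity: Critical/High/Medium/Low

If no issues found in a category, explicitly state that."""

def review_prompt(code: str) -> str:
    """Attach the code under review to the fixed review specification."""
    return f"{REVIEW_SPEC}\n\nCode to review:\n{code}"
```

Because the specification is fixed, review quality stops depending on how carefully each teammate phrases the request.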
Define the output schema before writing the prompt.
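One way to make the schema concrete in plain Python, sketched with illustrative field names; define it first, then write the prompt to match:

```python
# Expected output schema, written down before the prompt is.
FINDING_SCHEMA = {
    "problem": str,   # one-sentence statement of the issue
    "severity": str,  # Critical/High/Medium/Low
    "fix": str,       # corrected approach
}

def validate(finding: dict) -> list[str]:
    """Return a list of schema violations (empty means valid)."""
    errors = [f"missing field: {k}" for k in FINDING_SCHEMA if k not in finding]
    errors += [f"wrong type for {k}" for k, t in FINDING_SCHEMA.items()
               if k in finding and not isinstance(finding[k], t)]
    return errors
```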
When output is wrong, revise the specification rather than re-prompting. This builds transferable skill; "try again" doesn't.
- Most important → first (prime position)
- Context → before tasks (available when needed)
- Format → before asking for output (shapes generation)
- Examples → near the end (concrete reference)
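This ordering can be enforced mechanically when assembling a prompt from named sections. A minimal sketch (the section names and `assemble` helper are assumptions, not a real API):

```python
# Canonical ordering: most important first, examples near the end.
SECTION_ORDER = ["objective", "context", "format", "task", "examples"]

def assemble(sections: dict[str, str]) -> str:
    """Emit sections in the canonical order, skipping any that are absent."""
    return "\n\n".join(sections[name] for name in SECTION_ORDER if name in sections)

prompt = assemble({
    "task": "List three risks in the attached plan.",
    "objective": "Risk review for the Q3 launch.",
    "format": "Numbered list, one sentence each.",
})
# Sections come out objective, then format, then task, regardless of input order.
```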
For complex tasks, break into clear sections:
```
# ROLE
[Role definition]

# TASK OVERVIEW
[High-level objective]

# DETAILED REQUIREMENTS

## Part 1: [First subtask]
[Specific requirements]

## Part 2: [Second subtask]
[Specific requirements]

# OUTPUT SPECIFICATIONS
[Format and structure]

# QUALITY STANDARDS
[Success criteria]
```
Layer 3 is complete when your prompts produce stable, specification-conforming outputs across multiple runs and models.
Wrong: "Summarize this article"
Right: "As a news editor writing for busy executives, summarize this article in 3 bullet points focusing on business implications"

Wrong: "List the key points"
Right: "List exactly 5 key points, each as a single sentence starting with an action verb, in priority order"

Wrong: "Focus on what's important"
Right: "Focus on points that would change a reader's decision; exclude background information and context they likely already know"

Wrong: "Write a good summary"
Right: "Write a summary that passes this test: someone who only reads the summary should be able to explain the main argument to a colleague"
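Specifications this precise are also mechanically checkable. A sketch that tests the "exactly 5 key points, each a single sentence" rule (the checks are illustrative and not exhaustive; the action-verb rule would need more than string inspection):

```python
def check_key_points(output: str) -> list[str]:
    """Check the 'exactly 5 key points, one sentence each' specification."""
    points = [line for line in output.splitlines() if line.startswith("- ")]
    errors = []
    if len(points) != 5:
        errors.append(f"expected 5 points, got {len(points)}")
    # A single sentence ends with at most one period, at the very end.
    errors += [f"not a single sentence: {p}" for p in points
               if p.rstrip(".").count(".") > 0]
    return errors
```

Running a check like this after each generation turns "is the output acceptable?" from a judgment call into a pass/fail result.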