# Prompt Writing Best Practices

Use when creating or improving prompts for LLM agents. These patterns help you create effective prompts for AI agent systems.
## Zero-Shot Prompting

Ask the model to perform a task without any examples.

```
Task: Classify the sentiment of this review as positive or negative.
Review: "This product exceeded my expectations!"
```
## One-Shot Prompting

Provide a single example to show the expected format or behavior.

```
Task: Classify the sentiment as positive or negative.

Example:
Review: "This product is terrible."
Sentiment: negative

Now classify:
Review: "Love this product!"
Sentiment:
```
## Few-Shot Prompting

Provide 2-5 examples to demonstrate the pattern.

```
Task: Extract the main entity and its sentiment.

Examples:
- "Apple released new iPhone" -> Entity: Apple, Sentiment: neutral
- "Microsoft stock surged 5%" -> Entity: Microsoft, Sentiment: positive
- "Tesla recalled vehicles" -> Entity: Tesla, Sentiment: negative

Now extract:
- "Google launched AI assistant"
```
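When examples are stored as data, the few-shot layout above can be assembled programmatically. A minimal sketch, assuming a hypothetical `build_few_shot_prompt` helper (illustrative, not part of any library):

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task statement, worked examples, then the new input."""
    lines = [f"Task: {task}", "", "Examples:"]
    for text, result in examples:
        lines.append(f'- "{text}" -> {result}')
    lines += ["", "Now extract:", f'- "{query}"']
    return "\n".join(lines)
```

Keeping examples in a list makes it easy to grow from few-shot to many-shot without rewriting the prompt text.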
## Many-Shot Prompting

Use 6+ examples for complex or nuanced tasks; the larger example set helps the model understand edge cases.
## Chain-of-Thought Prompting

Ask the model to show its reasoning step by step.

```
Task: If there are 5 birds on a fence and you shoot 1, how many are left?

Think step by step:
1. Starting count: 5 birds
2. Action: shoot 1 bird
3. The shot may cause all birds to fly away
4. Answer: 0 (or 1 if the bird died and others stayed)
```
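Eliciting this behavior is often just a matter of appending an explicit instruction to the task. A minimal sketch (the helper name is illustrative):

```python
def with_chain_of_thought(task: str) -> str:
    """Wrap a task with an instruction to reason step by step before answering."""
    return (
        f"Task: {task}\n\n"
        "Think step by step, numbering each step, then give your final answer."
    )
```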
## Tree-of-Thought Prompting

Explore multiple reasoning paths for complex decisions.

```
Task: Find the most efficient route from A to D (via B and/or C).

Explore different paths:
- Path 1: A -> B -> D (cost: 10)
- Path 2: A -> C -> D (cost: 8)
- Path 3: A -> B -> C -> D (cost: 12)

Best path: A -> C -> D
```
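The selection step above is just a minimum over candidate path costs. With the costs from the example:

```python
# Candidate paths and their costs, taken from the example above.
paths = {
    "A -> B -> D": 10,
    "A -> C -> D": 8,
    "A -> B -C -> D".replace("-C", "> C"): 12,  # "A -> B -> C -> D"
}

# The best path is the one with the lowest cost.
best_path = min(paths, key=paths.get)
```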
## Structuring Task Descriptions

When describing a task, include step-by-step instructions:

```
# Task: [Task Name]

## Objective
[Brief description of what to achieve]

## Process Steps
1. [First step - what to do]
2. [Second step - what to do]
3. [Third step - what to do]
...

## Output Requirements
- [Format requirement 1]
- [Format requirement 2]

## Examples
Example: [input] -> [expected output]
```
For example:

```
# Task: Code Review

## Objective
Review Python code for quality, security, and best practices.

## Process Steps
1. Read the code file to understand the implementation
2. Check for security vulnerabilities (hard-coded secrets, injection risks)
3. Verify code quality (type hints, docstrings, error handling)
4. Identify test coverage gaps
5. Provide specific recommendations with line numbers

## Output Format
- Critical Issues: [list]
- High Priority: [list]
- Medium Priority: [list]
- Suggestions: [list]

## Example
Code: def connect_db(password="hardcoded"): ...
Output:
- Critical Issues: Hard-coded password on line 1
```
## General Prompt Structure

A complete prompt typically includes these sections:

```
# Role/Identity
You are [role] with expertise in [domain].

# Task
[Clear description of what to do]

# Context
[Background information needed]

# Constraints
- [Constraint 1]
- [Constraint 2]

# Output Format
[Exact format specification]

# Examples
Example input: ...
Example output: ...
```
## Externalizing Prompts

Always externalize prompts to separate files:

```
prompts/
├── system.md              # Main system prompt
├── tool_instructions/     # Tool-specific instructions
│   ├── web-search.md
│   └── code-execution.md
└── templates/             # Reusable templates
    ├── summary.md
    └── analysis.md
```
Load them at runtime:

```python
from pathlib import Path

PROMPT_DIR = Path("prompts")


def load_prompt(name: str, **kwargs: str) -> str:
    """Load and format a prompt template."""
    template = (PROMPT_DIR / f"{name}.md").read_text()
    return template.format(**kwargs)
```
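A usage sketch, assuming a hypothetical `summary.md` template (written to a temporary directory here so the example is self-contained; in practice the file would live under `prompts/`):

```python
import tempfile
from pathlib import Path

# Hypothetical template content; a real summary.md would live in prompts/templates/.
prompt_dir = Path(tempfile.mkdtemp())
(prompt_dir / "summary.md").write_text(
    "Summarize the following {content_type} in {style} style."
)

def load_prompt(name: str, **kwargs: str) -> str:
    """Load a prompt template from prompt_dir and fill in its placeholders."""
    template = (prompt_dir / f"{name}.md").read_text()
    return template.format(**kwargs)

prompt = load_prompt("summary", content_type="article", style="bullet-point")
# prompt == "Summarize the following article in bullet-point style."
```

Because templates use `str.format` placeholders, a missing keyword argument raises `KeyError` immediately, which catches template/caller mismatches early.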