# Validate task approach and requirements before execution
Before executing the user's request, run these validation checks to catch common failure patterns.
**If task involves:** "analysis", "plan", "optimize", "recommend", "improve", "audit", "review"

**Action:** Verify that real data exists and gather it before analyzing; never analyze from assumptions.

Example:

```
⚠️ This task requires data gathering first.

Data needed:
- Performance metrics from Google Ads (last 30 days)
- Current SKU content from Supabase (generated_content table)
- Approval rates by category

Approach:
1. Query database for real data
2. Present summary for verification
3. THEN proceed with analysis using verified data

Proceed with data gathering?
```
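The data-first gate above can be sketched as a simple guard. The dataset names are illustrative stand-ins for the real Google Ads and Supabase sources:

```python
# Minimal sketch of the data-first gate: refuse to analyze until every
# required dataset has actually been gathered. Names are illustrative.

REQUIRED_SOURCES = [
    "google_ads_metrics_30d",
    "generated_content_rows",
    "approval_rates_by_category",
]

def missing_sources(gathered: dict) -> list[str]:
    """Return the required datasets that are absent or empty."""
    return [name for name in REQUIRED_SOURCES if not gathered.get(name)]

def ready_for_analysis(gathered: dict) -> bool:
    """Analysis may start only when nothing is missing."""
    return not missing_sources(gathered)
```

The point is the ordering: the guard runs before any analysis step, so conclusions can only be drawn from verified data.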
**If task involves:** Spawning agents (Task tool) + database/MCP operations

**Action:** Run MCP queries in the main context, save results to /tmp/, pass file paths to agents.

Example:

```
⚠️ This task spawns agents that need MCP data.

Option A (Recommended):
- I run MCP queries here in main context
- Save results to /tmp/agent-data/
- Spawn agents with file paths

Option B:
- Spawn agents with explicit ToolSearch instructions
- Each agent loads its own MCP tools

Which approach do you prefer?
```
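Option A can be sketched as a small staging helper; the directory layout and function name are illustrative, not part of any MCP API:

```python
import json
from pathlib import Path

DATA_DIR = Path("/tmp/agent-data")

def stage_for_agents(name: str, rows: list[dict]) -> str:
    """Write query results to disk; the returned path is what gets
    handed to a spawned agent instead of raw MCP access."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    path = DATA_DIR / f"{name}.json"
    path.write_text(json.dumps(rows, indent=2))
    return str(path)
```

Each agent then reads its file with plain file tools, so no MCP tool loading has to happen inside the agent.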
**If task includes:** "deploy", "push", "commit", "merge", "ship"

**Action:** build → lint → test → push

Example:

```
✅ Deployment workflow verified:
1. Make code changes
2. Run local build (npm run build / pytest)
3. Fix any errors
4. Run linter
5. THEN git push

This is included in the plan.
```
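A minimal sketch of that gate, assuming npm-style commands (swap in pytest or your own scripts as needed):

```python
import subprocess

# Illustrative gate commands -- adjust to the project's real scripts.
GATES = [
    ["npm", "run", "build"],
    ["npm", "run", "lint"],
    ["npm", "test"],
]

def run_gates(commands=GATES) -> bool:
    """Run each gate in order; a non-zero exit stops the chain
    so git push never happens on a broken build."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            print(f"Gate failed: {' '.join(cmd)} -- fix before pushing")
            return False
    return True
```

Only when `run_gates()` returns True does the push step run.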
**If task seems complex:** >10 steps, multiple phases, deep research

**Action:** Propose breaking the work into phases with checkpoint files.

Example:

```
⚠️ Complex task detected (estimated 15+ steps)

Risk: Context overflow mid-execution

Recommendation:
- Break into 2-3 phases
- Write checkpoint files after each phase
- OR plan to checkpoint at ~60% progress

Proceed with phased approach or continue in one session?
```
**If task involves:** Writing SQL queries, database operations

**Action:** Read the documented schema before writing any query.

Example:

```
✅ Database query workflow:
1. Read docs/database/SCHEMA.md for table structure
2. Verify column names and types
3. Write query using documented schema
4. Test query

This prevents column name errors.
```
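Steps 1-3 amount to checking requested columns against the documented schema. A sketch with a made-up schema fragment (the real source of truth is docs/database/SCHEMA.md):

```python
# Illustrative schema fragment -- the column names here are invented
# for the example; the real schema lives in docs/database/SCHEMA.md.
DOCUMENTED_SCHEMA = {
    "generated_content": {"sku", "title", "description", "approved_at"},
}

def unknown_columns(table: str, columns: list[str]) -> set[str]:
    """Return requested columns that the documented schema does not list."""
    return set(columns) - DOCUMENTED_SCHEMA.get(table, set())
```

Any non-empty result means the query would fail (or silently hit the wrong column) and should be corrected before running.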
**If task involves:** Scripts, new files, tools

**Action:** Verify which stack the project uses and where existing utilities live before creating files.

Example:

```
✅ Stack verification:
- Project uses Python for scripts (pyproject.toml found)
- TypeScript for frontend (dashboard/tsconfig.json)
- Existing utilities in: src/lib/, dashboard/src/lib/

Will use Python for this script task.
```
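Stack verification can be sketched as a marker-file check; the marker files match the ones named above, while the function itself is hypothetical:

```python
from pathlib import Path

def pick_script_language(root: Path) -> str:
    """Guess the project's scripting language from marker files."""
    if (root / "pyproject.toml").exists():
        return "python"
    if (root / "dashboard" / "tsconfig.json").exists():
        return "typescript"
    return "unknown"
```

An "unknown" result is itself a useful pre-flight finding: it means the stack question should go back to the user instead of being guessed.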
Present findings as a structured report:

```
## Pre-Flight Check Results

✅ **Ready to proceed:** [aspects that look good]

⚠️ **Recommendations:**
- [suggestion 1]
- [suggestion 2]

🛑 **Blockers/Risks:**
- [blocker 1 if any]

**Proposed Approach:**
[Brief outline of how you'll execute based on validations]

Proceed as planned, adjust based on recommendations, or discuss approach?
```
**Skip this validation for:**
**Recommended usage:**