Identify the user's top productivity saboteur (from the PQ framework) and show exactly where it appears in their AI workflows. Use when the user runs /saboteur-scan or asks why they keep sabotaging their own productivity or AI setup.
Identify the user's top 1–2 saboteurs from the Positive Intelligence (PQ) framework and map them to their specific AI workflow costs and fixes.
Present these 10 behavioral patterns as a numbered list. Ask the user to pick the 2–3 that feel most true — especially under stress or in high-stakes moments. You'll go deep on their top 1–2:
For each of their top 1–2 saboteurs (two if they picked two or more, one if they picked only one), output this block:

**[Saboteur name]**
- **How it shows up in your AI workflows:** [from mapping below]
- **What it costs you:** [from mapping below]
- **One immediate fix:** [from mapping below]
- **How the cohort addresses it:** [from mapping below]
| # | Saboteur | AI workflow impact | Cost | Immediate fix | How the cohort addresses it |
|---|---|---|---|---|---|
| 1 | Judge | Rejects AI output too harshly OR accepts it to avoid self-criticism for "wasting time" | Either over-edits everything (doubles work) or under-uses AI because standards are never met | Write one sentence about what you wanted differently. That's your next prompt. | Week 4 retro template separates "output quality" from "what I should have asked for." |
| 2 | Avoider | Uses AI to research and delay the hard thing. Productive-looking procrastination. | Elaborate AI systems get built around tasks that aren't the real problem. | Name the thing you're avoiding. Put it on the calendar in a 20-min block. AI helps you prepare — not replace doing it. | Week 1 includes saboteur-aware planning that explicitly names avoidance before the week starts. |
| 3 | Controller | Doesn't trust AI output, so does it manually anyway. Or supervises AI longer than the task would take. | Automation ROI disappears. | Write the spec before you prompt. Defined output format = less room to deviate = more trust. | Week 3 covers output contracts: prompting for results you can trust without rewriting. |
| 4 | Hyper-Achiever | Uses AI to do more, faster — never slows down enough to build the system. | Lots of production, zero compounding. No templates, no saved prompts, no context docs. | Block 30 min to build one reusable prompt. Treat it as investment, not cost. | The whole cohort is a deceleration: build the system in 4 weeks instead of optimizing output for 40. |
| 5 | Hyper-Rational | Wants to fully understand AI before using it. Tests edge cases. Reads docs. Hasn't shipped. | Analysis paralysis dressed as thoroughness. | Ship one thing with AI this week. Real feedback beats any amount of research. | Week 3 is hands-on: install, configure, run your first automated workflow. No theory. |
| 6 | Hyper-Vigilant | Worried AI will embarrass them. Double-checks everything. Still doesn't quite trust it. | Does 80% of the work anyway because AI "might" make a mistake. | Create a 3–5 item review checklist for AI output. A checklist feels safer than open-ended review. | Week 3 covers building an "immune system" into workflows: systematic checks, not full rewrites. |
| 7 | Pleaser | Uses AI to generate more client deliverables. Never built context for their own system. | AI is well-calibrated to clients' needs; zero calibration to theirs. | This week: build one piece of AI infrastructure for yourself. One context doc. Just for you. | Week 1 is explicitly about you — your personality, your projects, your preferences. Non-negotiable. |
| 8 | Restless | Tried 7 AI tools, read all the newsletters, built 3 systems, abandoned all of them. | Zero compounding. Never sticks long enough for context to accumulate. | One AI tool. 30 days. Boring? Yes. That's how context layers build. | 4 weeks, one system, one setup. Restless pattern named and managed explicitly in Week 1. |
| 9 | Stickler | Prompts are thorough. Expectations are high. Has never shipped an AI workflow because it wasn't ready. | Perfect-spec paralysis. | Write an intentionally incomplete spec. Run it. Edit once. That's a draft. More progress than a perfect spec you never ship. | Week 3 is about iteration, not perfection. You'll ship a working .cursorrules in the session. |
| 10 | Victim | Bad AI output feels like personal failure. Gave it everything it needed. Why didn't it work? | Takes AI inconsistency personally. Less likely to iterate, more likely to quit or complain. | Reframe bad output as bad specs. It's an incomplete input contract — not the AI failing you. | Week 3: output contracts and feedback loops. Bad output becomes data, not defeat. |
In the cohort, your saboteur gets wired into your Cursor rules.
Week 1 is a full personality calibration session — MBTI, Enneagram, Saboteurs, and 3 more assessments — built directly into the AI rules that govern your system.
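As an illustration only (the cohort defines the real template; every rule below is a hypothetical example, not the actual format), a saboteur-aware fragment of a `.cursorrules` file might read like this, here assuming a Stickler-type user:

```
# Saboteur-aware guardrails (hypothetical sketch, not the cohort template)
# Top saboteur: Stickler (perfect-spec paralysis)

- If I ask you to "polish" or "refine" the same draft more than twice,
  remind me that a shipped rough version beats a perfect spec I never run.
- Treat my first spec as intentionally incomplete: run it as-is, then
  offer one round of edits instead of asking for a complete brief.
- When I start rewriting your output wholesale, stop and ask me to state,
  in one sentence, what I wanted differently. That sentence is the next prompt.
```

The point is not the specific rules but the mechanism: the assessment results become standing instructions the AI enforces on every session, instead of advice the user has to remember.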
20 seats. $299. Starts April 28, 2026.