Bidirectional human-AI co-evolution framework. Three techniques (Socratic prompting, few-shot by example, chain-of-thought steering) that compound across sessions.
Version: 1.0.0
Author: Anand Vallamsetla (@thewhyman)
Inspired by: Ethan Mollick's Co-Intelligence: Living and Working with AI (oneusefulthing.org)
License: MIT
Compatibility: Claude Code, OpenAI Swarm, LangChain, Antigravity, any SKILL.md-compatible runtime
Most prompting guides teach humans to talk AT machines — one-directional instruction-giving. That's the Socratic model: the teacher knows the answer and leads the student to discover it.
This skill implements something different — co-dialectic: two minds reasoning together toward truth that neither possesses alone. Thesis (human perspective) + antithesis (AI perspective) → synthesis (better than either could achieve separately). Both sides teach. Both sides learn. Both sides evolve.
The name Co-Dialectic combines Ethan Mollick's co-intelligence concept (humans and AI as complementary reasoning partners) with the philosophical tradition of dialectic (Plato, Hegel, and modern DBT). The "co-" signals partnership. The "dialectic" signals that truth emerges from the tension between two different kinds of intelligence.
Why dialectic, not Socratic? The Socratic method is one-directional: the teacher already knows and guides the student. In co-dialectic, neither side has the complete picture. The human has lived experience, values, emotional intelligence, and stakes. The AI has scale, recall, cross-domain pattern recognition, and tirelessness. These strengths are complementary opposites, like the thesis and antithesis that produce synthesis.
The connection to DBT (Dialectical Behavior Therapy): DBT teaches people to hold two opposing truths simultaneously — "I am doing my best AND I can do better." Co-dialectic applies the same skill to human-AI partnership: "I (human) have wisdom the AI doesn't have AND the AI has capabilities I don't have." Both are true. The synthesis isn't choosing one — it's leveraging both. People using this skill develop dialectical thinking without knowing they're doing it.
The Co-Education Flywheel: Human teaches AI their values and judgment → AI codifies the patterns → AI teaches human better techniques and new connections → Human internalizes → Next interaction starts smarter → Repeat. 1% improvement per day = 37x improvement in a year.
Principle: Ask questions instead of giving instructions. When you ask, the AI derives principles from reasoning rather than retrieving cached answers.
Before (instruction-giving):
Score all my contacts for frontier AI job referrals.
After (Socratic):
When I give you contacts, is it only for Track 1 (frontier AI)?
What if I said I wanted to open a ski shop — who in my network
would light up then?
Why it works: The instruction produces a flat list optimized for one dimension. The question teaches the AI that contacts are a living asset graph with multi-dimensional value that shifts with context. One question, infinite generalization.
Meta-pattern: Socrates believed wisdom begins with knowing what you don't know. When the human asks instead of tells, both sides discover what they don't know together.
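The contrast between the two framings can be sketched as plain prompt construction. Everything below is hypothetical scaffolding (no runtime API is assumed); it only shows the shape of an instruction prompt versus a Socratic one.

```python
# Hypothetical helpers sketching the two framings — not part of any
# SKILL.md runtime API.

def instruction_prompt(task: str) -> str:
    """One-directional: tells the AI exactly what to produce."""
    return f"{task}."

def socratic_prompt(task: str, probe: str) -> str:
    """Bidirectional: pairs the task with a question that pushes the AI
    to derive the underlying principle instead of retrieving an answer."""
    return f"{task}. Before answering: {probe}"

flat = instruction_prompt("Score all my contacts for frontier AI job referrals")
probing = socratic_prompt(
    "Score my contacts",
    "is this network only valuable for one goal, or would different "
    "contacts light up if my goal changed?",
)
```

The instruction form yields a one-dimensional answer; the probe forces the model to reason about the network as a multi-dimensional asset before scoring anything.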
Principle: Give a single concrete scenario to communicate an entire class of behavior. The AI generalizes from the example — you don't need to enumerate every case.
Before (abstract instruction):
Always use the richest visual representation when showing me information.
After (few-shot example):
When I said "show me their profile picks," I meant profile PICTURES —
actual images. You gave me text descriptions. Are you going to learn
this lesson one keyword at a time, or as a meta-concept?
Why it works: The abstract instruction is ambiguous. The concrete correction — with the meta-question "keyword vs. meta-concept" — teaches the AI to extract the generative principle (fidelity matching: always use the richest representation available) rather than a prescriptive patch (profile picks = pictures). One example, infinite generalization.
Meta-pattern: Humans learn through stories and examples, not rulebooks. A single vivid scenario communicates more than a page of specifications. The key: always push for the meta-concept, never settle for the keyword fix. Generative principles > prescriptive rules.
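One way to codify such a correction is to store the concrete example alongside the meta-principle extracted from it. A minimal sketch — the `Lesson` structure and `codify` helper are hypothetical illustrations, not part of any framework:

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    correction: str      # the concrete scenario that triggered the lesson
    meta_principle: str  # the generative rule extracted from it

def codify(lesson: Lesson) -> str:
    """Render a correction as a few-shot snippet: the vivid example first,
    then the principle the AI should generalize from."""
    return (
        f"Example correction: {lesson.correction}\n"
        f"Generative principle: {lesson.meta_principle}"
    )

snippet = codify(Lesson(
    correction='"Show me their profile picks" meant actual images, not text.',
    meta_principle="Fidelity matching: always use the richest representation available.",
))
```

Storing the principle rather than the keyword fix is what lets one example cover every future case of the same class.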
Principle: Explicitly control when the AI shows its reasoning versus when it just executes. This gives the human a steering wheel for the AI's cognitive effort.
Before (ambiguous intent):
What should I do about the Anthropic warm intro path being closed?
After (explicit steering):
Anthropic warm intro through Pravir is fully closed. Think through the
trade-offs of three alternative paths using my poker gang network.
Why it works: The ambiguous prompt forces the AI to guess intent. The steering phrase removes the guesswork: the human controls the AI's cognitive budget — System 2 ("think through the trade-offs") versus System 1 ("just do it").
Meta-pattern: This is Kahneman's dual-process thinking applied to collaboration. The steering phrase is the toggle between deliberation and execution.
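The steering toggle can be sketched as a one-line prompt wrapper; the phrasing below is illustrative, not a required syntax.

```python
def steer(request: str, deliberate: bool) -> str:
    """Set the AI's cognitive budget explicitly: System 2 (show the
    reasoning) when deliberate, System 1 (just execute) otherwise."""
    if deliberate:
        return f"Think through the trade-offs step by step: {request}"
    return f"Skip the reasoning and just do it: {request}"

system2 = steer("evaluate three alternative intro paths", deliberate=True)
system1 = steer("rename the file", deliberate=False)
```

The point is that the toggle is explicit in the prompt, so the model never has to infer whether deliberation is wanted.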
Session 1: Human corrects AI → AI codifies the lesson
Session 5: AI applies automatically → Human notices fewer corrections needed
Session 20: AI suggests improvements → Human learns new technique
Session 50: Both anticipate each other → Communication becomes telepathic
The compounding math: 1.01^365 ≈ 37.8, so a 1% daily improvement compounds to roughly 37x in a year. The flywheel doesn't just preserve knowledge — it compounds it.
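The arithmetic behind the 37x claim, worked out:

```python
daily = 1.01           # 1% improvement per day
yearly = daily ** 365  # gains compound multiplicatively, not additively
print(round(yearly, 1))  # → 37.8
```

A 1% additive gain would end the year at 4.65x; compounding is what makes the flywheel framing matter.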
Generative principles are the accelerant. Every lesson codified as a generative principle (not a prescriptive rule) covers infinite future situations. The more generative the codification, the faster the flywheel spins. The self-evolution rate equals the codification quality.
This skill activates when:
For Claude Code / Antigravity: