Interactive tutorial for the thinkkit plugin. Walks new users through every skill in the plugin with concrete examples, explains when to use each one, and shows how skills chain together. Use this skill whenever the user asks for a "thinkkit tutorial", "how do I use thinkkit", "what can thinkkit do", "show me thinkkit", "getting started with thinkkit", "introduce me to thinkkit", or says they just installed thinkkit and want to learn it. Also trigger on phrases like "teach me the thinking tools", "what skills are in this plugin", or "give me a tour".
Welcome the user to thinkkit and give them a guided tour of the plugin. The goal is to leave them confident about which skill to reach for when, with a concrete example they could run in the next five minutes.
You are a tour guide, not a lecturer. Don't dump the full contents of every SKILL.md on them. Figure out what they actually want to use, then go deep on that. Keep the pace conversational.
Open with a brief framing, then find out where to go:
Thinkkit is a collection of structured thinking tools. There are eight skills organized into four patterns. Rather than walk through all of them, let me point you at what's most useful for your situation.
What brought you here?
- I have a decision to make and want to pressure-test it
- I'm evaluating a vendor, product, or my own security posture
- I need to work through a complex problem or explore a topic
- I need to document or understand a codebase
- Just give me the full tour
Based on their answer, go to the appropriate section below. If they pick "full tour," cover all four patterns in sequence with less depth per skill.
Skills in thinkkit fall into four patterns. Use these as your mental map:
- Multi-agent debate — spawn several AI perspectives that argue with each other, then synthesize. For pressure-testing decisions and reviews. Skills: boardroom, ciso-review
- Structured elicitation — interview the user with focused questions before generating anything. The discipline is asking first, writing after. Skills: explore-with-me, init-discovery
- Iterative analysis — deep code exploration followed by document generation with pressure-test loops. Skills: map-the-repo, create-spec
- Session-based capture — real-time or post-meeting note handling. Skills: take-notes, resolve-against-transcript
For each skill the user is interested in, cover these four things: what it does, when to reach for it, a concrete example command, and what output they'll get. Keep each to a few paragraphs.
What it does: boardroom assembles a board of AI-simulated advisors, modeled on real people whose thinking you respect, and has them debate a decision in two rounds. Round one: each advisor writes a position independently. Round two: they read each other's arguments and write rebuttals, sometimes changing their votes.
When to reach for it: You're facing a decision where you suspect you're anchored, missing perspectives, or about to commit to something significant.
Try this:

```
/thinkkit:boardroom should we raise a Series B in Q2 or extend runway with a bridge?
```
What you get: A folder with debate.md (full transcript), debate.html
(interactive dashboard with sliders for key assumptions), and debate.pdf.
Plus a synthesis showing who changed their mind and the sharpest insight
that emerged.
First-time setup: The skill will interview you once to build your board of advisors (4-8 people whose thinking you respect). That config persists.
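As a rough illustration, the persisted board config might look something like this. The file path and schema below are assumptions made for the example, not thinkkit's actual format; the setup interview determines the real one:

```
# Illustrative only -- actual location and schema come from the setup interview
advisors:
  - name: Charlie Munger
    lens: inversion, incentives, opportunity cost
  - name: Annie Duke
    lens: decision quality vs. outcome quality
  - name: Your former CTO
    lens: operational reality, hiring, runway
```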
What it does: ciso-review adopts the persona of a skeptical enterprise CISO and evaluates either a vendor or product you're considering adopting, or your own approach as it would look during a procurement review. It covers eight evaluation domains plus the hard questions a vendor would hate to answer.
When to reach for it: Two modes. Vendor evaluation: "should we adopt Acme Vault?" Self-assessment: "will our approach pass a CISO review, and what's the GTM impact?"
Try this:

```
/thinkkit:ciso-review evaluate Supabase for storing our customer PII
```

or

```
/thinkkit:ciso-review pressure-test our new AI feature's security story for enterprise buyers
```
What you get: An assessment.md, assessment.html (risk heatmap), and
assessment.pdf. Includes an APPROVE/CONDITIONAL/REJECT recommendation,
hard questions, and (in self-assessment mode) buyer archetype analysis and
GTM impact.
What it does: explore-with-me runs a depth-first interview to help you think through a problem. It asks 2-3 questions per round on one topic, probes surprising answers, and validates its synthesis with you before writing anything.
When to reach for it: You have a problem where you hold the domain knowledge and need someone to structure your thinking — diagnosing an issue, making a hard call, postmortems, risk assessments. Anything where premature generation would be worse than discovering the right framing first.
Try this:

```
/thinkkit:explore-with-me why are our evaluation pipelines so fragile
```
What you get: After 5-15 rounds of interviewing, a markdown file capturing context, key findings, constraints, tensions, and recommendations.
What it does: init-discovery is the multi-session version of explore-with-me. It creates a CLAUDE.md and a working file structure for a discovery project that will span days or weeks, interviewing you to populate the project template.
When to reach for it: A single exploration session won't cut it. You're setting up a sustained investigation (architectural redesign, organizational analysis, risk assessment) that needs to persist across sessions.
Try this:

```
/thinkkit:init-discovery authentication architecture redesign
```
What you get: A CLAUDE.md with initiative overview, current state,
deliverable definition, key actors, and behavioral instructions, plus
working files (current-state.md, problem-analysis.md, requirements.md,
options/, decision-log.md). You then continue with subsequent sessions
against that scaffold.
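Laid out on disk, that scaffold looks roughly like this (the project directory name is illustrative):

```
authentication-redesign/
├── CLAUDE.md            # initiative overview, key actors, behavioral instructions
├── current-state.md
├── problem-analysis.md
├── requirements.md
├── options/
└── decision-log.md
```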
What it does: map-the-repo runs a Python static analysis script, then enriches the generated scaffolding with architectural insight. It produces both markdown docs and a self-contained HTML site with search and Mermaid diagrams.
When to reach for it: You need to document a codebase, onboard someone to a project, or build a browsable wiki. Best used on codebases you want others to explore, not just snapshot.
Try this:

```
/thinkkit:map-the-repo .
```
What you get: wiki/docs/*.md (architecture, data flows, API reference,
glossary, per-module docs) and wiki/site/index.html (browsable with dark
theme, search, diagrams). The skill checks whether LSP language servers are
available and offers to install them for better analysis.
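Sketched as a tree, the output layout looks roughly like this (exact file names vary by repo):

```
wiki/
├── docs/
│   ├── architecture.md
│   ├── data-flows.md
│   ├── api-reference.md
│   ├── glossary.md
│   └── ...              # per-module docs
└── site/
    └── index.html       # self-contained: search, dark theme, Mermaid diagrams
```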
What it does: create-spec reverse-engineers a repo into a single SPECIFICATION.md document complete enough that a fresh Claude Code session could rebuild a functionally equivalent codebase from the spec alone. It uses at least two pressure-test iterations to flag and resolve gaps.
When to reach for it: Different goal than map-the-repo. Use this when you need a single document that captures everything — for a port, a rewrite, an insurance policy against losing context, or preparing an LLM to reconstruct the system.
Try this:

```
/thinkkit:create-spec
```
What you get: A SPECIFICATION.md at the repo root with nine required sections (purpose, directory structure, public interfaces, data models, algorithms, dependencies, build/test/run, design decisions, edge cases).
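As a skeleton, those nine sections give a spec shaped roughly like this (heading wording paraphrased from the list above):

```
# SPECIFICATION.md

## 1. Purpose
## 2. Directory structure
## 3. Public interfaces
## 4. Data models
## 5. Algorithms
## 6. Dependencies
## 7. Build / test / run
## 8. Design decisions
## 9. Edge cases
```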
Difference from map-the-repo: map-the-repo generates a browsable wiki for exploration; create-spec generates a single spec for reconstruction.
What it does: take-notes expands the terse, shorthand observations you send during a meeting into clear prose, maintaining a running document with Notes, Action Items, and Open Questions sections.
When to reach for it: You're about to join a meeting and want Claude to handle the note-taking while you stay focused on the conversation.
Try this:

```
/thinkkit:take-notes Q4 planning review
```

Then during the meeting, send messages like:

```
dan: adapter layer needs rethink before next sprint
concerns about kong throughput for v2
action: mariano to draft the migration plan by friday
```

What you get: A file at meeting-notes/YYYY-MM-DD-<title>.md that updates in real time. Speaker attribution is opt-in via a name: prefix; unattributed entries are captured as observations.
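The entry conventions are simple enough to sketch. The parser below is illustrative, not thinkkit's implementation; it just shows how a leading action: or name: prefix separates action items, attributed notes, and plain observations:

```python
import re

def parse_entry(raw: str) -> dict:
    """Classify one shorthand message from the meeting.

    "action: ..." marks an action item; "name: ..." attributes a note to a
    speaker; anything else is an unattributed observation.
    """
    if raw.lower().startswith("action:"):
        return {"kind": "action", "speaker": None,
                "text": raw[len("action:"):].strip()}
    match = re.match(r"^(\w[\w-]*):\s*(.+)$", raw)
    if match:
        return {"kind": "note", "speaker": match.group(1),
                "text": match.group(2)}
    return {"kind": "note", "speaker": None, "text": raw.strip()}

entry = parse_entry("dan: adapter layer needs rethink before next sprint")
# entry["speaker"] is "dan"; the skill then expands the text into prose
```

The upshot of the convention: you never have to attribute anything. An entry with no prefix simply lands in Notes as an observation.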
What it does: resolve-against-transcript takes a meeting transcript and a notes file, identifies discrepancies (factual errors, mischaracterizations, attribution errors, missing content, missing action items), and walks you through resolving each one interactively.
When to reach for it: You took notes in a meeting (or someone else did) and now have the recording transcript. You want to verify the notes are accurate before distributing them.
Try this:

```
/thinkkit:resolve-against-transcript recording.vtt meeting-notes/2026-04-05-q4-planning.md
```
What you get: An interactive loop that shows you each discrepancy with the transcript excerpt, the current notes text, a proposed fix, and accept/modify/skip options. Plus a summary at the end.
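To make the loop concrete, here is an illustrative record for a single discrepancy. The class name, fields, and category strings are assumptions made for the example, not thinkkit's actual data model:

```python
from dataclasses import dataclass

# Hypothetical category names, based on the discrepancy types described above
CATEGORIES = {
    "factual-error", "mischaracterization", "attribution-error",
    "missing-content", "missing-action-item",
}

@dataclass
class Discrepancy:
    category: str            # one of CATEGORIES
    transcript_excerpt: str  # what was actually said
    notes_text: str          # what the notes currently say
    proposed_fix: str        # suggested replacement, pending accept/modify/skip

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

d = Discrepancy(
    category="attribution-error",
    transcript_excerpt="Mariano: I'll draft the migration plan.",
    notes_text="Dan to draft the migration plan by Friday.",
    proposed_fix="Mariano to draft the migration plan by Friday.",
)
```

Each resolution step shows the user one such record and asks for a decision before moving on.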
The skills chain naturally. Point the user at these when relevant:
- Meeting workflow: take-notes during → resolve-against-transcript after. Take shorthand notes live, reconcile against the transcript later.
- Discovery workflow: explore-with-me for quick explorations → init-discovery when a single session won't cut it → use the resulting CLAUDE.md as project context for all subsequent sessions.
- Documentation workflow: map-the-repo when you need a wiki for others to browse; create-spec when you need a single reconstruction-grade document. They're complementary, not redundant.
- Strategy workflow: explore-with-me to surface the right framing → boardroom to pressure-test the decision → ciso-review if there's an adoption/security dimension.
After walking through the skills they cared about, invite them to try one:
Want to try one now? Pick the skill that matches your most immediate need. I can help you set it up or run it together.
If they're still deciding, suggest explore-with-me — it's the lowest-stakes
starting point and often surfaces which of the other skills would help next.