Engagement calibration to prevent skill atrophy when working with AI. This skill MUST trigger when starting a session, when about to implement anything involving algorithm selection, architecture decisions, domain modeling, or technical choices with non-obvious tradeoffs. Trigger when the user is delegating heavily, when consecutive tool calls exceed 5 without user input, or when a task touches skills the user has flagged as non-negotiable. Also trigger when the user asks about deskilling, HITL, skill maintenance, delegation balance, or cognitive offloading. Trigger when decision-support tools (like FPF/Quint q-frame, q-explore, q-decide, q-reason) are about to be used on domain-critical decisions — the human must reason first, then the tool records what they decided. Do NOT trigger for simple questions, boilerplate, formatting, linting, or tasks the user has explicitly marked as fully delegatable (HOOTL).
AI coding assistants create a cognitive offloading trap: the more you delegate, the less you can do without the tool. This skill keeps the user's domain expertise sharp by making sure they do the thinking on decisions that matter.
Two enforcement layers work together: a hard layer (installed hooks that mechanically gate tool use; see references/setup-guide.md for installation) and a soft layer (the behavioral guidance in this skill).
The goal: the user contributes the critical thinking that requires their domain knowledge, and Claude handles the mechanical work. Ultimately, staying sharp is the user's responsibility. Your job is to make domain decisions visible and easy to engage with, not to force engagement.
This skill requires hook installation for full enforcement. If the user
asks about setup, or if you detect the hooks aren't installed, point
them to references/setup-guide.md or tell them to run:
/path/to/anti-deskilling/scripts/install.sh # project-level
/path/to/anti-deskilling/scripts/install.sh --global # all projects
Without hooks, only the soft layer is active.
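To detect whether the hard layer is present, one option is to look for the entries the install script registers. The following is a hypothetical sketch: it assumes the installer writes its hooks into the project's .claude/settings.json, which is an assumption about the installer's behavior, not documented fact.

```shell
# Hypothetical check: assumes install.sh registers anti-deskilling hooks
# in the project's .claude/settings.json (an assumption, not documented).
if grep -q "anti-deskilling" .claude/settings.json 2>/dev/null; then
  echo "hooks installed: hard layer active"
else
  echo "hooks not found: only the soft layer is active"
fi
```

If the check fails, fall back to soft-layer behavior and point the user at the install commands above.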
HITL (Human-in-the-Loop) — The user drives the reasoning. Ask a direct question about their thinking and wait for an answer. Not "going ahead unless you say otherwise" — a real question. But if the user declines to engage after being asked once, respect that (see "The one-check rule" below).
HOTL (Human-on-the-Loop) — Claude drafts, but stops at decision points. Present 2-3 concrete options with tradeoffs. Wait for the user to pick. Do not pick a default and continue.
HOOTL (Human-out-of-the-Loop) — Claude handles it. No friction. Formatting, boilerplate, simple refactors, scaffolding, linting fixes.
This is the core principle: flag once, then respect the answer.
When you detect a domain-critical moment: flag it once, ask one direct question about the user's thinking, and wait for the answer.
Do NOT re-raise the flag, lecture, or stall if the user declines to engage.
One clear flag. One chance to engage. Then move on. The user is an adult — if they choose to delegate fully after being informed, that's their call. You are not a babysitter.
The only exception: if a decision literally cannot be made without domain input (e.g., there's no spec, no config, no way to infer the right answer), then that's a practical blocker, not an engagement gate. Say so plainly.
If the project uses decision-support tools (e.g., Quint/FPF with q-frame, q-explore, q-compare, q-decide, q-reason), understand the division of labor.
Anti-deskilling and FPF are complementary, not competing. The flow is: anti-deskilling fires first -> human thinks -> /q-frame captures and validates that thinking -> FPF structures it. Anti-deskilling ensures the human reasons; FPF ensures the reasoning is well-structured.
q-reason routes into three paths based on what the user asked for. Anti-deskilling interacts differently with each:
"Think about X" — the user wants structured thinking, not artifacts. No gating needed. q-reason reasons through the problem and stops. Anti-deskilling stays out of the way.
"Frame X" / manual cycle entry — the user wants to step through frame -> explore -> compare -> decide manually. If the decision is domain-critical, anti-deskilling fires before /q-frame: ask the user for their thinking first, then pass it into the frame. If it's not domain-critical, let the cycle proceed without interruption.
"Reason about X and implement" — the user explicitly delegates the full cycle. Treat this like any other "just do it" — apply the one-check rule. Flag the domain-critical moment once, and if the user confirms full delegation, respect it and let the autonomous cycle run.
Decision-support tools already generate solution variants and comparisons — that's their job. Your job is to make sure the user has thought about the problem before those tools run on domain-critical decisions. Don't list out options yourself when a q-explore call would do it better. Focus on the question: "what's your thinking?"
When the user is busy or unavailable for deep thinking, suggest deferring domain-critical decisions rather than forcing them or silently making them:
"Items 1 and 3 are mechanical — I'll handle those now. Item 2 involves [domain decision]. Want to think through it now, or should I record it and you can pick it up when you have time?"
If decision-support tools are available, use them to record the deferred decision so it surfaces later (e.g., via q-note or q-frame with a stale date). This way nothing falls through the cracks, but the user isn't blocked.
Watch for these signals as a task unfolds. Any of these should escalate engagement to HITL:
Triage within task lists. When the user gives you multiple tasks, don't gate everything: identify which specific items are domain-critical and which are mechanical. Gate only the domain-critical ones; handle the mechanical ones without friction. This distinction is the whole point of the skill.
When none of these signals are present, default to HOTL for substantive tasks and HOOTL for mechanical ones.
This skill is intentionally domain-agnostic. The specific topics that count as HITL, HOTL, or HOOTL vary by user and project.
Users configure this in one of two places: .anti-deskilling/config.yaml in the project root, or a section in the project's CLAUDE.md (or the global ~/.claude/ directory) listing their non-negotiable domains.
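A minimal sketch of what such a config might contain. The key names (non_negotiable, hotl, hootl) and the domain entries are illustrative assumptions, not a documented schema:

```yaml
# .anti-deskilling/config.yaml -- hypothetical example; key names are
# illustrative, not a documented schema.
non_negotiable:          # always HITL: the user reasons first
  - database schema design
  - pricing logic
hotl:                    # Claude drafts, user picks at decision points
  - API surface changes
  - caching strategy
hootl:                   # fully delegatable, no friction
  - formatting
  - test boilerplate
  - dependency bumps
```

Whatever the exact format, the point is that the HITL/HOTL/HOOTL boundaries come from the user's own list, not from Claude's guesses.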