Validate content framing on joy-grievance spectrum.
Validate content framing using mode-specific rubrics. Two modes: writing (scored on the joy-grievance spectrum) and instruction (scored on positive-negative framing).
By default the skill evaluates each paragraph/instruction independently, produces a score (0-100), and suggests reframes without modifying content. Optional flags: --fix rewrites flagged items in place and re-verifies; --strict fails on any item below 60; --mode writing|instruction overrides auto-detection.
This skill checks framing, not topic and not voice. Voice fidelity belongs to voice-validator; AI pattern detection belongs to anti-ai-editor.
| Signal | Load These Files | Why |
|---|---|---|
| Mode resolves to instruction | instruction-rubric.md | Detailed scoring criteria, patterns, and examples for instruction mode. |
| Mode resolves to writing | writing-rubric.md | Detailed scoring criteria, patterns, and examples for writing mode. |
Goal: Determine which rubric to apply based on file location or explicit flag.
Auto-detection rules (in priority order):
- --mode writing|instruction flag → use that mode
- agents/*.md → instruction
- skills/*/SKILL.md → instruction
- skills/workflow/references/*.md → instruction
- CLAUDE.md or README.md → instruction

Load the rubric: Read references/{mode}-rubric.md for the scoring criteria, patterns, and examples relevant to this mode.
GATE: Mode determined, rubric loaded. Proceed to Phase 1.
Goal: Use regex scanning as a fast gate to catch obvious patterns before spending LLM tokens on semantic analysis.
For writing mode: Run the regex-based scanner for grievance patterns:
python3 ~/.claude/scripts/scan-negative-framing.py [file]
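The scanner's general shape might look like the following sketch. The patterns shown are illustrative assumptions drawn from the subtle-pattern names in writing-rubric.md; the real script ships its own pattern list.

```python
import re
import sys

# Illustrative grievance patterns -- the real script defines its own list.
GRIEVANCE_PATTERNS = [
    (re.compile(r"\bunfortunately\b", re.I), "defensive disclaimer"),
    (re.compile(r"\bto be fair\b", re.I), "passive-aggressive factuality"),
    (re.compile(r"\bforced to\b|\bhad no choice\b", re.I), "grievance framing"),
]

def scan(text: str) -> list[tuple[int, str, str]]:
    """Return (line_number, matched_text, label) for every hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, label in GRIEVANCE_PATTERNS:
            m = pattern.search(line)
            if m:
                hits.append((lineno, m.group(0), label))
    return hits

if __name__ == "__main__":
    for lineno, match, label in scan(open(sys.argv[1]).read()):
        print(f"{lineno}: {label}: {match!r}")
```

Zero hits means the mechanical gate is clear; any output is a finding to reframe.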
For instruction mode: Run a grep scan for prohibition patterns:
grep -nE 'NEVER|do NOT|must NOT|FORBIDDEN' [file]
grep -nE "^-?\s*Don't|^-?\s*Avoid|^#+.*Anti-[Pp]attern|^#+.*Avoid" [file]
Handle hits: Report findings with suggested reframes from the loaded rubric. If --fix mode is active, apply reframes and re-run to confirm clean.
GATE: Regex/grep scan returns zero hits. Resolve obvious patterns before proceeding to Phase 2 — mechanical fixes come first.
Goal: Read the content and evaluate each item against the loaded rubric using LLM semantic understanding.
Step 1: Read the content
Read the full file. Skip frontmatter (YAML between --- markers) and code blocks.
Step 2: Evaluate against the rubric
Apply the scoring dimensions from the loaded rubric (references/{mode}-rubric.md). Each rubric defines its own PASS/FAIL dimensions, subtle patterns to detect, and contextual exceptions.
For writing mode: Evaluate through the joy-grievance lens. Watch for the subtle patterns described in references/writing-rubric.md (defensive disclaimers, accumulative grievance, passive-aggressive factuality, reluctant generosity).
For instruction mode: Evaluate through the positive-negative lens. Check each instruction against the patterns table in references/instruction-rubric.md. Apply contextual exceptions — subordinate negatives attached to positive instructions are PASS, as are negatives in code examples, writing samples, and technical terms.
Step 3: Score each item
Apply the scoring scale from the loaded rubric. For any item scoring in the lower tiers (CAUTION/GRIEVANCE for writing, NEGATIVE-LEANING/PROHIBITION-HEAVY for instruction), draft a specific reframe suggestion that preserves the substance while shifting the framing.
If an item seems "too subtle to flag," that is precisely when flagging matters most — subtle patterns are what the regex/grep pre-filter misses, making them the primary purpose of this LLM analysis phase.
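As a sketch, the score-to-tier mapping might look like the following. The band cutoffs (70 and 30) are illustrative assumptions consistent with the sample report below, not values from the rubrics, which define their own scales:

```python
# Illustrative tier bands -- the real cutoffs live in each rubric.
WRITING_TIERS = [(70, "JOY"), (30, "CAUTION"), (0, "GRIEVANCE")]
INSTRUCTION_TIERS = [(70, "PASS"), (30, "NEGATIVE-LEANING"), (0, "PROHIBITION-HEAVY")]

def tier(score: int, mode: str) -> str:
    """Map a 0-100 item score to its rubric tier label."""
    bands = WRITING_TIERS if mode == "writing" else INSTRUCTION_TIERS
    return next(label for floor, label in bands if score >= floor)
```

Anything landing below the top band gets a drafted reframe before Phase 3.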
GATE: All items analyzed and scored. Reframe suggestions drafted for all flagged items. Proceed to Phase 3.
Goal: Produce a structured report with scores, findings, and reframe suggestions.
Step 1: Calculate overall score
Average all item scores. Pass criteria come from the loaded rubric.
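The aggregation step can be sketched as follows. The overall pass line of 60 is an assumption borrowed from the --strict per-item threshold; the loaded rubric's own criteria take precedence.

```python
def overall(scores: list[int], strict: bool = False) -> tuple[int, str]:
    """Average item scores; --strict also fails on any item below 60."""
    avg = round(sum(scores) / len(scores))
    passed = avg >= 60 and (not strict or min(scores) >= 60)
    return avg, "PASS" if passed else "FAIL"
```

In strict mode a single low-scoring item fails the run even when the average clears the bar.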
Step 2: Output the report
JOY CHECK: [file]
Mode: [writing|instruction]
Score: [0-100]
Status: PASS / FAIL
Items:
[writing mode]
P1 (L10-12): JOY [85] -- explorer framing, curiosity
P3 (L18-22): CAUTION [40] -- "confused" leans defensive
-> Reframe: Focus on what you learned from the confusion
[instruction mode]
L33: NEGATIVE [20] -- "NEVER edit code directly"
-> Rewrite: "Route all code modifications to domain agents"
L45: PASS [90] -- "Create feature branches for all changes"
L78: PASS [85] -- "Credentials stay in .env files, never in code" (subordinate negative OK)
Overall: [summary of framing arc]
Step 3: Handle fix mode
If --fix mode is active: apply the drafted reframes in place, preserving each item's substance, then re-run Phases 1-2 to confirm the rewritten file passes.
GATE: Report produced. If --fix, all rewrites applied and re-verified. Joy check complete.
This skill integrates with content and toolkit pipelines:
Writing pipeline (human-facing content):
CONTENT --> voice-validator --> scan-ai-patterns --> joy-check --mode writing --> anti-ai-editor
Instruction pipeline (agent/skill/pipeline creation and modification):
SKILL.md --> joy-check --mode instruction --> fix flagged patterns --> re-verify
Auto-invocation points:
- skill-creator pipeline: Run joy-check --mode instruction after generating a new skill
- agent-upgrade pipeline: Run joy-check --mode instruction after modifying an agent
- voice-writer: Run joy-check --mode writing during validation
- doc-pipeline: Run joy-check --mode instruction for toolkit documentation

The joy-check can be invoked standalone via /joy-check [file] (auto-detects mode) or with explicit --mode writing|instruction.
Cause: Path incorrect or file does not exist.
Solution: ls -la [path], then Glob **/*.md to locate the file.

Cause: scan-negative-framing.py script missing or Python error.
Solution: ls scripts/scan-negative-framing.py, then python3 --version (requires 3.10+).

Cause: Content is fundamentally framed through grievance -- not recoverable with paragraph-level reframes.
Solution:

Cause: Rewritten paragraphs keep introducing new CAUTION/GRIEVANCE patterns, often because the underlying premise is grievance-based.
Solution:
- references/writing-rubric.md — Joy-grievance spectrum, subtle patterns, scoring, examples (writing mode)
- references/instruction-rubric.md — Positive framing rules, patterns to flag, rewrite strategies, examples (instruction mode)
- scan-negative-framing.py — Regex pre-filter for grievance patterns (writing mode, Phase 1)
- voice-validator — Voice fidelity validation (different concern)
- anti-ai-editor — AI pattern detection and removal (different concern)
- voice-writer — Content pipeline that invokes joy-check as a validation phase
- skill-creator — Skill creation pipeline that invokes joy-check in instruction mode