Use when the user wants to proofread, style-check, or QA an article against editorial guidelines. Reads an article file and an editorial-guidelines.md, returns an inline diff plus a findings report. Optional LanguageTool layer for grammar.
Check any markdown article against editorial-guidelines.md and seo-best-practices.md. Returns two artifacts: an inline-edited copy with suggested changes marked, and a findings report categorized by severity.
No required keys. Optional:
LANGUAGETOOL_ENDPOINT (default https://api.languagetool.org/v2) — adds a free grammar/spelling pass on top of Claude's native review. Works keyless, or self-host LanguageTool for privacy.
- Article: glob posts/*.md and let the user pick if not specified.
- Guidelines: ./editorial-guidelines.md. If missing, fall back to the shared voice reference and note it.
- Severity filter: critical, warning, suggestion. Default: report all three.
- SEO reference: {SKILL_BASE}/../_shared/seo-best-practices.md.
From the editorial guidelines, pull the checkable rules (e.g. banned words, sentence-length ceiling, reading-level target):
From the SEO best practices, pull the hard rules:
Walk the article and flag every occurrence of:
Each flag becomes a finding with: line number, severity, rule name, the offending text, and a suggested fix.
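The finding structure above can be sketched in code. This is a minimal illustration, not part of the skill itself: the `Finding` shape mirrors the fields listed, and the `BANNED` map is a hypothetical stand-in for rules pulled from editorial-guidelines.md.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    line: int       # 1-indexed line number in the article
    severity: str   # "critical" | "warning" | "suggestion"
    rule: str       # rule name from the guidelines
    found: str      # the offending text
    fix: str        # suggested replacement

# Hypothetical banned-word rules; the real list comes from the guidelines file.
BANNED = {"revolutionary": "category-defining", "leverage": "use"}

def scan(article: str) -> list[Finding]:
    """Walk the article line by line and flag each banned-word occurrence."""
    findings = []
    for lineno, line in enumerate(article.splitlines(), start=1):
        for word, replacement in BANNED.items():
            if word in line.lower():
                findings.append(
                    Finding(lineno, "warning", f"banned word: {word}", word, replacement)
                )
    return findings
```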
Check the article against the SEO rules list above. Each violation becomes a critical finding.
Read the whole article and judge voice match. Flag:
Each flag is a warning or suggestion.
If LANGUAGETOOL_ENDPOINT is set, POST the article body to /check:
POST ${LANGUAGETOOL_ENDPOINT}/check
Content-Type: application/x-www-form-urlencoded
text=<url-encoded body>&language=en-US&enabledOnly=false
Map each match to a finding: warning for likely real errors (e.g. misspellings), suggestion for grammar and style nits. Deduplicate against Pass 1 findings.
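The mapping step could look like the following sketch. The response fields (`matches`, `rule.id`, `rule.issueType`, `context`, `replacements`) follow LanguageTool's documented /check JSON; the severity heuristic (misspelling → warning, everything else → suggestion) is an assumption, not part of the skill spec.

```python
def map_languagetool_matches(lt_response: dict) -> list[dict]:
    """Convert LanguageTool /check matches into review findings.

    Severity heuristic (an assumption): misspellings are likely real
    errors -> warning; grammar/style matches -> suggestion.
    """
    findings = []
    for m in lt_response.get("matches", []):
        severity = "warning" if m["rule"]["issueType"] == "misspelling" else "suggestion"
        ctx = m["context"]
        findings.append({
            "severity": severity,
            "rule": m["rule"]["id"],
            # Slice the offending text out of the surrounding context window.
            "found": ctx["text"][ctx["offset"]:ctx["offset"] + m["length"]],
            "fix": m["replacements"][0]["value"] if m["replacements"] else "",
            "message": m["message"],
        })
    return findings
```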
Artifact 1: inline-edited copy at review/<slug>-edited.md
Copy of the original with suggested changes marked using <del> / <ins> HTML tags so diffs are visible in any markdown renderer:
The product is a <del>revolutionary</del><ins>category-defining</ins> platform.
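Applying a suggested fix in that del/ins style is a one-line transformation. A minimal sketch (the function name is hypothetical):

```python
def mark_edit(line: str, found: str, fix: str) -> str:
    """Wrap the offending text in <del>/<ins> tags so the diff renders inline."""
    return line.replace(found, f"<del>{found}</del><ins>{fix}</ins>")
```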
Artifact 2: findings report at review/<slug>-report.md
# Review: {slug}
_Checked against: {path to editorial-guidelines.md}_
_Date: {YYYY-MM-DD}_
## Summary
- **Critical:** {n} — must fix before publishing
- **Warnings:** {n} — strongly recommended
- **Suggestions:** {n} — worth considering
## Critical findings
### [L{line}] {rule name}
- **Found:** {offending text}
- **Why it matters:** {1 line}
- **Suggested fix:** {fix}
{…}
## Warnings
{same structure}
## Suggestions
{same structure}
## Checklist summary
| Check | Result |
|---|---|
| Primary keyword in first 100 words | ✅/❌ |
| Keyword in H1 | ✅/❌ |
| Keyword in ≥1 H2 | ✅/❌ |
| Exactly one H1 | ✅/❌ |
| No skipped heading levels | ✅/❌ |
| Meta title length (50–60) | ✅/❌ |
| Meta description length (140–160) | ✅/❌ |
| ≥3 internal links | ✅/❌ |
| 2–4 outbound links | ✅/❌ |
| Hero alt text present | ✅/❌ |
| No banned words | ✅/❌ |
| Sentence length within ceiling | ✅/❌ |
| Reading level in target range | ✅/❌ |
| Closer has concrete next action | ✅/❌ |
Respond with 5 lines: the path to the edited file, the path to the report, the counts of critical/warning/suggestion findings, and a one-line verdict (ready to publish, fix criticals first, or needs rewrite).
- If editorial-guidelines.md is missing, findings from the fallback voice reference are suggestion only (not warning), since the rules weren't user-authored.
- SEO violations remain critical.
- Both artifacts go in review/.