Publish-readiness gate: 80-item CORE-EEAT audit with weighted scoring, veto checks, and fix plan.
Based on CORE-EEAT Content Benchmark. Full benchmark reference: references/core-eeat-benchmark.md
Adapted from aaron-he-zhu/seo-geo-claude-skills v8.0.0 for LuckySpire. Shared framework references live in .claude/skills/references/.
This skill evaluates content quality across 80 standardized criteria organized in 8 dimensions. It produces a comprehensive audit report with per-item scoring, dimension and system scores, weighted totals by content type, and a prioritized action plan.
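The weighted-total step can be sketched as a small function. This is an illustrative sketch only: the weight and score values below are hypothetical examples, not the real content-type weight table from the benchmark.

```python
# Sketch of the weighted total. Assumption: each dimension score is 0-100
# and the content-type weights sum to 1.0. All numbers below are made up.

def weighted_total(dimension_scores, weights):
    """Combine per-dimension 0-100 scores using content-type weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(dimension_scores[d] * w for d, w in weights.items()), 1)

# Hypothetical weight table for one content type (not from the benchmark):
weights = {"C": 0.15, "O": 0.10, "R": 0.10, "E": 0.15,
           "Exp": 0.20, "Ept": 0.10, "A": 0.10, "T": 0.10}
scores = {"C": 80, "O": 70, "R": 60, "E": 90,
          "Exp": 85, "Ept": 75, "A": 65, "T": 95}
print(weighted_total(scores, weights))  # → 79.0
```

The assertion guards against a malformed weight table, since a table that does not sum to 1.0 would silently skew every audit.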
<!-- runbook-sync start: source_sha256=fbd3f05fbd9285973f3f5bf7252c40f5474c2e5a21217b5055e4eda469a84856 block_sha256=bc95b8993b94f099b1ce8f62d667c14fea504059fc4d2e860e420d358094e0be -->
System role: Publish Readiness Gate. It decides whether content is ready to ship, what blocks publication, and what should be promoted into durable project memory.
Use this when content needs a quality check before publishing, even if the user doesn't use audit terminology.
Start with one of these prompts. Finish with a publish verdict and a handoff summary using the repository format in Skill Contract.
- Audit this content against CORE-EEAT: [content text or URL]
- Run a content quality audit on [URL] as a [content type]
- CORE-EEAT audit for this product review: [content]
- Score this how-to guide against the 80-item benchmark: [content]
- Audit my content vs competitor: [your content] vs [competitor content]
Gate verdict:
- **SHIP**: no critical issues and all dimension scores above threshold.
- **FIX**: issues found, but none critical.
- **BLOCK**: a critical trust issue failed (see "Critical Issue to Fix" in the report).

Always state the verdict prominently at the top of the report in plain language, not item IDs.
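The SHIP / FIX / BLOCK decision above can be sketched as a tiny helper. This is a minimal sketch, not the skill's actual implementation; the dimension threshold of 60 is an illustrative assumption, since the benchmark's real threshold is defined elsewhere.

```python
# Hypothetical gate helper mirroring the verdict rules above.
# Assumption: "above threshold" means every dimension score >= 60
# (an example value, not taken from the benchmark).

def gate_verdict(critical_failures, other_issues, dimension_scores, threshold=60):
    if critical_failures:  # any veto / emergency-brake item failed
        return "BLOCK"
    if other_issues or any(s < threshold for s in dimension_scores.values()):
        return "FIX"       # non-critical issues or a weak dimension
    return "SHIP"

print(gate_verdict([], [], {"C": 80, "T": 95}))                      # → SHIP
print(gate_verdict(["Affiliate links undisclosed"], [], {"C": 80}))  # → BLOCK
```

Keeping critical failures as a separate input makes the veto behavior explicit: a single failed trust check blocks publication regardless of how high the scores are.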
Expected output: a CORE-EEAT audit report, a publish-readiness verdict, and a short handoff summary ready for memory/audits/content/.
- Save the audit report to memory/audits/content/ and update .memory/hot-cache.md (auto-saved, no user confirmation needed).
- Write top improvement priorities to memory/open-loops.md.
- Recommend a Next Best Skill (below) once the verdict is clear.
- See CONNECTORS.md for tool category placeholders.
With ~~web crawler~~ + ~~SEO tool~~ connected: automatically fetch page content, extract HTML structure, check schema markup, verify internal/external links, and pull competitor content for comparison.
With manual data only: Ask the user to provide:
Proceed with the full 80-item audit using provided data. Note in the output which items could not be fully evaluated due to missing access (e.g., backlink data, schema markup, site-level signals).
When stopping to ask, always: (1) state the specific value and threshold, (2) offer numbered options with outcomes.
Stop and ask the user when:
Continue silently (never stop for):
When a user requests a content quality audit:
### Audit Setup
**Content**: [title or URL]
**Content Type**: [auto-detected or user-specified]
**Dimension Weights**: [loaded from content-type weight table]
#### Critical Trust Check (Emergency Brake)
| Check | Status | Action |
|-------|--------|--------|
| Affiliate links disclosed | ✅ Pass / ⚠️ CRITICAL | [If CRITICAL: "Add disclosure banner at page top immediately"] |
| Title matches page content | ✅ Pass / ⚠️ CRITICAL | [If CRITICAL: "Rewrite title and first paragraph to match"] |
| Data points are consistent | ✅ Pass / ⚠️ CRITICAL | [If CRITICAL: "Verify all data before publishing"] |
If any veto item triggers, flag it prominently at the top of the report and recommend immediate action before continuing the full audit.
Evaluate each item against the criteria in references/core-eeat-benchmark.md.
Score each item as Pass, Partial, or Fail, and record a specific observation for each.
### C — Contextual Clarity
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Intent Alignment | Pass/Partial/Fail | [specific observation] |
| C02 | Direct Answer | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
| C10 | Semantic Closure | Pass/Partial/Fail | [specific observation] |
**C Score**: [X]/100
Repeat the same table format for O (Organization), R (Referenceability), and E (Exclusivity), scoring all 10 items per dimension.
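One way the ten Pass/Partial/Fail marks could roll up into the dimension score out of 100 is sketched below. The point values (Pass = 1, Partial = 0.5, Fail = 0) are an assumption for illustration, not values taken from the benchmark.

```python
# Assumed point values per item result (illustrative, not from the benchmark).
ITEM_POINTS = {"Pass": 1.0, "Partial": 0.5, "Fail": 0.0}

def dimension_score(item_results):
    """Scale the average item score for one 10-item dimension to 0-100."""
    points = [ITEM_POINTS[r] for r in item_results]
    return round(100 * sum(points) / len(points))

# e.g. 7 Pass, 2 Partial, 1 Fail across C01-C10:
print(dimension_score(["Pass"] * 7 + ["Partial"] * 2 + ["Fail"]))  # → 80
```

Averaging rather than summing keeps the formula valid even when some items cannot be evaluated (e.g. missing backlink data) and are excluded from the list.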
### Exp — Experience
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| Exp01 | First-Person Narrative | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**Exp Score**: [X]/100
Repeat the same table format for Ept (Expertise), A (Authority), and T (Trust), scoring all 10 items per dimension.
See references/item-reference.md for the complete 80-item ID lookup table and site-level item handling notes.
Every auditor-class handoff MUST follow this shape: