Systematic content quality audit based on 80 CORE-EEAT standards, evaluating content's GEO (Generative Engine Optimization) and SEO (Search Engine Optimization) potential. Features 8-dimension scoring, weighted total calculation, veto item detection, and priority improvement recommendations. Applicable for pre-publication checks, competitive analysis, and AI citation potential assessment.
This skill is built on the CORE-EEAT Content Benchmark, which provides 80 standardized content quality audit criteria.
This skill evaluates content quality through 80 standardized criteria across 8 core dimensions. It generates comprehensive audit reports including item-level scores, dimension scores, system scores (GEO/SEO), content-type weighted total scores, veto item detection, and priority action plans.
Use this skill when users request the following:
This skill can:
This skill supports the following content types, each with different dimension weights:
- Please audit the quality of the following content: [Content text or URL]
- Perform content quality audit on [URL]
- Audit this content as a product review: [Content]
- Score this tutorial based on 80 criteria: [Content]
- Audit the differences between my content and competitor's: [Your content] vs [Competitor content]
Manual Data Input (currently recommended):
Ask the user to provide:
Note: Explicitly mark in the output which items cannot be fully evaluated due to lack of access (e.g., backlink data, Schema markup, site-level signals).
When users request content quality audit, follow these steps:
### Audit Preparation
**Content**: [Title or URL]
**Content Type**: [Auto-detected or user-specified]
**Dimension Weights**: [Load from content type weight table]
#### Veto Item Check (Emergency Brake)
| Veto Item | Status | Action |
|-----------|--------|--------|
| T04: Disclosure Statement | ✅ Pass / ⚠️ Triggered | [If triggered: "Immediately add disclosure banner at top of page"] |
| C01: Intent Alignment | ✅ Pass / ⚠️ Triggered | [If triggered: "Rewrite title and first paragraph"] |
| R10: Content Consistency | ✅ Pass / ⚠️ Triggered | [If triggered: "Verify all data before publication"] |
If any veto item is triggered, prominently mark it at the top of the report and recommend immediate action before continuing with the full audit.
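A minimal sketch of how this veto gate could be applied, assuming veto results are collected as booleans keyed by item ID; the remediation strings mirror the Action column above, and the function name `veto_banner` is illustrative only:

```python
# Minimal sketch of the veto gate, assuming each veto item is reported as a boolean
# (True = triggered). The remediation strings mirror the Action column above.
VETO_ACTIONS = {
    "T04": "Immediately add disclosure banner at top of page",
    "C01": "Rewrite title and first paragraph",
    "R10": "Verify all data before publication",
}

def veto_banner(veto_results):
    """Return a warning block for the top of the report, or None if no veto triggered."""
    triggered = [item for item, hit in veto_results.items() if hit]
    if not triggered:
        return None
    lines = ["⚠️ Veto item(s) triggered; address before relying on the full audit:"]
    lines += [f"- {item}: {VETO_ACTIONS.get(item, 'See veto item description')}" for item in triggered]
    return "\n".join(lines)

# Example: veto_banner({"T04": False, "C01": True, "R10": False})
# -> "⚠️ Veto item(s) triggered; ...\n- C01: Rewrite title and first paragraph"
```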
Evaluate each item according to standards in references/core-eeat-benchmark.md.
Score each item as Pass, Partial, or Fail, with a brief supporting observation:
### C — Contextual Clarity
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| C01 | Intent Alignment | Pass/Partial/Fail | [Specific observation] |
| C02 | Direct Answer | Pass/Partial/Fail | [Specific observation] |
| ... | ... | ... | ... |
| C10 | Semantic Closure | Pass/Partial/Fail | [Specific observation] |
**C Dimension Score**: [X]/100
Evaluate O (Organization), R (Referenceability), and E (Exclusivity) in the same table format, 10 items per dimension.
### Exp — Experience
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| Exp01 | First-Person Narrative | Pass/Partial/Fail | [Specific observation] |
| ... | ... | ... | ... |
**Exp Dimension Score**: [X]/100
Evaluate Ept (Expertise), A (Authority), and T (Trust) in the same table format, 10 items per dimension.
For detailed 80-item ID lookup table and site-level item handling instructions, see references/item-reference.md.
Calculate scores and generate the final report:
## CORE-EEAT Audit Report
### Overview
- **Content**: [Title]
- **Content Type**: [Type]
- **Audit Date**: [Date]
- **Total Score**: [Score]/100 ([Rating])
- **GEO Score**: [Score]/100 | **SEO Score**: [Score]/100
- **Veto Item Status**: ✅ No triggers / ⚠️ [Item] triggered
### Dimension Scores
| Dimension | Score | Rating | Weight | Weighted Score |
|-----------|-------|--------|--------|----------------|
| C — Contextual Clarity | [X]/100 | [Rating] | [X]% | [X] |
| O — Organization | [X]/100 | [Rating] | [X]% | [X] |
| R — Referenceability | [X]/100 | [Rating] | [X]% | [X] |
| E — Exclusivity | [X]/100 | [Rating] | [X]% | [X] |
| Exp — Experience | [X]/100 | [Rating] | [X]% | [X] |
| Ept — Expertise | [X]/100 | [Rating] | [X]% | [X] |
| A — Authority | [X]/100 | [Rating] | [X]% | [X] |
| T — Trust | [X]/100 | [Rating] | [X]% | [X] |
| **Weighted Total Score** | | | | **[X]/100** |
**Score Calculation Formulas**:
- GEO Score = (C + O + R + E) / 4
- SEO Score = (Exp + Ept + A + T) / 4
- Weighted Total Score = Σ (Dimension Score × Content-Type Weight)
**Rating Standards**: 90-100 Excellent | 75-89 Good | 60-74 Fair | 40-59 Poor | 0-39 Very Poor
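A minimal sketch of these formulas, assuming dimension scores are on a 0-100 scale and the content-type weights are fractions that sum to 1.0; the function names and the example values are illustrative, not taken from the benchmark:

```python
# Minimal sketch of the scoring formulas. Dimension scores are 0-100; weights
# are the content-type weights from the dimension table, assumed to be
# expressed as fractions that sum to 1.0.

GEO_DIMS = ("C", "O", "R", "E")
SEO_DIMS = ("Exp", "Ept", "A", "T")

def geo_score(dims):
    return sum(dims[d] for d in GEO_DIMS) / 4

def seo_score(dims):
    return sum(dims[d] for d in SEO_DIMS) / 4

def weighted_total(dims, weights):
    return sum(dims[d] * weights[d] for d in dims)

def rating(score):
    if score >= 90: return "Excellent"
    if score >= 75: return "Good"
    if score >= 60: return "Fair"
    if score >= 40: return "Poor"
    return "Very Poor"

# Example with hypothetical scores and weights:
# dims = {"C": 85, "O": 78, "R": 70, "E": 60, "Exp": 55, "Ept": 72, "A": 65, "T": 80}
# weights = {"C": 0.20, "O": 0.15, "R": 0.15, "E": 0.10,
#            "Exp": 0.10, "Ept": 0.10, "A": 0.10, "T": 0.10}
# geo_score(dims) -> 73.25; seo_score(dims) -> 68.0; rating(weighted_total(dims, weights))
```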
### Unavailable Item Handling
When an item cannot be evaluated (e.g., A01 Backlink Profile requires site-level data that is not accessible):
1. Mark the item as "N/A" and note the reason
2. Exclude N/A items from dimension score calculation
3. Dimension Score = (Sum of scored items) / (Number of scored items × 10) × 100
4. If a dimension has >50% items as N/A, mark that dimension as "Insufficient Data" and exclude from weighted total score
5. Recalculate weighted total score using only dimensions with sufficient data, renormalizing weights to total 100%
**Example**: The Authority dimension has 8 N/A items and 2 scored items (A05 = 8, A07 = 5):
- Dimension Score = (8 + 5) / (2 × 10) × 100 = 65
- However, 8 of 10 items are N/A (>50%), so mark the Authority dimension as "Insufficient Data"
- Exclude the A dimension from the weighted total and redistribute its weight proportionally across the remaining dimensions
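A sketch of these N/A handling rules, assuming each item is scored 0-10 (or None for N/A) and ten items per dimension; it reproduces the Authority example above, and the function names are illustrative:

```python
# Sketch of the N/A handling rules above. Each dimension maps item IDs to a
# 0-10 score, or None for N/A; ten items per dimension is assumed.

def dimension_score(items):
    """Return (score on 0-100, insufficient_data flag) per rules 1-4."""
    scored = [v for v in items.values() if v is not None]
    if not scored:
        return None, True
    score = sum(scored) / (len(scored) * 10) * 100
    insufficient = (len(items) - len(scored)) / len(items) > 0.5
    return score, insufficient

def weighted_total_with_na(dim_items, weights):
    """Rule 5: drop 'Insufficient Data' dimensions and renormalize the remaining weights."""
    results = {d: dimension_score(items) for d, items in dim_items.items()}
    usable = {d: s for d, (s, insufficient) in results.items() if s is not None and not insufficient}
    weight_sum = sum(weights[d] for d in usable)
    return sum(s * weights[d] / weight_sum for d, s in usable.items())

# Worked example from above: Authority with 8 N/A items, A05 = 8, A07 = 5
authority = {f"A{i:02d}": None for i in range(1, 11)}
authority.update({"A05": 8, "A07": 5})
print(dimension_score(authority))  # (65.0, True): raw score 65, but >50% N/A -> Insufficient Data
```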
### Item-Level Scores
#### CORE — Content Body (40 Items)
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| C01 | Intent Alignment | [Pass/Partial/Fail] | [Observation] |
| C02 | Direct Answer | [Pass/Partial/Fail] | [Observation] |
| ... | ... | ... | ... |
#### EEAT — Source Credibility (40 Items)
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| Exp01 | First-Person Narrative | [Pass/Partial/Fail] | [Observation] |
| ... | ... | ... | ... |
### Top 5 Priority Improvements
Sorted by impact from highest to lowest (Dimension Weight × Points Lost); a scoring sketch follows the list below.
1. **[ID] [Name]** — [Specific improvement suggestion]
- Current Status: [Fail/Partial] | Potential Gain: [X] weighted points
- Action: [Specific steps]
2. **[ID] [Name]** — [Specific improvement suggestion]
- Current Status: [Fail/Partial] | Potential Gain: [X] weighted points
- Action: [Specific steps]
3–5. [Same format]
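A sketch of this prioritization rule, assuming ten items per dimension (so each item accounts for 10 of its dimension's 100 points) and dimension weights expressed as fractions; the item records shown are hypothetical:

```python
# Sketch of the "Weight × Points Lost" prioritization. Assumes ten items per
# dimension, so each item accounts for 10 of its dimension's 100 points, and
# dimension weights expressed as fractions.

def potential_gain(item_score, dimension_weight):
    """Weighted points recoverable if the item were raised to a full 10/10."""
    points_lost = 10 - item_score
    return dimension_weight * points_lost

def top_priorities(failed_items, n=5):
    """failed_items: [{"id": "C02", "name": "Direct Answer", "score": 3, "weight": 0.20}, ...]"""
    return sorted(
        failed_items,
        key=lambda it: potential_gain(it["score"], it["weight"]),
        reverse=True,
    )[:n]

# Example: a Partial item (score 5) in a 20%-weight dimension can recover
# potential_gain(5, 0.20) == 1.0 weighted point; a Fail (score 0) in the same
# dimension can recover 2.0 and therefore ranks higher.
```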
### Action Plan
#### Quick Wins (Less than 30 minutes each)
- [ ] [Action 1]
- [ ] [Action 2]
#### Medium Investment (1-2 hours)
- [ ] [Action 3]
- [ ] [Action 4]
#### Strategic (Requires Planning)
- [ ] [Action 5]
- [ ] [Action 6]
### Recommended Next Steps
- Complete content rewrite: Rewrite with CORE-EEAT constraints
- GEO optimization: Optimize for failed GEO-First items
- Content refresh: Focus on weak dimensions
- Technical fixes: Check site-level issues
- **Start with Veto Items**: T04, C01, and R10 are one-vote veto items; if any is triggered it undermines the overall evaluation regardless of the total score.
- **Focus on High-Weight Dimensions**: different content types prioritize different dimensions.
- **GEO-First Items Are Critical for AI Visibility**: if the goal is AI citation, prioritize items marked with GEO 🎯.
- **Some EEAT Items Require Site-Level Data**: do not penalize the content for signals only observable at the site level (backlinks, brand recognition).
- **Use Weighted Scores, Not Just Raw Averages**: for a product review, strong Exclusivity counts for more in the weighted total than strong Authority.
- **Re-Audit After Improvements**: run the audit again to verify score gains and catch regressions.