Score and review existing narrative files against story arc quality gates. This skill should be used when the user asks to 'review a narrative', 'score a narrative', 'check narrative quality', 'validate narrative', 'audit narrative', 'grade a narrative', 'evaluate narrative quality', 'narrative scorecard', 'rate my narrative', 'run quality gates on a narrative', or when the narrative-reviewer agent evaluates a generated narrative.
Evaluate an existing narrative markdown file against the cogni-narrative quality gates. Produce a structured scorecard with pass/warn/fail per gate, an overall score (0-100), and the top 3 actionable improvement suggestions.
Not for:
- Creating a new narrative (use the cogni-narrative:narrative skill instead)
- Adapting an existing narrative (use the cogni-narrative:narrative-adapt skill instead)

Parameters:

| Parameter | Required | Description |
|---|---|---|
| --source-path | Yes | Path to the narrative .md file to review |
| --arc-id | No | Override arc detection (uses frontmatter arc_id by default) |
| --language | No | Override language detection (uses frontmatter language by default) |
Two outputs:
1. {source-dir}/narrative-review.md -- the structured scorecard report
2. A JSON result:

```json
{
  "success": true,
  "source_path": "insight-summary.md",
  "arc_id": "corporate-visions",
  "overall_score": 82,
  "grade": "B",
  "gates": {
    "structural": "pass",
    "critical": "pass",
    "evidence": "warn",
    "structure": "pass",
    "language": "pass"
  },
  "top_improvements": [
    "Add 3 more citations to reach minimum 15 (currently 12)",
    "Expand 'Why Now' section by ~40 words to meet 300-word minimum",
    "Add citation to uncited quantitative claim in paragraph 3 of 'Why Change'"
  ]
}
```
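Downstream automation can consume the JSON result directly. A minimal sketch, assuming a caller-chosen score floor (the `check_review` function and the `min_score` threshold are illustrative, not part of the skill contract):

```python
import json

def check_review(summary: dict, min_score: int = 70) -> bool:
    """Return True when the review succeeded, no gate failed outright,
    and the overall score meets the caller's floor (warns do not block)."""
    if not summary.get("success"):
        return False
    gates_ok = all(status != "fail" for status in summary["gates"].values())
    return gates_ok and summary["overall_score"] >= min_score

review = json.loads("""{"success": true, "overall_score": 82,
  "gates": {"structural": "pass", "evidence": "warn"}}""")
print(check_review(review))  # → True (a warn does not block, and 82 >= 70)
```

A single failed gate blocks regardless of score, which mirrors how the F grade below is tied to failing critical gates rather than to the number alone.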
| Score | Grade | Meaning |
|---|---|---|
| 90-100 | A | Publication-ready, all gates pass |
| 80-89 | B | Strong, minor improvements possible |
| 70-79 | C | Acceptable, several improvements needed |
| 60-69 | D | Below standard, significant rework needed |
| 0-59 | F | Fails critical gates, major rework required |
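The score-to-grade bands in the table above can be sketched as a simple threshold walk (function name is illustrative):

```python
def grade_for(score: int) -> str:
    """Map a 0-100 overall score to the letter grades in the table above."""
    if score >= 90:
        return "A"  # publication-ready
    if score >= 80:
        return "B"  # strong, minor improvements possible
    if score >= 70:
        return "C"  # acceptable, several improvements needed
    if score >= 60:
        return "D"  # below standard, significant rework needed
    return "F"      # fails critical gates, major rework required
```

For example, the sample result above, with an overall_score of 82, maps to grade "B".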
Validate --source-path and read the narrative's frontmatter: title, subtitle, arc_id, arc_display_name, word_count, language, date_created, source_file_count.

- Resolve arc_id from: explicit parameter > frontmatter > detection failure
- Resolve language from: explicit parameter > frontmatter > default en

Read the arc definition to know expected element names, word targets, and quality gates:
- ../narrative/references/story-arc/arc-registry.md -- for arc metadata
- ../narrative/references/story-arc/{arc_id}/arc-definition.md -- for element definitions and word targets
- ../narrative/references/language-templates.md -- for localized header names

Store the expected element names, word targets, and citation requirements.
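The arc_id and language resolution precedence described above (explicit parameter > frontmatter > default) can be sketched as (the `resolve` helper is illustrative):

```python
def resolve(explicit, frontmatter_value, default=None):
    """Apply the precedence: explicit parameter > frontmatter > default.
    Returns None when nothing resolves, which for arc_id means
    detection failure."""
    if explicit:
        return explicit
    if frontmatter_value:
        return frontmatter_value
    return default

arc_id = resolve(None, "corporate-visions")       # frontmatter wins
language = resolve(None, None, default="en")      # falls back to "en"
```

Because language has a default of "en", it always resolves; arc_id has no default, so a missing parameter and missing frontmatter yield None and the review cannot proceed.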
Evaluate the narrative against each gate category. Use the scoring rubric in references/scoring-rubric.md.
Gate evaluation order (matches narrative skill Phase 5): structural, critical, evidence, structure, language.
For each gate, assign a status of pass / warn / fail.

Write narrative-review.md to the same directory as the source file:
---