Multi-reviewer quality check for knowledge work. Runs strategic alignment and data accuracy reviewers on plans, briefs, and strategy docs.
<review_target> #$ARGUMENTS </review_target>
Two automated reviewers check your work for the errors that damage credibility: wrong strategy and wrong data.
After /kw:plan to validate a plan before executing
Before sharing a strategy doc, brief, or analysis with stakeholders
"Review this plan", "Check this brief", "Is the data right?"
Any knowledge work artifact that will be seen by decision-makers
The most recently produced artifact. Determined by context:
| Situation | What to review |
|---|---|
| /kw:plan just ran | The plan file it produced |
| User points to a file | That file |
| User pastes content | That content |
| Ambiguous | Ask: "What should I review? Provide a file path or paste the content." |
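The target-resolution precedence above can be sketched as a simple fall-through check (a minimal illustration only; the function and parameter names are hypothetical, not part of this command):

```python
def resolve_review_target(plan_file=None, user_file=None, pasted=None):
    """Pick the review target by precedence; names here are illustrative."""
    if plan_file:   # /kw:plan just ran
        return ("file", plan_file)
    if user_file:   # user pointed at a file
        return ("file", user_file)
    if pasted:      # user pasted content directly
        return ("inline", pasted)
    # Ambiguous: fall back to asking the user
    return ("ask", "What should I review? Provide a file path or paste the content.")
```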
Read the file or accept pasted content. If the content references data (metrics, conversion rates, financial figures), also load:
Any data context files referenced in the project's CLAUDE.md
Check freshness of any data files cited
<parallel_tasks>
Strategic Alignment Reviewer — Launch Task agent: compound-knowledge:review:strategic-alignment-reviewer
Data Accuracy Reviewer — Launch Task agent: compound-knowledge:review:data-accuracy-reviewer
</parallel_tasks>
Both agents return findings in [P1|P2|P3] format. Wait for both to complete before proceeding.
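Merging the two reviewers' output can be sketched roughly as follows (this assumes each finding line leads with a [P1], [P2], or [P3] tag; the exact format the agents emit may vary):

```python
import re

def group_findings(raw_findings):
    """Bucket finding lines by their leading [P1]/[P2]/[P3] severity tag."""
    buckets = {"P1": [], "P2": [], "P3": []}
    for line in raw_findings:
        m = re.match(r"\[(P[123])\]\s*(.*)", line.strip())
        if m:
            buckets[m.group(1)].append(m.group(2))
    return buckets

# Findings from both reviewers are concatenated before grouping
findings = group_findings([
    "[P1] Metric cited from wrong source",
    "[P2] Conversion rate has no comparison basis",
])
```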
If the content will be published, emailed, or posted publicly:
Check for AI writing patterns (generic phrasing, stock transitions, vague claims)
Check tone and voice consistency with project style guides
If the content is internal (plan, brief, analysis for the team): skip this step.
Combine findings from both reviewers. Group all findings by severity:
## Review: [Document Title]
### P1 — Blocks Shipping
[These must be fixed before sharing. Wrong data, wrong goal, unfalsifiable hypothesis.]
### P2 — Should Fix
[Important but not blocking. Missing sources, unclear metrics, scope concerns.]
### P3 — Nice to Have
[Minor refinements. Wording, additional context, formatting.]
### Clean
[Sections that passed all checks — explicitly note what's good.]
Severity definitions:
| Severity | What qualifies | Examples |
|---|---|---|
| P1 Critical | Factual error, wrong data source, missing goal, unfalsifiable hypothesis | "Metric cited from wrong source" |
| P2 Important | Missing source citation, stale data, unclear success metric | "Conversion rate has no comparison basis" |
| P3 Nice-to-have | Minor framing, additional context, formatting | "Could specify the time period for this metric" |
Use AskUserQuestion:
Question: "Review complete. [N] findings ([P1 count] critical, [P2 count] important). What next?"
Options:
/kw:work — Plan passes. Start executing it
/kw:compound — Save review insights as learnings
P1 = hard gate. A factual error in a strategy doc is worse than a typo. Say so clearly.
Verify, don't assume. If a number is cited, check it against the actual source if possible. Don't just check formatting.
Flag staleness. Data older than 48 hours gets a freshness warning. Data older than 7 days gets a P2.
Be specific. "Data might be wrong" is not useful. "Revenue cited as $X but source shows $Y as of [date]" is.
Credit what's good. Don't only flag problems. Note sections that are well-grounded and clearly structured.
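The staleness thresholds can be expressed as a small check (a sketch: the 48-hour and 7-day cutoffs come from the rule above; everything else is illustrative):

```python
from datetime import datetime, timedelta

def freshness_flag(data_timestamp, now=None):
    """Return a severity flag based on how old a cited data point is."""
    now = now or datetime.now()
    age = now - data_timestamp
    if age > timedelta(days=7):
        return "P2"        # stale enough to be an important finding
    if age > timedelta(hours=48):
        return "warning"   # freshness warning, not yet a P2
    return "fresh"
```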
When invoked with disable-model-invocation context (e.g., from an orchestrator or automation):