Review and prioritize features using RICE, WSJF, or Kano scoring frameworks, then create GitHub issues for suggestions.
Run make test-feature-review to verify scoring logic after changes.
Review implemented features and suggest new ones using evidence-based prioritization. Create GitHub issues for accepted suggestions.
Feature decisions rely on data. Every feature involves tradeoffs that require evaluation. This skill uses hybrid RICE+WSJF scoring with Kano classification to prioritize work and generates actionable GitHub issues for accepted suggestions.
Discover and categorize existing features:
/feature-review --inventory
Evaluate features against the prioritization framework:
/feature-review
Review gaps and suggest new features:
/feature-review --suggest
Use the tome plugin to adjust scores with external evidence:
/feature-review --research
Create issues for accepted suggestions:
/feature-review --suggest --create-issues
Identify features by analyzing:
Output: Feature inventory table.
Classify each feature along two axes:
Axis 1: Proactive vs Reactive
| Type | Definition | Examples |
|---|---|---|
| Proactive | Anticipates user needs. | Suggestions, prefetching. |
| Reactive | Responds to explicit input. | Form handling, click actions. |
Axis 2: Static vs Dynamic
| Type | Update Pattern | Storage Model |
|---|---|---|
| Static | Incremental, versioned. | File-based, cached. |
| Dynamic | Continuous, streaming. | Database, real-time. |
See classification-system.md for details.
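The two axes form a 2x2 grid that every feature lands in. A minimal in-memory sketch of that classification (the type and field names are illustrative, not part of the skill):

```python
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    PROACTIVE = "proactive"  # anticipates user needs (suggestions, prefetching)
    REACTIVE = "reactive"    # responds to explicit input (forms, clicks)

class Lifecycle(Enum):
    STATIC = "static"        # incremental, versioned; file-based, cached
    DYNAMIC = "dynamic"      # continuous, streaming; database, real-time

@dataclass
class Feature:
    name: str
    interaction: Interaction
    lifecycle: Lifecycle

# A prefetching feature is proactive (no explicit user request)
# and dynamic (continuously updated at runtime).
prefetch = Feature("link prefetching", Interaction.PROACTIVE, Lifecycle.DYNAMIC)
print(prefetch.interaction.value, prefetch.lifecycle.value)  # proactive dynamic
```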
Apply hybrid RICE+WSJF scoring:
```
Value Score    = (Reach + Impact + Business Value + Time Criticality) / 4
Cost Score     = (Effort + Risk + Complexity) / 3
Feature Score  = Value Score / Cost Score
Adjusted Score = Feature Score * Confidence
```
Scoring Scale: Fibonacci (1, 2, 3, 5, 8, 13).
See scoring-framework.md for score thresholds and the full framework.
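Expressed as code, the scoring formulas above amount to the following. This is a minimal sketch; the function name and the validation step are illustrative, not part of the skill:

```python
FIBONACCI = {1, 2, 3, 5, 8, 13}

def feature_score(reach, impact, business_value, time_criticality,
                  effort, risk, complexity, confidence=1.0):
    """Hybrid RICE+WSJF score: (value / cost) * confidence.

    All seven factors use the Fibonacci scale (1, 2, 3, 5, 8, 13);
    confidence is a 0-1 multiplier applied last.
    """
    factors = [reach, impact, business_value, time_criticality,
               effort, risk, complexity]
    if any(f not in FIBONACCI for f in factors):
        raise ValueError("factors must be on the Fibonacci scale 1, 2, 3, 5, 8, 13")
    value_score = (reach + impact + business_value + time_criticality) / 4
    cost_score = (effort + risk + complexity) / 3
    return (value_score / cost_score) * confidence

# Example: high reach/value, moderate cost, 80% confidence.
# value = (8+5+8+3)/4 = 6.0, cost = (3+2+3)/3 ≈ 2.67, score = 2.25 * 0.8 = 1.8
print(feature_score(8, 5, 8, 3, 3, 2, 3, confidence=0.8))
```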
Evaluate each feature across quality dimensions:
| Dimension | Question | Scale |
|---|---|---|
| Quality | Does it deliver correct results? | 1-5 |
| Latency | Does it meet timing requirements? | 1-5 |
| Token Usage | Is it context-efficient? | 1-5 |
| Resource Usage | Is CPU/memory reasonable? | 1-5 |
| Redundancy | Does it handle failures gracefully? | 1-5 |
| Readability | Can others understand it? | 1-5 |
| Scalability | Will it handle 10x load? | 1-5 |
| Integration | Does it play well with others? | 1-5 |
| API Surface | Is it backward compatible? | 1-5 |
See tradeoff-dimensions.md for criteria.
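One way to make the rubric actionable is to flag any dimension that falls below an acceptance floor. A minimal sketch, with illustrative names and an assumed floor of 3 (the skill does not define a floor):

```python
DIMENSIONS = [
    "quality", "latency", "token_usage", "resource_usage", "redundancy",
    "readability", "scalability", "integration", "api_surface",
]

def tradeoff_flags(ratings, floor=3):
    """Return the dimensions rated below `floor` on the 1-5 scale."""
    for dim in DIMENSIONS:
        if not 1 <= ratings.get(dim, 0) <= 5:
            raise ValueError(f"missing or out-of-range rating: {dim}")
    return [dim for dim in DIMENSIONS if ratings[dim] < floor]

# A feature that is solid everywhere except latency gets one flag.
ratings = {dim: 4 for dim in DIMENSIONS}
ratings["latency"] = 2
print(tradeoff_flags(ratings))  # ['latency']
```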
Triggered by the --research flag. Requires the tome plugin.
Use tome's multi-source research to adjust scoring factors with external evidence. This phase runs between tradeoff analysis and gap analysis.
Synthesis is delegated to tome:synthesize. See research-enrichment.md for the full enrichment protocol, delta calculation, and graceful degradation behavior.
Graceful degradation: if tome is not installed, the skill prints a warning and proceeds with the initial scores unchanged.
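The degradation path can be sketched as follows, assuming tome exposes a `tome` executable on PATH (an assumption; the actual plugin detection mechanism may differ). Only the fallback branch is implemented here; the enrichment itself is defined in research-enrichment.md:

```python
import shutil

def research_enrich(scores):
    """Adjust scores with tome research, degrading gracefully if tome is absent.

    Assumption: the tome plugin is detectable as a `tome` executable on PATH.
    """
    if shutil.which("tome") is None:
        print("warning: tome plugin not installed; proceeding with initial scores")
        return dict(scores)
    # Enrichment (multi-source research, delta calculation) is specified in
    # research-enrichment.md; this sketch only demonstrates the fallback, so
    # scores are returned unchanged either way.
    return dict(scores)
```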
Deferred capture for high-scoring suggestions: after the user confirms which suggestions to act on, preserve any high-scoring suggestion (score > 2.5) that was not acted on as a deferred item. Run once per skipped high-scoring suggestion:
```sh
python3 scripts/deferred_capture.py \
  --title "<suggestion title>" \
  --source feature-review \
  --context "RICE score: <score>. <description>"
```
This runs automatically without prompting the user. Suggestions with scores of 2.5 or below do not need to be captured.
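The capture rule above can be sketched as a filter that builds one command per skipped high-scoring suggestion. The suggestion dict shape and function name are illustrative; each returned argv list would then be passed to subprocess.run:

```python
THRESHOLD = 2.5  # suggestions scoring at or below this are not captured

def deferred_commands(suggestions, acted_on):
    """Build deferred_capture.py argv lists for skipped high-scoring suggestions.

    `suggestions` is a list of dicts with title/score/description keys
    (an illustrative shape); `acted_on` is the set of titles the user
    chose to act on.
    """
    cmds = []
    for s in suggestions:
        if s["title"] in acted_on or s["score"] <= THRESHOLD:
            continue  # acted on, or not high-scoring: nothing to capture
        cmds.append([
            "python3", "scripts/deferred_capture.py",
            "--title", s["title"],
            "--source", "feature-review",
            "--context", f"RICE score: {s['score']}. {s['description']}",
        ])
    return cmds
```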
Feature-review uses opinionated defaults but allows customization.
Create .feature-review.yaml in project root:
```yaml
# .feature-review.yaml
```
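The example above is truncated at the filename comment. A hypothetical sketch of settings such a file could hold (every key below is an illustrative assumption, not a documented default):

```yaml
# All keys below are illustrative assumptions, not documented defaults.
scoring:
  deferred_capture_threshold: 2.5   # mirrors the score > 2.5 capture rule
  default_confidence: 1.0           # multiplier when no confidence is given
suggestions:
  create_issues: false              # set true to mirror --create-issues
```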