Run adversarial multi-round critique on a research idea through @critic and @architect. Produces a full review report with scores, weaknesses, improvement suggestions, and a final verdict.
This skill orchestrates a rigorous adversarial review of a research idea, mimicking the NeurIPS/ICML program-committee process. Run it before investing effort in experiment design or writing.
Use this skill when:
You need at minimum:
Launch @critic with:
@critic's job:
- Search arXiv (via `search_arxiv_papers`) for papers that directly address this idea
- Classify the overlap with the closest prior work
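The novelty gate above can be sketched as a small helper. The 50% pass threshold comes from this skill's own advancement rule; the bucket names and the 20% cut-off are illustrative assumptions, not part of the skill:

```python
def classify_overlap(overlap_pct: float) -> str:
    """Bucket the estimated overlap with prior work.

    Only the 50% fail threshold is specified by the skill; the
    bucket names and the 20% cut-off are illustrative assumptions.
    """
    if overlap_pct >= 50:
        return "duplicate"  # novelty check fails; stop before the full review
    if overlap_pct >= 20:
        return "related"    # proceed, but require explicit differentiation
    return "novel"          # proceed to the full NeurIPS-style review
```

For example, `classify_overlap(65)` returns `"duplicate"`, which short-circuits the review before any scoring effort is spent.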
If novelty check passes (overlap < 50%), run the full NeurIPS-style review:
Scores (1–10):
- Novelty: X — [justification with paper citations]
- Feasibility: X — [compute/data requirements assessment]
- Significance: X — [impact if successful]
- Clarity: X — [how well-defined is the contribution]
- Overall: X
Weaknesses (must identify at least 3):
1. ...
2. ...
3. ...
Strengths (must identify at least 2):
1. ...
2. ...
Concrete improvements:
1. ...
2. ...
Verdict: ACCEPT / WEAK_ACCEPT / WEAK_REJECT / REJECT
Advancement threshold: Overall ≥ 6/10.
If the verdict is WEAK_REJECT or the overall score is 5–6, have @architect revise the idea to address the listed weaknesses, then re-run the critique on the revised version.
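The decision logic above can be summarized in one function. The thresholds (Overall ≥ 6 advances; WEAK_REJECT or a 5–6 Overall triggers a revision round) follow the skill text; the function name and return labels are illustrative assumptions:

```python
def advancement(overall: float, verdict: str) -> str:
    """Map a review outcome to the next action.

    Thresholds follow the skill text; names are assumptions:
    - Overall >= 6 advances to experiment design
    - WEAK_REJECT, or an Overall of 5-6, triggers a revision round
    - REJECT abandons the idea
    """
    if verdict == "REJECT":
        return "abandon"
    if verdict == "WEAK_REJECT" or 5 <= overall < 6:
        return "revise"   # improve the idea, then re-run the critique
    if overall >= 6:
        return "advance"  # proceed to `experiment-design`
    return "abandon"
```

This treats the "5–6" band as 5 ≤ Overall < 6, since an Overall of exactly 6 already meets the advancement threshold.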
Update the idea in the idea store using `idea_store` with action `update`:
- Set `critique_done: true`
- Fill in the `scores` field
- Set `status` to `validated` if Overall ≥ 6, or `abandoned` if rejected
- Save the report below to the `notes` field

# Critique Report: [Idea Title]
## Novelty Check
[Overlap classification + conflicting papers found]
## Review Scores
| Dimension | Score | Key Reason |
|--------------|-------|-------------------------------------|
| Novelty | X/10 | ... |
| Feasibility | X/10 | ... |
| Significance | X/10 | ... |
| Clarity | X/10 | ... |
| Overall | X/10 | ... |
## Weaknesses
1. ...
2. ...
3. ...
## Strengths
1. ...
2. ...
## Improvements Applied
[If a revised version was generated]
## Verdict
[ACCEPT / WEAK_ACCEPT / WEAK_REJECT / REJECT] — Proceed to `experiment-design`? [Yes / No]
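Once the verdict is in, the `idea_store` update described earlier can be sketched as building a single payload. The field names (`critique_done`, `scores`, `status`, `notes`) come from this skill; the function name, payload shape, and exact `idea_store` call signature are assumptions for illustration:

```python
def build_update(idea_id: str, scores: dict, verdict: str, report_md: str) -> dict:
    """Assemble an `idea_store` update payload (shape is an assumption).

    Field names come from the skill text: critique_done, scores,
    status (validated if Overall >= 6, else abandoned), and notes,
    which holds the full Critique Report in markdown.
    """
    overall = scores["overall"]
    return {
        "action": "update",
        "id": idea_id,
        "critique_done": True,
        "scores": scores,
        "status": "validated" if overall >= 6 else "abandoned",
        "notes": report_md,  # the rendered Critique Report template
    }
```

For example, an idea scoring 7/10 overall would be stored with `status: "validated"` and its full report in `notes`, ready for the `experiment-design` stage.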