Use when experiments complete to judge which claims the results support, which they don't, and what evidence is still missing. Codex MCP evaluates results against intended claims and routes to the next action (pivot, supplement, or confirm). Use after experiments finish — before writing the paper or running ablations.
Experiments produce numbers; this gate decides what those numbers mean. Collect results from available sources, get a Codex judgment, then auto-route based on the verdict.
Gather experiment data from whatever sources are available in the project:
- wandb.Api().run("<entity>/<project>/<run_id>").history() — metrics, training curves, comparisons
- ssh server "tail -100 /path/to/training.log" if no other source

Assemble the key information:
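As a minimal sketch, the pull-and-assemble step might look like this. The run path and metric name are placeholders, and summarize_history is a hypothetical helper, not part of the pipeline:

```python
from typing import Dict, List

def summarize_history(rows: List[Dict[str, float]], metric: str) -> Dict[str, float]:
    """Reduce a per-step metric history to the key numbers worth sending to Codex."""
    values = [row[metric] for row in rows if metric in row]
    return {"final": values[-1], "best": max(values), "steps_logged": len(values)}

# With wandb installed, rows would come from the API, e.g.:
#   import wandb
#   history = wandb.Api().run("<entity>/<project>/<run_id>").history()
#   rows = history.to_dict("records")
rows = [{"val/acc": 0.61}, {"val/acc": 0.72}, {"val/acc": 0.70}]
print(summarize_history(rows, "val/acc"))
```

The same summary dict works for log-tail sources too, as long as the parsed lines are shaped into per-step dicts first.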
Send the collected results to Codex for objective evaluation:
mcp__codex__codex:
config: {"model_reasoning_effort": "xhigh"}
prompt: |
RESULT-TO-CLAIM EVALUATION
I need you to judge whether experimental results support the intended claim.
Intended claim: [the claim these experiments test]
Experiments run:
[list experiments with method, dataset, metrics]
Results:
[paste key numbers, comparison deltas, significance]
Baselines:
[baseline numbers and sources — reproduced or from paper]
Known caveats:
[any confounding factors, limited datasets, missing comparisons]
Please evaluate:
1. claim_supported: yes | partial | no
2. what_results_support: what the data actually shows
3. what_results_dont_support: where the data falls short of the claim
4. missing_evidence: specific evidence gaps
5. suggested_claim_revision: if the claim should be strengthened, weakened, or reframed
6. next_experiments_needed: specific experiments to fill gaps (if any)
7. confidence: high | medium | low
Be honest. Do not inflate claims beyond what the data supports.
A single positive result on one dataset does not support a general claim.
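Assembling that prompt from the collected results can be sketched as plain string formatting. The slot names are placeholders mirroring the template above:

```python
TEMPLATE = """RESULT-TO-CLAIM EVALUATION
Intended claim: {claim}
Experiments run:
{experiments}
Results:
{results}
Baselines:
{baselines}
Known caveats:
{caveats}"""

def build_prompt(claim: str, experiments: str, results: str,
                 baselines: str, caveats: str) -> str:
    """Fill the evaluation template before sending it via mcp__codex__codex."""
    return TEMPLATE.format(claim=claim, experiments=experiments,
                           results=results, baselines=baselines, caveats=caveats)

print(build_prompt("method X beats baseline Y on task Z",
                   "- X vs Y on Z, accuracy",
                   "- X: 0.81, Y: 0.78 (+0.03)",
                   "- Y: 0.78 (reproduced)",
                   "- single dataset only"))
```

Keeping the caveats slot mandatory is deliberate: omitting known confounds is the easiest way to get an inflated verdict back.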
Extract structured fields from Codex response:
- claim_supported: yes | partial | no
- what_results_support: "..."
- what_results_dont_support: "..."
- missing_evidence: "..."
- suggested_claim_revision: "..."
- next_experiments_needed: "..."
- confidence: high | medium | low
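A minimal sketch of pulling these fields out of a free-text Codex reply, assuming the reply echoes the labels above as "field: value" lines:

```python
import re
from typing import Dict

FIELDS = [
    "claim_supported", "what_results_support", "what_results_dont_support",
    "missing_evidence", "suggested_claim_revision", "next_experiments_needed",
    "confidence",
]

def extract_fields(reply: str) -> Dict[str, str]:
    """Pick out 'field: value' lines; fields Codex omitted stay absent."""
    out: Dict[str, str] = {}
    for field in FIELDS:
        match = re.search(rf"{field}\s*:\s*(.+)", reply, re.IGNORECASE)
        if match:
            out[field] = match.group(1).strip()
    return out

reply = "1. claim_supported: partial\n4. missing_evidence: no multi-seed runs\n7. confidence: medium"
print(extract_fields(reply))
```

Missing fields are left absent rather than defaulted, so a malformed reply surfaces as a gap instead of a silently fabricated verdict.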
Route on the verdict:

- no — Claim not supported → pivot: rework the approach or revise the claim.
- partial — Claim partially supported → supplement with the missing experiments. A repeated partial on the same claim → record the analysis in findings.md and consider whether to narrow the claim scope or switch ideas.
- yes — Claim supported → confirm and proceed to /aris-ablation-planner.

Skip this step entirely if research-wiki/ does not exist.
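The routing can be sketched as a small dispatcher. The action strings are illustrative; only the verdict values come from the evaluation fields:

```python
def route(verdict: str, partials_on_claim: int = 1) -> str:
    """Map a claim_supported verdict to the next pipeline action."""
    if verdict == "yes":
        return "confirm: proceed to /aris-ablation-planner"
    if verdict == "partial":
        if partials_on_claim >= 2:  # repeated partial on the same claim
            return "record analysis in findings.md; narrow the claim or switch ideas"
        return "supplement: run the experiments that fill the evidence gaps"
    return "pivot: rework the approach or revise the claim"

print(route("partial"))
```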
if research-wiki/ exists:
# 1. Create experiment page
Create research-wiki/experiments/<exp_id>.md with:
- node_id: exp:<id>
- idea_id: idea:<active_idea>
- date, hardware, duration, metrics
- verdict, confidence, reasoning summary
# 2. Update claim status
for each claim resolved by this verdict:
if verdict == "yes":
Update claim page: status → supported
python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type supports --evidence "<metric>"
elif verdict == "partial":
Update claim page: status → partial
python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type supports --evidence "partial"
else:
Update claim page: status → invalidated
python3 tools/research_wiki.py add_edge research-wiki/ --from "exp:<id>" --to "claim:<cid>" --type invalidates --evidence "<why>"
# 3. Update idea outcome
Update research-wiki/ideas/<idea_id>.md:
- outcome: positive | mixed | negative
- If negative: fill "Failure / Risk Notes" and "Lessons Learned"
- If positive: fill "Actual Outcome" and "Reusable Components"
# 4. Rebuild + log
python3 tools/research_wiki.py rebuild_query_pack research-wiki/
python3 tools/research_wiki.py log research-wiki/ "result-to-claim: exp:<id> verdict=<verdict> for idea:<idea_id>"
# 5. Re-ideation suggestion
Count failed/partial ideas since last /aris-idea-creator run.
If >= 3: print "💡 3+ ideas tested since last ideation. Consider re-running /aris-idea-creator — the wiki now knows what doesn't work."
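Step 5's threshold check can be sketched as below. Treating mixed and negative outcomes as "tested without success" is an assumption about how idea outcomes map to the failed/partial count:

```python
from typing import List

def suggest_reideation(outcomes_since_last_run: List[str], threshold: int = 3) -> bool:
    """outcomes_since_last_run: idea outcomes (positive | mixed | negative)
    recorded since the last /aris-idea-creator run."""
    unsuccessful = [o for o in outcomes_since_last_run if o in ("negative", "mixed")]
    return len(unsuccessful) >= threshold

if suggest_reideation(["negative", "mixed", "negative"]):
    print("💡 3+ ideas tested since last ideation. Consider re-running /aris-idea-creator.")
```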
If confidence is low, treat the judgment as inconclusive and add experiments rather than committing to a claim. If Codex is unavailable, mark the verdict [pending Codex review] — do not block the pipeline.