Write a complete Numerai experiment report in experiment.md (abstract, methods, results tables, decisions, next steps) and generate/link the standard show_experiment plot(s). Use after running any Numerai research experiments, or when a user asks for a “full report”, “write up”, “experiment.md update”, or “generate the standard plot”.
This skill turns an experiment run into a durable write-up: a full experiment.md plus the standard show_experiment plot(s) linked from the report.
Use the folder that contains:

- `configs/` (the configs you ran)
- `results/` (JSON metrics output)
- `predictions/` (OOF parquet output)
- `experiment.md` (the report you will write/update)

Read metrics from `results/*.json` and `predictions/*.parquet`. Treat `bmc_mean` and `bmc_last_200_eras.mean` as the primary metrics, with `corr_mean` as a sanity check.

For each run you report, include at least:

- `corr_mean`
- `bmc_mean`
- `bmc_last_200_eras.mean`
- `avg_corr_with_benchmark` (from the BMC summary)

Prefer a single markdown table with one row per model.
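A minimal sketch of building that one-row-per-model table from `results/*.json`. It assumes each results file is named after the model and holds a flat JSON dict with the keys above, with `bmc_last_200_eras` nested as `{"mean": ...}`; adjust the key access to your actual results schema.

```python
import glob
import json


def metrics_table(results_dir: str = "results") -> str:
    """Build a markdown metrics table, one row per model, from results/*.json.

    Assumed schema per file (adjust if yours differs):
      corr_mean, bmc_mean, avg_corr_with_benchmark (floats),
      bmc_last_200_eras: {"mean": float}
    """
    rows = [
        "| model | corr_mean | bmc_mean | bmc_last_200_eras.mean | avg_corr_with_benchmark |",
        "|---|---|---|---|---|",
    ]
    for path in sorted(glob.glob(f"{results_dir}/*.json")):
        with open(path) as f:
            m = json.load(f)
        name = path.split("/")[-1].removesuffix(".json")
        rows.append(
            f"| {name} | {m['corr_mean']:.4f} | {m['bmc_mean']:.4f} "
            f"| {m['bmc_last_200_eras']['mean']:.4f} "
            f"| {m['avg_corr_with_benchmark']:.4f} |"
        )
    return "\n".join(rows)
```

The returned string can be pasted directly into the Results section of experiment.md.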
Update/create experiment.md with these sections (keep it crisp but complete): abstract, methods, results tables, decisions, and next steps.
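A minimal skeleton for the report, using the section names above (headings and placeholder text are illustrative):

```markdown
# <experiment_name>

## Abstract
One-paragraph summary: hypothesis, what was run, headline BMC result.

## Methods
Configs used (link into configs/), data version, validation setup.

## Results
| model | corr_mean | bmc_mean | bmc_last_200_eras.mean | avg_corr_with_benchmark |
|---|---|---|---|---|

![show_experiment plot](plots/<plot_name>.png)

## Decisions
What to keep, drop, or promote, and why.

## Next steps
Concrete follow-up experiments.
```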
Default standard plot (baseline = benchmark predictions):

```shell
PYTHONPATH=numerai python3 -m agents.code.analysis.show_experiment benchmark <best_model_results_name> \
  --base-benchmark-model v52_lgbm_ender20 \
  --benchmark-data-path numerai/v5.2/full_benchmark_models.parquet \
  --start-era 575 --dark \
  --output-dir numerai/agents/experiments/<experiment_name> \
  --baselines-dir numerai/agents/baselines
```
Then embed it in experiment.md with a relative link:

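The original link example is missing here; a hypothetical embed, assuming the plot was written under a `plots/` subfolder next to experiment.md (filename illustrative):

```markdown
![show_experiment plot](plots/show_experiment_<experiment_name>.png)
```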
If you have multiple candidate models, either generate the standard plot for each or plot only the best candidate. Before finishing, check that:

- plots are saved under `plots/`
- `experiment.md` links resolve (use relative paths)
- the numbers you report match `results/*.json`
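The link check can be automated; a small regex-based sketch (hypothetical helper, handles only inline-style `[...](path)` and `![...](path)` markdown links):

```python
import os
import re


def check_links(md_path: str = "experiment.md") -> list[str]:
    """Return relative link targets in a markdown file that do not exist on disk."""
    base = os.path.dirname(os.path.abspath(md_path))
    with open(md_path) as f:
        text = f.read()
    broken = []
    # Capture the target of inline links/images: [text](target) or ![alt](target)
    for target in re.findall(r"!?\[[^\]]*\]\(([^)\s]+)\)", text):
        if target.startswith(("http://", "https://", "#")):
            continue  # skip absolute URLs and in-page anchors
        if not os.path.exists(os.path.join(base, target)):
            broken.append(target)
    return broken
```

Run it from the experiment folder; an empty list means every relative link in the report resolves.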