Execute research code inside isolated Docker containers for safe replication, experiments, and benchmarks. Use when the user selects Docker as the execution environment or asks to run code safely, in isolation, or in a sandbox.
Run research code inside Docker containers while Feynman stays on the host. The container gets the project files, runs the commands, and results sync back.
Used by /replicate and /autoresearch.

For Python research code (most common):
```shell
docker run --rm -v "$(pwd)":/workspace -w /workspace python:3.11 bash -c "
  pip install -r requirements.txt &&
  python train.py
"
```
For projects with a Dockerfile:

```shell
docker build -t feynman-experiment .
docker run --rm -v "$(pwd)/results":/workspace/results feynman-experiment
```
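If the project does not ship a Dockerfile, a minimal one can be sketched. This assumes a pip-based Python project with the same layout used above (`requirements.txt`, `train.py`); adjust to the actual entry point:

```dockerfile
FROM python:3.11
WORKDIR /workspace
# Copy dependency list first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "train.py"]
```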
For GPU workloads:

```shell
docker run --rm --gpus all -v "$(pwd)":/workspace -w /workspace pytorch/pytorch:latest bash -c "
  pip install -r requirements.txt &&
  python train.py
"
```
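Before launching a long training run, it can help to confirm the container actually sees the GPU. A quick sanity check, assuming the NVIDIA Container Toolkit is installed on the host:

```shell
# Prints True if CUDA is visible inside the container, False otherwise
docker run --rm --gpus all pytorch/pytorch:latest \
  python -c "import torch; print(torch.cuda.is_available())"
```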
| Research type | Base image |
|---|---|
| Python ML/DL | `pytorch/pytorch:latest` or `tensorflow/tensorflow:latest-gpu` |
| Python general | `python:3.11` |
| Node.js | `node:20` |
| R / statistics | `rocker/r-ver:4` |
| Julia | `julia:1.10` |
| Multi-language | `ubuntu:24.04` with manual installs |
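For the multi-language case, install what the project needs on top of the plain Ubuntu base. A sketch for a mixed Python + R project; the script names (`analysis.py`, `stats.R`) are illustrative placeholders:

```shell
# Ubuntu base with manual installs; one-shot run, container removed afterwards
docker run --rm -v "$(pwd)":/workspace -w /workspace ubuntu:24.04 bash -c "
  apt-get update &&
  apt-get install -y --no-install-recommends python3 python3-pip r-base &&
  python3 analysis.py &&
  Rscript stats.R
"
```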
For iterative experiments (like /autoresearch), create a named container instead of using `--rm`. Choose a descriptive name for the experiment:
```shell
docker create --name <name> -v "$(pwd)":/workspace -w /workspace python:3.11 tail -f /dev/null
docker start <name>
docker exec <name> bash -c "pip install -r requirements.txt"
docker exec <name> bash -c "python train.py"
```
This preserves installed packages across iterations. Clean up when done:

```shell
docker stop <name> && docker rm <name>
```
Add `--network none` to any `docker run` or `docker create` command for full network isolation.
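Since installing dependencies requires network access, full isolation works best with an image that already contains them, such as the feynman-experiment image built above:

```shell
# All networking disabled; the image must already include every dependency
docker run --rm --network none -v "$(pwd)/results":/workspace/results feynman-experiment
```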