Integrate mini-swe-agent (SWE-Agent nano v2.x) with our Vertex AI + LiteLLM proxy infrastructure. This note documents setup, model routing, config pitfalls, and working configurations.
Mini-SWE-Agent v2.x is a nano version of SWE-Agent -- an autonomous coding agent that executes steps in a local environment. Integrating it with our Vertex AI proxy requires navigating several config pitfalls.
python3 -m venv ~/miniswe-venv
source ~/miniswe-venv/bin/activate
pip install 'litellm[google]' mini-swe-agent
# Fix permissions if needed
sudo chown -R $USER:$USER ~/miniswe-venv/
Mini-swe loads ~/.config/mini-swe-agent/.env with load_dotenv(override=True), meaning env vars there override ALL YAML config values, including:
- MSWEA_MODEL_NAME → overrides model.model_name in YAML
- OPENAI_API_KEY → overrides model.model_kwargs.api_key
- MSWEA_COST_TRACKING → overrides model.cost_tracking

The default .env usually contains: MSWEA_MODEL_NAME='openrouter/free'
This silently hijacks your YAML config and routes requests to OpenRouter instead.
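One concrete fix is to rewrite the dotenv entry so it names the model you actually want. The demo below applies that edit to a scratch copy; the real file lives at ~/.config/mini-swe-agent/.env, and the replacement model name is illustrative:

```shell
# Demo on a scratch copy of the dotenv file; the real file is
# ~/.config/mini-swe-agent/.env. 'my-proxy-model' is a placeholder.
envfile=$(mktemp)
echo "MSWEA_MODEL_NAME='openrouter/free'" > "$envfile"
# Replace the default model line with the model you actually want to use
sed -i "s|^MSWEA_MODEL_NAME=.*|MSWEA_MODEL_NAME='my-proxy-model'|" "$envfile"
cat "$envfile"   # MSWEA_MODEL_NAME='my-proxy-model'
rm "$envfile"
```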
Workarounds:
- Edit .env to set MSWEA_MODEL_NAME to your desired model
- Delete .env before running
- unset MSWEA_MODEL_NAME before the command

Mini-swe uses litellm directly. To route through our LiteLLM proxy at localhost:4000:
# mini-swe-vertex.yaml
model_type: minisweagent.models.litellm_textbased_model.LitellmTextbasedModel
agent_type: minisweagent.agents.default.DefaultAgent
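The fragment above only selects the model and agent classes. A plausible completion, using the model.model_name and model.model_kwargs.* keys referenced in the pitfall section, might look like the sketch below; the field layout, model name, and proxy key are assumptions to verify against your mini-swe-agent version and proxy setup:

```yaml
model:
  model_name: "openai/my-proxy-model"   # illustrative; must match a route on the proxy
  model_kwargs:
    api_base: "http://localhost:4000"   # the local LiteLLM proxy
    api_key: "sk-placeholder"           # placeholder; depends on your proxy's key config
```

Prefixing the model name with "openai/" is a common way to make litellm speak the OpenAI-compatible protocol that the LiteLLM proxy exposes, but confirm the exact routing convention for your proxy deployment.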