How to add a local Ollama model to OpenClaw as a selectable provider with proper context window and auth configuration.
Make sure Ollama is running locally (default endpoint: http://localhost:11434), then pull the model with `ollama pull <model>:<tag>`. Ollama models default to small context windows (often 8192 tokens). For a larger context, either pull a tag that includes it or create a custom Modelfile:

```shell
# Option A: Pull a tag with context baked in (if available)
ollama pull gemma4:e4b-128k

# Option B: Create a custom Modelfile for a larger context
cat <<'EOF' > /tmp/Modelfile
FROM gemma4:e4b
PARAMETER num_ctx 131072
EOF
ollama create gemma4:e4b-128k -f /tmp/Modelfile
```

Verify the model has the correct context:

```shell
ollama show gemma4:e4b-128k --modelfile | grep num_ctx
# Should show: PARAMETER num_ctx 131072
```
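You can also cross-check over HTTP. This is a sketch assuming Ollama is serving on the default port and that its `/api/show` endpoint returns the Modelfile parameters (field names here are assumptions; adjust to what your Ollama version returns):

```shell
# Optional: query the running server for the model's parameters
curl -s http://localhost:11434/api/show -d '{"model": "gemma4:e4b-128k"}' \
  | jq -r '.parameters'
# The output should include the num_ctx 131072 line
```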
Add or update the `ollama-local` provider under `models.providers`. Use the `openai-completions` API (not the native Ollama API) to avoid context-window probe issues:
```json
{
  "models": {
    "providers": {
      "ollama-local": {
        "baseUrl": "http://localhost:11434/v1",
        "api": "openai-completions",
        "apiKey": "ollama-local",
        "models": [
          {
            "id": "gemma4:e4b-128k",
            "name": "Gemma 4 E4B 128k (Local)",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```
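Before wiring the provider into OpenClaw, you can sanity-check the OpenAI-compatible endpoint directly. A minimal sketch, assuming Ollama is running with the tag created above:

```shell
# Hit Ollama's OpenAI-compatible chat endpoint directly
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemma4:e4b-128k",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8
      }'
# A JSON chat.completion response confirms the /v1 baseUrl and model id are correct
```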
Critical notes:

- `baseUrl` must be `http://localhost:11434/v1` (with `/v1`): this is Ollama's OpenAI-compatible endpoint.
- `api` must be `openai-completions`: the native `ollama` API has a bug where it probes the context window and always reports 8192, regardless of the model's actual `num_ctx`.
- `contextWindow` must be >= 16000 (OpenClaw's hard minimum); >= 32000 is recommended for tool use.
- `id` must exactly match the Ollama model tag (e.g., `gemma4:e4b-128k`).

Under `agents.defaults.models`, add the model so it appears in the model picker:
```json
{
  "agents": {
    "defaults": {
      "models": {
        "ollama-local/gemma4:e4b-128k": {
          "alias": "Gemma4-Local-128k"
        }
      }
    }
  }
}
```
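Both fragments live in the same `openclaw.json`; a merged sketch of how the two top-level keys sit side by side (the `"..."` stands in for the provider config shown earlier):

```json
{
  "models": {
    "providers": {
      "ollama-local": { "...": "provider config from above" }
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "ollama-local/gemma4:e4b-128k": { "alias": "Gemma4-Local-128k" }
      }
    }
  }
}
```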
Add a dummy API key to the auth-profiles store. Ollama doesn't need real auth, but OpenClaw's `openai-completions` adapter requires an entry. Edit `~/.openclaw/agents/main/agent/auth-profiles.json` and add:
```json
{
  "ollama-local:default": {
    "type": "api_key",
    "provider": "ollama-local",
    "key": "ollama"
  }
}
```
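Rather than hand-editing, the entry can be merged in with `jq` (a sketch; `jq` assumed installed, and the temp file here is a stand-in for the real `auth-profiles.json` path):

```shell
# Merge the ollama-local entry into an auth-profiles JSON object with jq.
# Demonstrated on a temp copy; substitute the real auth-profiles.json path.
f=$(mktemp)
echo '{}' > "$f"    # stand-in for the existing file contents
jq '. + {"ollama-local:default": {"type": "api_key", "provider": "ollama-local", "key": "ollama"}}' \
  "$f" > "$f.tmp" && mv "$f.tmp" "$f"
jq -r '."ollama-local:default".key' "$f"    # prints: ollama
```

Using `jq '. + {...}'` preserves any profiles already in the file instead of overwriting them.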
After saving, OpenClaw will auto-detect the config change and restart. Check logs:
```shell
tail -10 ~/.openclaw/logs/gateway.log
# Should show: config change detected; evaluating reload
# Then: ready (X plugins...)

# If errors occur:
grep "ollama-local" ~/.openclaw/logs/gateway.err.log | tail -10
```
"api": "ollama" instead of "api": "openai-completions"contextWindow in the config is set below 16000openai-completions API with /v1 endpoint~/.openclaw/agents/main/agent/auth-profiles.jsonollama-local:default entry as shown in Step 4tail -20 ~/.openclaw/logs/gateway.err.log
tail -20 ~/.openclaw/logs/gateway.log
models.providers.ollama-local.models[] AND agents.defaults.modelsollama-local/<model-id> (e.g., ollama-local/gemma4:e4b-128k)Large context windows consume significant RAM/VRAM:
If you hit OOM, reduce `num_ctx` in the Ollama Modelfile and `contextWindow` in `openclaw.json` accordingly. Minimum usable is 32768.
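To get a feel for why, here is a back-of-envelope KV-cache estimate: the cache holds 2 tensors (K and V) x layers x KV heads x head dimension x bytes per element, per token of context. The dimensions below are illustrative placeholders, not Gemma's actual shape; substitute your model's real values:

```shell
# Rough KV-cache size at full 131072-token context, fp16 (2 bytes/element).
# layers/kv_heads/head_dim are hypothetical; check your model card.
layers=32; kv_heads=8; head_dim=128; ctx=131072; bytes=2
echo "$(( 2 * layers * kv_heads * head_dim * ctx * bytes / 1024 / 1024 / 1024 )) GiB"
# -> 16 GiB of KV cache, on top of the model weights themselves
```

Halving `num_ctx` halves this figure, which is why trimming the context window is the first lever when memory runs out.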