Configure Lakebase for agent memory storage. Use when: (1) Adding memory capabilities to the agent, (2) 'Failed to connect to Lakebase' errors, (3) Permission errors on checkpoint/store tables, (4) User says 'lakebase', 'memory setup', or 'add memory'.
**Profile reminder:** All Databricks CLI commands must include the profile from `.env`: `databricks <command> --profile <profile>` or `DATABRICKS_CONFIG_PROFILE=<profile> databricks <command>`.
**Autoscaling Lakebase?** If the user mentions "autoscaling", "project", or "branch" in the context of Lakebase, they are using an autoscaling Lakebase instance (not provisioned). This skill covers provisioned instances only. For autoscaling, see `.claude/skills/add-tools/examples/lakebase-autoscaling.md` instead; it uses the `LAKEBASE_AUTOSCALING_PROJECT` and `LAKEBASE_AUTOSCALING_BRANCH` env vars, deploys the app first, then adds the postgres resource via API for permissions and grants table access.
Lakebase provides persistent PostgreSQL storage for agents:

- Checkpointing (`AsyncCheckpointSaver`)
- Long-term store (`AsyncDatabricksStore`)
- Checkpoint/store tables (`agent_server` schema)

Note: For pre-configured memory templates, see:

- `agent-langgraph-short-term-memory` - Conversation history within a session
- `agent-langgraph-long-term-memory` - User facts that persist across sessions
- `agent-openai-agents-sdk-long-running-agent` - Background tasks with Lakebase persistence
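Under the hood, the checkpointer and store talk to Lakebase as an ordinary Postgres database over SSL. As orientation, a minimal sketch of assembling such a connection DSN from env vars — the variable values and helper below are illustrative assumptions, not the library's actual configuration API:

```python
import os

# Hypothetical values for illustration only; the real host and user
# come from your Lakebase instance and app identity (later steps).
os.environ.setdefault("PGHOST", "instance-abc123.database.cloud.databricks.com")
os.environ.setdefault("PGDATABASE", "databricks_postgres")
os.environ.setdefault("PGUSER", "my-app-client-id")

def lakebase_dsn() -> str:
    """Build a Postgres DSN; Lakebase connections require SSL."""
    return (
        f"postgresql://{os.environ['PGUSER']}@{os.environ['PGHOST']}:5432/"
        f"{os.environ['PGDATABASE']}?sslmode=require"
    )

print(lakebase_dsn())
```

In practice you do not build this DSN by hand — the memory classes handle the connection — but it shows what the checkpoint/store tables are backed by.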
┌─────────────────────────────────────────────────────────────────────────────┐
│ 1. Add dependency → 2. Get instance → 3. Configure DAB │
│ 4. Configure .env → 5. Initialize tables → 6. Deploy + Run │
└─────────────────────────────────────────────────────────────────────────────┘
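As orientation for step 4 above, a hypothetical `.env` sketch — the values and the `LAKEBASE_INSTANCE_NAME` variable name are placeholders for illustration, not prescribed names:

```
# Profile used by all databricks CLI commands (see reminder above)
DATABRICKS_CONFIG_PROFILE=my-profile
# Hypothetical variable name; use whatever your app reads for the instance
LAKEBASE_INSTANCE_NAME=my-lakebase-instance
```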
Add the memory extra to your `pyproject.toml`:

```toml
dependencies = [
    "databricks-langchain[memory]",
    # ... other dependencies
]
```
Then sync dependencies:

```shell
uv sync
```
If you have an existing instance, note its name for the next step.
Add the Lakebase database resource to your app in databricks.yml: