Deploy and manage Databricks workspaces using the FE Vending Machine with automatic authentication and environment caching. Also answers questions about which workspaces were deployed or created earlier.
This skill provides programmatic access to the FE Vending Machine (FEVM) for deploying and managing Databricks workspaces. It automatically handles authentication and caches environment information to minimize user interaction.
The FE Vending Machine is used to create net-new demo environments that can live for up to 30 days. This skill caches environment data in `~/.vibe/fe-vm/` to avoid repeated browser interactions.

Before deploying anything new, always check for existing environments:
```bash
python3 resources/environment_manager.py list
```
This shows all cached workspaces with their URLs, creation dates, and expiration dates.
If no cached environments exist or you need fresh data:
```bash
python3 resources/fe_vm_client.py refresh-cache
```
This will re-authenticate if needed, save the session to `~/.vibe/fe-vm/session.json`, and fetch the latest environment list from FEVM into the local cache.

To find an existing suitable workspace:
```bash
python3 resources/environment_manager.py find --type serverless --min-days 7
```
This returns a workspace URL if one exists with at least 7 days remaining.
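Conceptually, `find` filters the cached environments by type and remaining lifetime. A minimal sketch of that logic (the `type`, `url`, and `expires_at` field names are assumptions for illustration, not the actual cache schema):

```python
from datetime import datetime, timedelta

def find_workspace(environments, ws_type, min_days):
    """Return the URL of the first cached workspace of the given type
    with at least min_days of lifetime remaining, or None."""
    now = datetime.now()
    for env in environments:
        expires = datetime.fromisoformat(env["expires_at"])
        if env["type"] == ws_type and expires - now >= timedelta(days=min_days):
            return env["url"]
    return None

# Hypothetical cached entries
cache = [
    {"type": "classic", "url": "https://fevm-a.cloud.databricks.com",
     "expires_at": (datetime.now() + timedelta(days=3)).isoformat()},
    {"type": "serverless", "url": "https://fevm-b.cloud.databricks.com",
     "expires_at": (datetime.now() + timedelta(days=12)).isoformat()},
]
print(find_workspace(cache, "serverless", 7))
```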
To deploy a new Serverless workspace:
```bash
python3 resources/fe_vm_client.py deploy-serverless --name my-demo --region us-east-1 --lifetime 30
```
To deploy a new Classic workspace:
```bash
python3 resources/fe_vm_client.py deploy-classic --name my-classic --region us-west-2 --lifetime 14
```
For new deployments, monitor the progress:
```bash
python3 resources/fe_vm_client.py status --run-id <run_id>
```
Or wait for completion:
```bash
python3 resources/fe_vm_client.py wait --run-id <run_id>
```
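Under the hood, `wait` amounts to a poll loop around `status` with a timeout. A hedged sketch of that pattern (the `get_status` callable and the terminal state names are illustrative, not FEVM's actual API):

```python
import time

def wait_for_deployment(get_status, timeout_minutes=30, poll_seconds=2):
    """Poll get_status() until it reports a terminal state,
    raising TimeoutError if the deadline passes first."""
    deadline = time.time() + timeout_minutes * 60
    while time.time() < deadline:
        state = get_status()
        if state in ("SUCCESS", "FAILED"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("deployment did not finish in time")
```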
Once deployed, use the databricks-authentication skill to set up CLI access:
```bash
databricks auth login <workspace_url> --profile=fe-vm-<workspace_name>
```
Use this flow to decide what to do:

1. Check the cache: `python3 resources/environment_manager.py list`
2. If nothing suitable is cached, refresh: `python3 resources/fe_vm_client.py refresh-cache`
3. If a suitable workspace exists, reuse it; otherwise deploy a new one.

`environment_manager.py` commands:

| Command | Description |
|---|---|
| `list` | List all cached environments with details |
| `find --type <serverless\|classic> --min-days N` | Find a suitable workspace |
| `get <workspace_name>` | Get details for a specific workspace |
| `remove <workspace_name>` | Remove a workspace from the cache |
| `clear` | Clear all cached data |
`fe_vm_client.py` commands:

| Command | Description |
|---|---|
| `refresh-cache` | Fetch the latest data from FEVM and update the cache |
| `deploy-serverless [--name N] [--region R] [--lifetime D]` | Deploy a serverless workspace |
| `deploy-classic [--name N] [--region R] [--lifetime D]` | Deploy a classic workspace |
| `status --run-id ID` | Check deployment status |
| `wait --run-id ID [--timeout M]` | Wait for a deployment to complete |
| `user-info` | Get current user info |
| `quota` | Get quota information |
All environment data is stored in `~/.vibe/fe-vm/`:

```
~/.vibe/fe-vm/
├── session.json          # Session cookie and expiry
├── environments.json     # Cached workspace information
└── deployments/          # Individual deployment configs
    └── <workspace_name>.json
```
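A sketch of how a client might decide whether the cached session is still usable (the `expires_at` field name and epoch-timestamp format are assumptions about `session.json`'s layout, not its documented schema):

```python
import json
import time
from pathlib import Path

def session_is_valid(path=Path.home() / ".vibe/fe-vm/session.json"):
    """Return True if a cached session file exists, parses as JSON,
    and its expiry timestamp is still in the future."""
    try:
        session = json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False
    return session.get("expires_at", 0) > time.time()
```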
When authentication is needed:

1. The client checks `~/.vibe/fe-vm/session.json` for a valid session.
2. If the session is missing or expired, a browser window opens to capture the `__Host-databricksapps` cookie.
3. The cookie is saved to `~/.vibe/fe-vm/session.json` with an expiry timestamp.

| Template | ID | Cloud | Use Case |
|---|---|---|---|
| Serverless Deployment | 3 | AWS | Apps, Lakebase, fast demos |
| Classic Deployment | 4 | AWS | Custom networking, cloud integrations |
| Azure Sandbox | 5 | Azure | Azure-specific demos |
| Azure Stable Classic | 8 | Azure | Azure classic workspaces |
Available regions: us-east-1, us-east-2, us-west-1, us-west-2, eu-central-1, ap-northeast-1
FE-VM workspaces follow a standard naming convention:
| Component | Pattern | Example |
|---|---|---|
| Workspace name | `<name>` | `deep-research-agent` |
| Workspace URL | `https://fevm-<name>.cloud.databricks.com` | `https://fevm-deep-research-agent.cloud.databricks.com` |
| Standard catalog | `<name_with_underscores>_catalog` | `deep_research_agent_catalog` |
**Important:** Each FE-VM workspace comes with a pre-created Unity Catalog. The catalog name is derived from the workspace name by replacing hyphens with underscores and appending `_catalog`.
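The derivation is simple enough to express directly; a one-line sketch of the naming rule described above:

```python
def standard_catalog(workspace_name: str) -> str:
    """Derive the pre-created Unity Catalog name from an FE-VM workspace name."""
    return workspace_name.replace("-", "_") + "_catalog"

print(standard_catalog("deep-research-agent"))  # → deep_research_agent_catalog
```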
When creating schemas or tables, always use the standard catalog rather than creating a new one:
```sql
-- Use the existing catalog (example for workspace "deep-research-agent")
USE CATALOG deep_research_agent_catalog;

-- Create schemas within the standard catalog
CREATE SCHEMA IF NOT EXISTS my_demo_schema;

-- Create tables in the standard catalog
CREATE TABLE deep_research_agent_catalog.my_demo_schema.my_table (...);
```
This avoids permission errors and ensures consistency across the workspace.