Optimize Databricks cluster and query performance.
Use when jobs are running slowly, optimizing Spark configurations,
or improving Delta Lake query performance.
Trigger with phrases like "databricks performance", "spark tuning",
"databricks slow", "optimize databricks", "cluster performance".
jeremylongshore · 1,965 stars · Apr 3, 2026
Categories: Database Tools
Overview
Optimize Databricks cluster sizing, Spark configuration, and Delta Lake query performance. Covers workload-specific Spark configs, Adaptive Query Execution (AQE), Liquid Clustering, Z-ordering, OPTIMIZE/VACUUM maintenance, query plan analysis, and caching strategies.
Prerequisites
Access to cluster configuration (admin or cluster owner)
Understanding of workload type (ETL batch, ML training, streaming, interactive)
-- Compact small files and co-locate data by frequently filtered columns
OPTIMIZE prod_catalog.silver.orders ZORDER BY (order_date, customer_id);
-- Check file stats before and after
DESCRIBE DETAIL prod_catalog.silver.orders;
-- Look at: numFiles (should decrease), sizeInBytes
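The numFiles and sizeInBytes columns from DESCRIBE DETAIL can be turned into a quick small-file check. A minimal sketch, assuming a hypothetical helper and the common (not mandated) 128 MB target file size for Delta tables:

```python
# Hypothetical helper: judge small-file pressure from DESCRIBE DETAIL output.
# The 128 MB target is a common Delta Lake guideline, not a fixed requirement.
TARGET_FILE_BYTES = 128 * 1024 * 1024

def avg_file_mb(num_files: int, size_in_bytes: int) -> float:
    """Average file size in MB for a Delta table."""
    return size_in_bytes / num_files / (1024 * 1024)

def needs_optimize(num_files: int, size_in_bytes: int) -> bool:
    """Flag tables whose average file is well below the target (< 32 MB)."""
    return size_in_bytes / num_files < TARGET_FILE_BYTES / 4

# Example: 40,000 files totaling 100 GB -> ~2.6 MB average, clearly fragmented
print(round(avg_file_mb(40_000, 100 * 1024**3), 1))  # 2.6
```

If OPTIMIZE is working, this average should rise sharply while numFiles drops.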
-- Enable Liquid Clustering — Databricks auto-optimizes data layout
ALTER TABLE prod_catalog.silver.orders CLUSTER BY (order_date, region);
-- Trigger incremental clustering
OPTIMIZE prod_catalog.silver.orders;
-- Advantages over Z-order:
-- * Incremental (only re-clusters new data)
-- * No need to choose between partitioning and Z-ordering
-- * Works with Deletion Vectors for faster DELETE/UPDATE
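The incremental advantage is easiest to see as back-of-envelope arithmetic: a full Z-order OPTIMIZE rewrites the whole table, while incremental clustering touches only newly written data. The numbers below are illustrative assumptions, not Databricks measurements:

```python
# Back-of-envelope daily rewrite cost for table maintenance.
def daily_rewrite_gb(table_gb: int, ingest_gb: int, incremental: bool) -> int:
    """Full Z-order rewrites the table; incremental clustering only new data."""
    return ingest_gb if incremental else table_gb

# 2 TB table ingesting 20 GB/day:
print(daily_rewrite_gb(2048, 20, incremental=False))  # 2048
print(daily_rewrite_gb(2048, 20, incremental=True))   # 20
```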
Predictive Optimization
-- Enable deletion vectors for faster DELETE/UPDATE/MERGE
ALTER TABLE prod_catalog.silver.orders
SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'true');
-- Let Databricks auto-schedule OPTIMIZE and VACUUM (table level)
ALTER TABLE prod_catalog.silver.orders ENABLE PREDICTIVE OPTIMIZATION;
-- Or enable at schema level for all tables
ALTER SCHEMA prod_catalog.silver ENABLE PREDICTIVE OPTIMIZATION;
-- Find slow queries (SQL warehouse query history)
SELECT statement_id, executed_by,
total_duration_ms / 1000 AS duration_sec,
rows_produced, bytes_scanned / 1024 / 1024 AS scanned_mb,
statement_text
FROM system.query.history
WHERE total_duration_ms > 30000 -- > 30 seconds
AND start_time > current_timestamp() - INTERVAL 24 HOURS
ORDER BY total_duration_ms DESC
LIMIT 20;
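Rows returned by the query-history SQL above can be triaged before digging into individual plans. A sketch with arbitrary starting-point thresholds (not Databricks defaults):

```python
# Bucket slow queries by duration so the worst offenders get attention first.
def triage(duration_sec: float) -> str:
    if duration_sec >= 300:
        return "critical"   # > 5 min: candidate for rewrite or re-clustering
    if duration_sec >= 60:
        return "slow"       # 1-5 min: check joins, file layout, caching
    return "watch"          # 30-60 s: monitor

queries = [("q1", 420.0), ("q2", 75.0), ("q3", 31.0)]
print({qid: triage(s) for qid, s in queries})
# {'q1': 'critical', 'q2': 'slow', 'q3': 'watch'}
```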
# Analyze a query plan for bottlenecks
df = spark.table("prod_catalog.silver.orders").filter("region = 'US'")
df.explain(mode="formatted")
# Look for: BroadcastHashJoin (good), SortMergeJoin (may be slow on skewed data)
# Look for: ColumnarToRow conversion (indicates non-Photon path)
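The two "look for" checks can be automated by scanning the captured plan text. A sketch: the marker strings are real Spark physical-operator names, but the scanning helper itself is hypothetical:

```python
# Scan explain output (captured as a string) for common red-flag operators.
RED_FLAGS = {
    "SortMergeJoin": "shuffle-heavy join; consider broadcast or AQE skew handling",
    "ColumnarToRow": "non-Photon code path; check runtime or unsupported expressions",
    "CartesianProduct": "likely missing join condition",
}

def flag_plan(plan_text: str) -> dict:
    return {op: hint for op, hint in RED_FLAGS.items() if op in plan_text}

sample = "* SortMergeJoin Inner (12)\n* ColumnarToRow (3)"
print(sorted(flag_plan(sample)))  # ['ColumnarToRow', 'SortMergeJoin']
```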
Step 5: Join Optimization
from pyspark.sql.functions import broadcast
# Rule of thumb: broadcast tables < 100MB
# BAD: sort-merge join on a small lookup table
result = orders.join(products, "product_id")  # may shuffle both sides if Spark can't estimate products' size
# GOOD: explicitly broadcast the small table
result = orders.join(broadcast(products), "product_id")  # ships products to every executor; no shuffle of orders
# For skewed keys: use AQE skew join handling
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256m")
Step 6: Caching Strategy
# Cache a frequently-accessed table (lazy — materialized on first action)
df = spark.table("prod_catalog.gold.daily_metrics").cache()
df.count()  # force materialization
# Or use Delta Cache (automatic for i3/r5 instances with local SSD)
# Enable in cluster config:
# spark.databricks.io.cache.enabled = true
# spark.databricks.io.cache.maxDiskUsage = 50g
# NEVER cache Bronze tables — they're too large and change frequently
# ALWAYS cache small lookup/dimension tables used in multiple queries
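The two caching rules above can be stated as a small policy function. A sketch: the layer names follow the medallion convention used in this doc, and the 1 GB cutoff is an assumption, not a Databricks limit:

```python
# Caching policy: never cache bronze; cache small, reused tables elsewhere.
MAX_CACHE_BYTES = 1 * 1024**3  # assumed 1 GB cutoff

def should_cache(layer: str, size_in_bytes: int, reused: bool) -> bool:
    if layer == "bronze":
        return False  # too large, changes too frequently
    return reused and size_in_bytes <= MAX_CACHE_BYTES

print(should_cache("bronze", 10 * 1024**2, reused=True))  # False
print(should_cache("gold", 200 * 1024**2, reused=True))   # True
```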
Step 7: VACUUM and Table Maintenance Schedule
-- Clean up old file versions (default retention: 7 days)
VACUUM prod_catalog.silver.orders RETAIN 168 HOURS;
-- Schedule via Databricks job or DLT maintenance task
-- Recommended: weekly OPTIMIZE, daily VACUUM for active tables
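When generating RETAIN values for a maintenance job, it's worth enforcing the 168-hour (7-day) default minimum so time travel and concurrent readers stay safe. A minimal sketch with a hypothetical helper:

```python
# Compute a VACUUM RETAIN value in hours, refusing anything below the
# 168-hour default minimum retention.
MIN_RETENTION_HOURS = 168

def retain_hours(days: int) -> int:
    hours = days * 24
    if hours < MIN_RETENTION_HOURS:
        raise ValueError(f"retention {hours}h is below the {MIN_RETENTION_HOURS}h default minimum")
    return hours

print(retain_hours(7))  # 168
```

Shorter retention requires explicitly lowering delta.deletedFileRetentionDuration and accepting the loss of time-travel history.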
Output
Cluster sized appropriately for workload type
Spark configs tuned per workload (ETL, ML, streaming, interactive)
Delta tables optimized with Z-ordering or Liquid Clustering
Slow queries identified via query history analysis
Join and caching strategies applied
Error Handling
| Issue | Cause | Solution |
| --- | --- | --- |
| OOM during shuffle | Skewed partition | Enable AQE skew join or salt the join key |
| Slow joins | Large shuffle | broadcast() tables < 100MB |
| Too many small files | Frequent small writes | Run OPTIMIZE or enable autoCompact |
| VACUUM below retention | Retention < 7 days | Default minimum is 168 HOURS; adjust delta.deletedFileRetentionDuration |
| Query plan shows ColumnarToRow | Non-Photon code path | Use a Photon-enabled runtime (suffix -photon-scala2.12) |