SQL and NoSQL query optimization techniques, indexing strategies, execution plan analysis, JOIN algorithms, cardinality estimation, and database-specific query patterns
Modern relational DBs use cost-based optimizers (CBO): generate candidate plans -> estimate each plan's cost from table statistics (row counts, value distributions, predicate selectivity) -> pick the plan with the lowest combined I/O/CPU/memory cost. Stale statistics lead to suboptimal plans.
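As a rough sketch of how a cost-based choice flips with selectivity, here is a toy cost model in the spirit of PostgreSQL's page-cost parameters (the constants and formulas are simplified illustrations, not the planner's real ones):

```python
# Toy cost model: compare a sequential scan against an index scan.
# Constants mimic PostgreSQL's defaults in spirit only.
SEQ_PAGE_COST = 1.0      # cost to read one page sequentially
RANDOM_PAGE_COST = 4.0   # cost to read one page at random (index probes)
CPU_TUPLE_COST = 0.01    # cost to process one row

def seq_scan_cost(pages, rows):
    return pages * SEQ_PAGE_COST + rows * CPU_TUPLE_COST

def index_scan_cost(rows, selectivity):
    matching = rows * selectivity
    # each matching row may require a random heap-page fetch
    return matching * RANDOM_PAGE_COST + matching * CPU_TUPLE_COST

rows, pages = 1_000_000, 10_000
for sel in (0.0001, 0.5):
    seq, idx = seq_scan_cost(pages, rows), index_scan_cost(rows, sel)
    winner = "index scan" if idx < seq else "seq scan"
    print(f"selectivity={sel}: seq={seq:.0f} index={idx:.0f} -> {winner}")
```

At 0.01% selectivity the index scan wins; at 50% the sequential scan does, which is why a stale selectivity estimate can flip the planner into the wrong plan.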
Validate optimization with EXPLAIN before and after changes.
-- PostgreSQL (add ANALYZE for actual runtime stats)
EXPLAIN ANALYZE SELECT order_id, total FROM orders WHERE customer_id = 12345;
-- MySQL: EXPLAIN FORMAT=JSON ... | SQL Server: SET STATISTICS IO ON
Key indicators: Seq Scan/Table Scan = missing index | Index Scan/Seek = efficient | Hash Join = large equality joins | Nested Loop = small/indexed inner | Merge Join = pre-sorted inputs | Sort = watch disk spills
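The scan-vs-seek indicators can be observed end-to-end with SQLite's EXPLAIN QUERY PLAN, a convenient stand-in for the vendor commands above (table and index names are illustrative):

```python
import sqlite3

# SQLite reports "SCAN" for a full table scan and "SEARCH ... USING INDEX"
# for an index lookup -- the same before/after signal as EXPLAIN elsewhere.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, total REAL)")
q = "SELECT order_id, total FROM orders WHERE customer_id = 12345"

before = con.execute("EXPLAIN QUERY PLAN " + q).fetchall()
print(before[0][3])  # "SCAN orders" -> the missing-index indicator

con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = con.execute("EXPLAIN QUERY PLAN " + q).fetchall()
print(after[0][3])   # "SEARCH orders USING INDEX idx_orders_customer ..."
```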
B-tree: equality, range, sorting, prefix matching | O(log n) lookup | General-purpose, the default in all major DBs
Hash: equality only | O(1) lookup | High-cardinality exact-match | No range/sorting/pattern support
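A minimal sketch of the trade-off, using a Python dict as a stand-in for a hash index and a sorted list with bisect as a stand-in for the ordered, B-tree-like case (the data is illustrative):

```python
import bisect

# dict ~ hash index (O(1) equality only); sorted list + bisect ~ B-tree
# leaf level (O(log n) seek, then an ordered scan that serves ranges).
keys = [5, 12, 12, 30, 47, 58]           # indexed column values, kept sorted
hash_index = {}                           # value -> row positions
for pos, k in enumerate(keys):
    hash_index.setdefault(k, []).append(pos)

# Equality lookup: both structures work.
print(hash_index.get(12))                 # O(1) hash probe
lo = bisect.bisect_left(keys, 12)
hi = bisect.bisect_right(keys, 12)
print(list(range(lo, hi)))                # O(log n) ordered seek

# Range predicate (k BETWEEN 10 AND 50): only the ordered structure
# supports it directly; a hash table would need a full scan.
lo = bisect.bisect_left(keys, 10)
hi = bisect.bisect_right(keys, 50)
print(keys[lo:hi])                        # [12, 12, 30, 47]
```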
Include all query columns in index -> eliminates table access (index-only scan) | Trade-off: larger index, slower writes
-- Covering index for: SELECT name, email FROM users WHERE status = 'active'
CREATE INDEX idx_users_status_covering ON users(status) INCLUDE (name, email);
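SQLite has no INCLUDE clause, but the same index-only effect can be sketched by putting every referenced column into the key; EXPLAIN QUERY PLAN then reports a covering index (schema is illustrative):

```python
import sqlite3

# All queried columns (status, name, email) live in the index, so the
# query never touches the table -- SQLite's index-only scan.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, status TEXT, name TEXT, email TEXT)")
con.execute("CREATE INDEX idx_users_status_cov ON users(status, name, email)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT name, email FROM users WHERE status = 'active'"
).fetchall()
print(plan[0][3])  # ... USING COVERING INDEX idx_users_status_cov ...
```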
Column order for compound indexes: 1. Equality conditions first (most selective, narrow the scan) | 2. Sort columns second (lets the index satisfy ORDER BY) | 3. Range conditions last (a range predicate ends index usability for any later columns)
Equality-Sort-Range ordering for compound indexes:
// Query: status = "A", qty > 20, sorted by item
// Optimal index:
db.collection.createIndex({ status: 1, item: 1, qty: 1 })
// E(quality) S(ort) R(ange)
-- Bad: SELECT * retrieves unnecessary data, prevents covering indexes
SELECT * FROM orders WHERE customer_id = 12345;
-- Good: Specify columns, enables covering index
SELECT order_id, order_date, total FROM orders WHERE customer_id = 12345;
Window functions: SUM() OVER, RANK() OVER (PARTITION BY ...) for analytics without self-joins
Keyset pagination: WHERE id > last_seen ORDER BY id LIMIT N over OFFSET for deep pages
Parameterized queries: cursor.execute("... WHERE id = %s", (id,))
| Algorithm | Best When | Cost |
|---|---|---|
| Nested Loop | Small outer table, indexed inner table | O(n * m) worst, O(n * log m) with index |
| Hash Join | Large tables, equality joins, no useful indexes | O(n + m) build + probe |
| Merge Join | Both inputs already sorted (index order) | O(n + m) after sort |
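The first two algorithms can be sketched over in-memory row lists (table contents and column layout are illustrative):

```python
# Join on an equality key: orders.customer = customers.customer
orders = [(1, "alice"), (2, "bob"), (3, "alice")]      # (order_id, customer)
customers = [("alice", "US"), ("bob", "DE")]           # (customer, country)

def nested_loop_join(outer, inner):
    # O(n * m): rescan inner for every outer row
    # (fast in practice only when inner is small or indexed)
    return [(o_id, c, country)
            for o_id, c in outer
            for name, country in inner if name == c]

def hash_join(build, probe):
    # O(n + m): build a hash table on the smaller input, then probe it
    table = {}
    for name, country in build:
        table.setdefault(name, []).append(country)
    return [(o_id, c, country)
            for o_id, c in probe
            for country in table.get(c, [])]

print(nested_loop_join(orders, customers))
print(hash_join(customers, orders))
# Both produce: [(1, 'alice', 'US'), (2, 'bob', 'DE'), (3, 'alice', 'US')]
```

A merge join would instead sort (or reuse index order on) both inputs and advance two cursors in lockstep, which is why it wins when the inputs arrive pre-sorted.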
Optimizer predicts row counts using: Histograms (value distribution) | Density vectors (non-histogram columns) | Statistics objects via ANALYZE (PostgreSQL) / UPDATE STATISTICS (SQL Server)
When estimation is wrong (correlated columns, skewed data, multi-table joins): 1. Run ANALYZE/UPDATE STATISTICS | 2. Create multi-column statistics | 3. Query hints as last resort
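A small sketch of why refreshed statistics matter on skewed data: a uniform-distribution assumption versus a most-common-values count of the kind ANALYZE collects (the data is synthetic):

```python
from collections import Counter

values = [1] * 900 + list(range(2, 102))   # 1000 rows, heavily skewed toward 1

# Uniform assumption: selectivity of (value = 1) is 1 / num_distinct
num_distinct = len(set(values))             # 101 distinct values
uniform_est = len(values) / num_distinct    # ~9.9 rows expected

# MCV/histogram-style statistics capture the actual frequency
mcv = Counter(values).most_common(1)[0]     # (1, 900)
actual = values.count(1)                    # 900 rows

print(f"uniform estimate: {uniform_est:.1f} rows, actual: {actual}")
```

A ~90x underestimate like this is exactly what pushes an optimizer into a bad nested-loop plan; refreshed statistics make the skew visible.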
Place $match/$project early in pipelines | Use $lookup sparingly (left outer joins) | Compound indexes following ESR | Validate with explain("executionStats")
Always include partition key | Design tables around query patterns (query-first) | Use SAI over SASI (43% throughput gain) | Avoid ALLOW FILTERING (full cluster scan) | Materialized views add write overhead
Use Query not Scan | Design partition keys for even distribution | GSIs for alternative access patterns | Single-table design with composite sort keys
FT.SEARCH for complex queries (RediSearch module) | Design key naming for efficient SCAN | Use pipelining for batch ops
WHERE UPPER(name) = 'JOHN' wraps the indexed column in a function, so a plain index on name can't be used (non-sargable); create a function-based (expression) index on UPPER(name) instead
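SQLite's expression indexes behave like function-based indexes and make the fix easy to verify (schema is illustrative):

```python
import sqlite3

# Before the expression index, the function-wrapped predicate forces a full
# scan; once the index expression matches UPPER(name), the plan flips to a
# search.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
q = "EXPLAIN QUERY PLAN SELECT id FROM users WHERE UPPER(name) = 'JOHN'"

before = con.execute(q).fetchall()[0][3]
print(before)  # SCAN users -- a plain index on name would not help here

con.execute("CREATE INDEX idx_users_name_upper ON users(UPPER(name))")
after = con.execute(q).fetchall()[0][3]
print(after)   # SEARCH users USING INDEX idx_users_name_upper ...
```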