Expert knowledge for deploying and operating Composable Rust applications in production. Use when setting up database migrations, configuring connection pools, implementing backup/restore procedures, tuning performance, setting up monitoring and observability, or handling operational concerns like disaster recovery and production database management.
Automatically apply when:
Option 1: Helper Function (Deployment Scripts)

```rust
use composable_rust_postgres::run_migrations;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let database_url = std::env::var("DATABASE_URL")?;

    // Run migrations during startup
    run_migrations(&database_url).await?;

    println!("Database ready!");
    Ok(())
}
```
Option 2: EventStore Method (When Store Exists)

```rust
use composable_rust_postgres::PostgresEventStore;

let store = PostgresEventStore::new(&database_url).await?;
store.run_migrations().await?;
```
Option 3: sqlx CLI (Development)

```bash
# Install CLI
cargo install sqlx-cli --no-default-features --features postgres

# Run migrations
sqlx migrate run --database-url postgres://localhost/mydb

# Revert last migration
sqlx migrate revert --database-url postgres://localhost/mydb
```
Step 1: Create a SQL file with the next sequential number

```text
migrations/003_add_user_context.sql
```
Step 2: Write idempotent SQL

```sql
-- Add user_context column to events table
ALTER TABLE events
    ADD COLUMN IF NOT EXISTS user_context JSONB;

-- Add a GIN index for querying into the JSONB payload
CREATE INDEX IF NOT EXISTS idx_events_user_context
    ON events USING GIN (user_context);
```
Critical Rules:

Use `IF NOT EXISTS` for idempotency, and avoid destructive statements:

```sql
-- ✅ GOOD: Idempotent
CREATE TABLE IF NOT EXISTS orders (
    id UUID PRIMARY KEY,
    customer_id UUID NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- ✅ GOOD: Safe column addition
ALTER TABLE orders
    ADD COLUMN IF NOT EXISTS status TEXT DEFAULT 'pending';

-- ❌ BAD: Not idempotent
CREATE TABLE orders (...); -- Fails on second run

-- ❌ BAD: Destructive
DROP TABLE old_orders; -- Can't be undone
```
```rust
use sqlx::postgres::PgPoolOptions;
use std::time::Duration;

let pool = PgPoolOptions::new()
    // Connection limits
    .max_connections(20)                        // Max concurrent connections
    .min_connections(5)                         // Keep warm
    // Timeouts
    .acquire_timeout(Duration::from_secs(10))   // Max wait for a connection
    .idle_timeout(Duration::from_secs(600))     // Close after 10 min idle
    .max_lifetime(Duration::from_secs(1800))    // Recycle after 30 min
    // Health
    .test_before_acquire(true)                  // Validate before handing out
    .connect(&database_url)
    .await?;

let store = PostgresEventStore::from_pool(pool);
```
Formula (Little's law): max_connections ≈ req/sec × avg_query_time + buffer

Example: 1000 req/sec × 0.020 s average query time + 5 buffer = 25 connections

Recommendations by Load:
| Environment | Max Connections | Use Case |
|---|---|---|
| Development | 5 | Minimal overhead |
| Staging | 10-20 | Simulate production |
| Low traffic | 20-50 | < 100 req/sec |
| Medium traffic | 50-100 | 100-1000 req/sec |
| High traffic | 100-200 | > 1000 req/sec |
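One common sizing rule (Little's law: concurrent connections ≈ arrival rate × average query time, plus a safety buffer) is simple enough to capture in code. A minimal sketch — `estimate_pool_size` is a hypothetical helper, not part of sqlx or composable_rust_postgres:

```rust
/// Hypothetical helper: estimate pool size from load via Little's law + buffer.
fn estimate_pool_size(req_per_sec: f64, avg_query_secs: f64, buffer: u32) -> u32 {
    (req_per_sec * avg_query_secs).ceil() as u32 + buffer
}

fn main() {
    // 1000 req/sec with 20 ms average queries, plus a buffer of 5 → 25
    println!("{}", estimate_pool_size(1000.0, 0.020, 5));
}
```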
PostgreSQL Limits:

```ini
# postgresql.conf
max_connections = 200    # Reserve some for admin/monitoring
```
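To see how close the server is to that ceiling at runtime, the standard `pg_stat_activity` catalog view can be compared against the configured setting (a read-only query, safe to run in production):

```sql
-- Open connections vs. configured ceiling
SELECT count(*)                           AS open_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
```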
```rust
// Check pool health
let size = pool.size();        // Connections currently open
let idle = pool.num_idle();    // Connections idle and available
```
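These counters can drive a simple utilization alert. A minimal sketch, assuming `pool_utilization` is a hypothetical helper and 0.8 is an arbitrary threshold:

```rust
/// Hypothetical helper: fraction of the configured maximum currently in use.
fn pool_utilization(size: u32, num_idle: usize, max_connections: u32) -> f64 {
    let in_use = size.saturating_sub(num_idle as u32);
    in_use as f64 / max_connections as f64
}

fn main() {
    // e.g. pool.size() == 18, pool.num_idle() == 3, max_connections == 20
    let util = pool_utilization(18, 3, 20);
    if util > 0.8 {
        eprintln!(
            "warning: pool {:.0}% utilized; consider raising max_connections",
            util * 100.0
        );
    }
}
```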