Database specialist covering PostgreSQL, MongoDB, Redis, Oracle, and advanced data patterns for modern applications. Use for database schema design, query optimization, indexing strategies, or data modeling.
Enterprise Database Expertise - Comprehensive database patterns and implementations covering PostgreSQL, MongoDB, Redis, Oracle, and advanced data management for scalable modern applications.
Core Capabilities:
When to Use:
Database Stack Initialization:
Create a DatabaseManager instance and configure multiple database connections. Set up PostgreSQL with connection string, pool size of 20, and query logging enabled. Configure MongoDB with connection string, database name, and sharding enabled. Configure Redis with connection string, max connections of 50, and clustering enabled. Use the unified interface to query user data with profile and analytics across all database types.
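The setup above can be sketched as follows. Only the `DatabaseManager` name and the configuration values (pool size 20, max connections 50, etc.) come from the text; the method names, connection strings, and the stubbed routing layer are illustrative assumptions, not the real implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DatabaseManager:
    # Per-backend configuration; a real manager would also own driver pools.
    configs: dict = field(default_factory=dict)

    def add_postgres(self, dsn, pool_size=20, log_queries=True):
        self.configs["postgres"] = {"dsn": dsn, "pool_size": pool_size,
                                    "log_queries": log_queries}

    def add_mongodb(self, uri, database, sharding=True):
        self.configs["mongodb"] = {"uri": uri, "database": database,
                                   "sharding": sharding}

    def add_redis(self, url, max_connections=50, cluster=True):
        self.configs["redis"] = {"url": url,
                                 "max_connections": max_connections,
                                 "cluster": cluster}

    def query_user(self, user_id):
        # Unified interface: each branch would hit the corresponding driver;
        # this stub only returns the routing plan per data category.
        return {
            "structured": ("postgres", user_id),
            "profile": ("mongodb", user_id),
            "analytics": ("redis", user_id),
        }

mgr = DatabaseManager()
mgr.add_postgres("postgresql://app@db:5432/app")
mgr.add_mongodb("mongodb://db:27017", database="app")
mgr.add_redis("redis://cache:6379")
print(mgr.query_user(42)["structured"])  # ('postgres', 42)
```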
Single Database Operations:
Run PostgreSQL schema migrations using the migration command with the database type and migration file path. Execute MongoDB aggregation pipelines by specifying the collection name and pipeline JSON file. Warm Redis cache by specifying key patterns and TTL values.
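A minimal stand-in for the migration step, using an in-memory SQLite database so it runs anywhere; a real runner would target PostgreSQL and read ordered migration files from disk. The statements and the `run_migrations` helper are illustrative assumptions.

```python
import sqlite3

# Ordered DDL statements standing in for migration files on disk.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)",
    "CREATE INDEX idx_users_email ON users (email)",
]

def run_migrations(conn, migrations):
    """Apply each migration in order; return the applied version numbers."""
    applied = []
    for version, sql in enumerate(migrations, start=1):
        conn.execute(sql)
        applied.append(version)
    conn.commit()
    return applied

conn = sqlite3.connect(":memory:")
applied = run_migrations(conn, MIGRATIONS)
print(applied)  # [1, 2]
```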
PostgreSQL Module:
MongoDB Module:
Redis Module:
Oracle Module:
Polyglot Persistence Pattern:
Create a DataRouter class that initializes connections to PostgreSQL, MongoDB, Redis, and Oracle. Implement get_user_profile method that retrieves structured user data from PostgreSQL or Oracle, flexible profile data from MongoDB, and real-time status from Redis, then merges all data sources. Implement update_user_data method that routes structured data updates to PostgreSQL/Oracle, profile data updates to MongoDB, and real-time data updates to Redis, followed by cache invalidation.
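A dict-backed sketch of the `DataRouter` described above. The class and method names come from the text; the storage stand-ins and field names are assumptions, and real code would use the actual drivers instead of dicts.

```python
class DataRouter:
    def __init__(self):
        self.sql = {}    # PostgreSQL/Oracle: structured rows
        self.mongo = {}  # MongoDB: flexible profile documents
        self.redis = {}  # Redis: real-time status and cached merges

    def get_user_profile(self, user_id):
        # Merge all three sources; later sources win on key collisions.
        merged = {}
        merged.update(self.sql.get(user_id, {}))
        merged.update(self.mongo.get(user_id, {}))
        merged.update(self.redis.get(("status", user_id), {}))
        return merged

    def update_user_data(self, user_id, structured=None, profile=None,
                         realtime=None):
        # Route each category of update to the appropriate store.
        if structured:
            self.sql.setdefault(user_id, {}).update(structured)
        if profile:
            self.mongo.setdefault(user_id, {}).update(profile)
        if realtime:
            self.redis[("status", user_id)] = realtime
        # Invalidate any cached merge for this user after writes.
        self.redis.pop(("cache", user_id), None)

router = DataRouter()
router.update_user_data(1, structured={"email": "a@b.c"},
                        profile={"theme": "dark"},
                        realtime={"online": True})
print(router.get_user_profile(1))
```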
Data Synchronization:
Create a DataSyncManager class that synchronizes user data across databases. Implement sync_user_data method that retrieves user from PostgreSQL, creates a search document for MongoDB, upserts to the MongoDB search collection, creates cache data, and updates Redis cache with TTL.
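The sync flow can be sketched like this, again with dict stand-ins for the three stores. `DataSyncManager` and `sync_user_data` are named in the text; the document fields, cache key format, and TTL value are assumptions.

```python
import time

class DataSyncManager:
    def __init__(self, postgres, mongo, redis):
        self.postgres = postgres  # {user_id: row} -- source of truth
        self.mongo = mongo        # search collection: {user_id: doc}
        self.redis = redis        # {key: (value, expires_at)}

    def sync_user_data(self, user_id, ttl=300):
        row = self.postgres[user_id]              # 1. read source of truth
        search_doc = {"_id": user_id,             # 2. build search document
                      "name": row["name"],
                      "email": row["email"]}
        self.mongo[user_id] = search_doc          # 3. upsert to search collection
        cache = {"name": row["name"]}             # 4. build cache payload
        self.redis[f"user:{user_id}"] = (cache, time.time() + ttl)  # 5. cache + TTL
        return search_doc

pg = {1: {"name": "Ada", "email": "ada@example.com"}}
sync = DataSyncManager(pg, {}, {})
print(sync.sync_user_data(1)["email"])  # ada@example.com
```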
Query Performance Analysis:
For PostgreSQL, execute EXPLAIN (ANALYZE, BUFFERS) on queries and use a QueryAnalyzer to generate optimization suggestions. For MongoDB, create an AggregationOptimizer to analyze and optimize aggregation pipelines. For Redis, retrieve INFO metrics and use a PerformanceAnalyzer to generate recommendations.
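A toy version of the QueryAnalyzer idea: scan an `EXPLAIN (ANALYZE, BUFFERS)` plan for common red flags. The sample plan text and the heuristics are illustrative assumptions; a real analyzer would parse the structured (e.g. JSON) plan output.

```python
# Abbreviated sample plan output, standing in for real EXPLAIN output.
PLAN = """
Seq Scan on orders  (cost=0.00..4321.00 rows=100000 width=64)
  Filter: (status = 'open')
Buffers: shared read=2048
"""

def suggest(plan_text):
    """Return human-readable suggestions for red flags found in the plan."""
    suggestions = []
    if "Seq Scan" in plan_text:
        suggestions.append("sequential scan: consider an index on the filter column")
    if "shared read" in plan_text:
        suggestions.append("cold buffers: data came from disk, review cache sizing")
    return suggestions

print(suggest(PLAN))
```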
Scaling Strategies:
Configure PostgreSQL read replicas by providing replica connection URLs. Set up MongoDB sharding with shard key and number of shards. Configure Redis clustering by providing node URLs for the cluster.
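The read-replica part of the scaling setup can be sketched as a small router: writes go to the primary, reads are round-robined across replicas. The `ReplicaRouter` name and the placeholder URLs are assumptions, not part of the described stack.

```python
import itertools

class ReplicaRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._cycle = itertools.cycle(replicas)  # round-robin read balancing

    def for_write(self):
        # All writes must go to the primary to avoid split-brain.
        return self.primary

    def for_read(self):
        return next(self._cycle)

router = ReplicaRouter("postgresql://primary/app",
                       ["postgresql://r1/app", "postgresql://r2/app"])
print([router.for_read() for _ in range(3)])
```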
Complementary Skills:
Technology Integration:
Relational Database:
NoSQL Database:
In-Memory Database:
Enterprise Database:
Supporting Tools:
Performance Features:
For working code examples, see examples.md.
For detailed implementation patterns and database-specific optimizations, see the modules directory.
Status: Production Ready
Last Updated: 2026-01-11
Maintained by: MoAI-ADK Database Team
| Rationalization | Reality |
|---|---|
| "I do not need an index, the table is small" | Tables grow. The missing index that is invisible at 1K rows becomes a production incident at 1M rows. |
| "I will add the migration later" | Schema changes without migrations are unreproducible. Every change must have a reversible migration script. |
| "This query works fine in development" | Development databases have tiny datasets. Production query plans differ dramatically at scale. Explain analyze first. |
| "NoSQL does not need schema design" | Schemaless does not mean designless. Document structure decisions affect every query and index. |
| "I will just add a column, it is non-breaking" | Adding a NOT NULL column without a default breaks existing inserts. Column additions need default values or migration backfills. |
| "Connection pooling is handled by the framework" | Framework defaults are generic. Pool size, timeout, and idle limits must be tuned to the workload. |