Build with Aurora DSQL - manage schemas, execute queries, and handle migrations with DSQL-specific requirements. Use when developing a scalable or distributed database application, or when the user requests DSQL.
Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
When: Always load for guidance on using or updating the DSQL MCP server
Contains: Instructions for setting up the DSQL MCP server with two configuration options, as shown in the sample .mcp.json
When: MUST load when making language-specific implementation choices
Contains: Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
When: MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data
Contains: Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
When: MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT operations
Contains: Table recreation patterns, batched migration for large tables, data validation
When: MUST load when migrating from MySQL to DSQL or translating MySQL DDL to DSQL-compatible equivalents
Contains: MySQL data type mappings, DDL operation translations, AUTO_INCREMENT/ENUM/SET/FOREIGN KEY migration patterns, ALTER TABLE ALTER COLUMN and DROP COLUMN via table recreation
MCP Tools Available
The aurora-dsql MCP server provides these tools:
Database Operations:
1. readonly_query - Execute SELECT queries (returns list of dicts)
2. transact - Execute DDL/DML statements in a transaction (takes a list of SQL statements)
3. get_schema - Get table structure for a specific table
Documentation & Knowledge:
4. dsql_search_documentation - Search Aurora DSQL documentation
5. dsql_read_documentation - Read specific documentation pages
6. dsql_recommend - Get DSQL best practice recommendations
Note: There is no list_tables tool. Use readonly_query with information_schema.
See mcp-setup.md for detailed setup instructions.
See mcp-tools.md for detailed usage and examples.
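Since there is no list_tables tool, a query like the following can be passed to readonly_query instead (a sketch assuming tables live in the default public schema):

```python
# SQL to pass to the readonly_query MCP tool; information_schema is available
# because DSQL is PostgreSQL-compatible.
LIST_TABLES_SQL = (
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = 'public' ORDER BY table_name"
)
```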
CLI Scripts Available
Bash scripts for cluster management and direct psql connections. All scripts are located in scripts/.
Cluster Management:
create-cluster.sh - Create new DSQL cluster with optional tags
delete-cluster.sh - Delete cluster with confirmation prompt
list-clusters.sh - List all clusters in a region
cluster-info.sh - Get detailed cluster information
Database Connection:
psql-connect.sh - Connect to DSQL using psql with automatic IAM auth token generation
Core Workflow
1. Explore schema
Use readonly_query with information_schema to list tables
Use get_schema to understand table structure
2. Query data
Use readonly_query for SELECT queries
Always include tenant_id in WHERE clause for multi-tenant apps
Validate inputs carefully (no parameterized queries available)
3. Execute schema changes
Use transact tool with list of SQL statements
Follow one-DDL-per-transaction rule
Always use CREATE INDEX ASYNC in separate transaction
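Because parameterized queries are not available through these tools, inputs must be sanitized before they are interpolated into SQL. A minimal Python sketch (the helper names are hypothetical) combining an identifier allowlist with quote-escaping for string literals:

```python
import re

# Allowlist for table/column names: lowercase identifiers only.
IDENTIFIER_RE = re.compile(r"^[a-z_][a-z0-9_]*$")

def safe_identifier(name: str) -> str:
    """Reject any identifier that does not match the allowlist."""
    if not IDENTIFIER_RE.match(name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return name

def quote_literal(value: str) -> str:
    """Escape single quotes by doubling them, per SQL string-literal rules."""
    return "'" + value.replace("'", "''") + "'"

def build_tenant_select(table: str, tenant_id: str) -> str:
    """Build a SELECT scoped to one tenant; inputs are validated, never raw."""
    return (
        f"SELECT * FROM {safe_identifier(table)} "
        f"WHERE tenant_id = {quote_literal(tenant_id)}"
    )
```

The resulting string is what gets passed to readonly_query; anything failing the allowlist is rejected outright rather than escaped.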
Common Workflows
Workflow 1: Create Multi-Tenant Schema
Goal: Create a new table with proper tenant isolation
Steps:
Create main table with tenant_id column using transact
Create async index on tenant_id in separate transact call
Create composite indexes for common query patterns (separate transact calls)
Verify schema with get_schema
Critical rules:
Include tenant_id in all tables
Use CREATE INDEX ASYNC (never synchronous)
Each DDL in its own transact call: transact(["CREATE TABLE ..."])
Store arrays/JSON as TEXT
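The workflow above might look like the following sketch, where each transact call carries exactly one DDL statement (table and index names are hypothetical):

```python
# Hypothetical schema: one DDL statement per transact() call.
ddl_calls = [
    ["CREATE TABLE orders ("
     " id UUID PRIMARY KEY DEFAULT gen_random_uuid(),"
     " tenant_id UUID NOT NULL,"
     " items TEXT,"  # arrays/JSON serialized as TEXT
     " created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
     ")"],
    ["CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)"],
    ["CREATE INDEX ASYNC idx_orders_tenant_created ON orders (tenant_id, created_at)"],
]

# for statements in ddl_calls:
#     transact(statements)  # one MCP transact call per DDL statement
# get_schema("orders")      # then verify the result
```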
Workflow 2: Safe Data Migration
Goal: Add a new column with defaults safely
Steps:
Add column using transact: transact(["ALTER TABLE ... ADD COLUMN ..."])
Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
Verify migration with readonly_query using COUNT
Create async index for new column using transact if needed
Critical rules:
Add column first, populate later
Never add DEFAULT in ALTER TABLE
Batch updates under 3,000 rows in separate transact calls
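The batching rule can be sketched as a generator that emits one UPDATE per transaction. It assumes a dense integer key `id` to range over (an assumption for illustration; keyset pagination over any indexed, ordered column works the same way):

```python
def batched_update_statements(table: str, column: str, value_sql: str,
                              total_rows: int, batch_size: int = 1000):
    """Yield one UPDATE per transaction, each touching at most batch_size rows."""
    if batch_size >= 3000:
        raise ValueError("keep each transaction under 3,000 rows")
    for start in range(0, total_rows, batch_size):
        yield (
            f"UPDATE {table} SET {column} = {value_sql} "
            f"WHERE id >= {start} AND id < {start + batch_size}"
        )
```

Each yielded statement would go in its own transact call, with a readonly_query COUNT check afterward to confirm every row was populated.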
Workflow: Table Recreation (DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE)
Goal: Recreate the table to perform ALTER operations DSQL does not support in place
Steps:
MUST validate table exists and get row count with readonly_query
MUST get current schema with get_schema
MUST create new table with desired structure using transact
MUST migrate data (batched in 500-1,000 row chunks for tables > 3,000 rows)
MUST verify row counts match before proceeding
MUST swap tables: drop original, rename new
MUST recreate indexes using CREATE INDEX ASYNC
Rules:
MUST use batching for tables exceeding 3,000 rows
PREFER batches of 500-1,000 rows for optimal throughput
MUST validate data compatibility before type changes (abort if incompatible)
MUST NOT drop original table until new table is verified
MUST recreate all indexes after table swap using ASYNC
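Put together, a table-recreation plan is an ordered list of single-statement transact calls. The sketch below uses hypothetical names and elides the batched data copy between the CREATE and the swap:

```python
# Recreating `orders` to change a column type. Verify row counts with
# readonly_query before the DROP; never drop until the new table is verified.
recreation_plan = [
    ["CREATE TABLE orders_new ("
     " id UUID PRIMARY KEY,"
     " tenant_id UUID NOT NULL,"
     " total TEXT"  # the column whose type is being changed
     ")"],
    # ... batched INSERT INTO orders_new SELECT ... (500-1,000 rows each) ...
    ["DROP TABLE orders"],
    ["ALTER TABLE orders_new RENAME TO orders"],
    ["CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)"],
]
```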
Workflow 6: MySQL to DSQL Schema Migration
Goal: Migrate MySQL table schemas and DDL operations to DSQL-compatible equivalents, including data type mapping, ALTER TABLE ALTER COLUMN, and DROP COLUMN operations.
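A sketch of the kind of type mapping involved (illustrative, not exhaustive; verify each mapping against the DSQL documentation):

```python
# Hypothetical MySQL-to-DSQL type mappings for schema translation.
MYSQL_TO_DSQL = {
    "TINYINT(1)": "BOOLEAN",
    "DATETIME": "TIMESTAMP",
    "JSON": "TEXT",       # serialize/deserialize in the application
    "ENUM(...)": "TEXT",  # enforce allowed values in the application
    "SET(...)": "TEXT",   # store as comma-separated TEXT
    "INT AUTO_INCREMENT": "UUID DEFAULT gen_random_uuid()",  # no AUTO_INCREMENT in DSQL
}
```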
Critical Rules
ALWAYS use ASYNC indexes - CREATE INDEX ASYNC is mandatory
MUST serialize arrays/JSON as TEXT - store arrays/JSON as TEXT (comma-separated or JSON-encoded strings)
ALWAYS batch under 3,000 rows - stay within transaction limits
REQUIRED: Sanitize SQL inputs with allowlists, regex, and quote escaping - See Input Validation
MUST follow correct Application Layer Patterns - when multi-tenant isolation or application referential integrity is required; refer to Application Layer Patterns
REQUIRED: Use DELETE for truncation - TRUNCATE is not supported; DELETE is the only way to clear a table
SHOULD test any migrations - Verify DDL on dev clusters before production
SHOULD plan for horizontal scale - DSQL is designed to scale to massive workloads without latency degradation; refer to Horizontal Scaling
SHOULD use connection pooling in production applications - Refer to Connection Pooling
SHOULD debug with the troubleshooting guide - Refer to the resources and guidelines in troubleshooting.md
ALWAYS use scoped roles for applications - Create database roles with dsql:DbConnect; refer to Access Control
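The serialize-as-TEXT rule can be sketched with Python's standard json module (the function names are illustrative):

```python
import json

def encode_as_text(value) -> str:
    """Serialize a list/dict to a JSON string for storage in a TEXT column."""
    return json.dumps(value)

def decode_from_text(text: str):
    """Restore the original structure when reading the TEXT column back."""
    return json.loads(text)
```

The encoded string is then embedded as an ordinary quoted literal in the INSERT or UPDATE statement.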