Create new Snowflake dynamic tables with proper configuration
Workflow for creating new dynamic tables with appropriate refresh mode, target lag, and warehouse configuration.
Main skill routes here when the user wants to create a new dynamic table.
Goal: Understand what the user wants to build
Actions:
⚠️ MANDATORY STOPPING POINT: Get requirements before proceeding.
Goal: Ensure base objects are ready for dynamic table
Actions:
Check change tracking on base tables:
SHOW TABLES LIKE '<base_table_name>';
-- Check change_tracking column is TRUE
For views:
SHOW VIEWS LIKE '<base_view_name>';
If change tracking is FALSE, prepare ALTER statement:
ALTER TABLE <base_table> SET CHANGE_TRACKING = TRUE;
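A sketch of the full check-then-enable flow, using `RESULT_SCAN` to read the `change_tracking` flag from the `SHOW` output programmatically (the table name `orders` is a placeholder):

```sql
-- Inspect the change_tracking flag of the base table.
SHOW TABLES LIKE 'orders';
SELECT "name", "change_tracking"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));

-- Only after user approval, and only if the flag above is OFF:
ALTER TABLE orders SET CHANGE_TRACKING = TRUE;
```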
⚠️ MANDATORY STOPPING POINT: Present change tracking status. If ALTER needed, get approval before executing.
Goal: Select appropriate refresh mode and target lag
Actions:
Determine refresh mode based on query complexity:
| Refresh Mode | When to Use |
|---|---|
| AUTO | Development/testing - let Snowflake decide |
| INCREMENTAL | Simple queries, small data changes (<5% per refresh) |
| FULL | Complex queries, non-deterministic functions, or when INCREMENTAL not supported |
Load references/incremental-operators.md to check if query supports incremental.
Determine target lag:
| Target Lag | When to Use |
|---|---|
| DOWNSTREAM | Intermediate tables in pipelines |
| 'X minutes' | Final/leaf tables with specific freshness needs |
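One possible sketch of how the two lag settings combine in a pipeline: the intermediate table uses DOWNSTREAM so it refreshes only when its consumer needs data, and the final table's time-based lag drives the whole chain (all table, column, and warehouse names are placeholders):

```sql
-- Intermediate table: refreshes on demand from downstream consumers.
CREATE OR REPLACE DYNAMIC TABLE stg_orders
  TARGET_LAG = DOWNSTREAM
  WAREHOUSE = dt_wh
  AS SELECT order_id, order_date, amount
     FROM raw_orders
     WHERE status = 'COMPLETE';

-- Final table: its time-based lag sets the freshness of the pipeline.
CREATE OR REPLACE DYNAMIC TABLE daily_order_totals
  TARGET_LAG = '15 minutes'
  WAREHOUSE = dt_wh
  AS SELECT order_date, SUM(amount) AS total
     FROM stg_orders
     GROUP BY order_date;
```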
Load references/supported-queries.md to check query patterns and limitations.
Determine initialization:
| Initialize | When to Use |
|---|---|
| ON_CREATE (default) | Populate immediately |
| ON_SCHEDULE | Defer until first scheduled refresh |
Select warehouse: Recommend a dedicated warehouse for cost isolation
⚠️ MANDATORY STOPPING POINT: Present configuration recommendations and get approval.
Goal: Build the CREATE DYNAMIC TABLE statement
Actions:
Generate statement using approved configuration:
CREATE OR REPLACE DYNAMIC TABLE <database>.<schema>.<name>
TARGET_LAG = '<time>' | DOWNSTREAM
WAREHOUSE = <warehouse_name>
REFRESH_MODE = INCREMENTAL | FULL | AUTO
INITIALIZE = ON_CREATE | ON_SCHEDULE
AS
<SELECT query>;
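A filled-in example of the template, assuming hypothetical object names and an approved INCREMENTAL configuration with a 30-minute lag:

```sql
CREATE OR REPLACE DYNAMIC TABLE analytics.sales.daily_revenue
  TARGET_LAG = '30 minutes'
  WAREHOUSE = dt_wh
  REFRESH_MODE = INCREMENTAL
  INITIALIZE = ON_CREATE
AS
SELECT order_date, region, SUM(amount) AS revenue
FROM analytics.sales.orders
GROUP BY order_date, region;
```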
Review best practices:
Avoid SELECT * (base table schema changes can break the dynamic table)
Use TARGET_LAG = DOWNSTREAM for intermediate tables; only the final table has a time-based lag
⚠️ MANDATORY STOPPING POINT: Present CREATE statement for approval before executing.
Goal: Create the dynamic table and verify it's working
IMPORTANT: Load references/monitoring-functions.md for required database context, named parameter rules, and routing to the specific monitoring reference you need (state, refresh analysis, or graph).
Actions:
Execute the approved CREATE statement
Verify creation - Use SHOW DYNAMIC TABLES and INFORMATION_SCHEMA.DYNAMIC_TABLES() per the monitoring reference
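A sketch of the verification queries, assuming the placeholder names used earlier; column names follow the INFORMATION_SCHEMA.DYNAMIC_TABLES() output but should be confirmed against the monitoring reference:

```sql
-- Confirm the table exists and note the effective refresh_mode.
SHOW DYNAMIC TABLES LIKE 'daily_revenue' IN SCHEMA analytics.sales;

-- Scheduling state and lag statistics from the table function.
SELECT name, scheduling_state, target_lag_sec, mean_lag_sec
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLES(
  NAME => 'analytics.sales.daily_revenue'));
```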
Waiting logic (depends on initialization mode):
If INITIALIZE = ON_CREATE: the initial refresh runs as part of creation; verify it completed successfully.
If INITIALIZE = ON_SCHEDULE: the table stays empty until the first scheduled refresh; ask the user whether to wait for it.
Polling Strategy (only if user chose to wait):
Poll query: Use DYNAMIC_TABLE_REFRESH_HISTORY() from the monitoring reference to check refresh state.
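A minimal polling sketch using the refresh-history table function, with the placeholder name from earlier:

```sql
-- Most recent refresh attempt for the new dynamic table.
SELECT state, state_message, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
  NAME => 'analytics.sales.daily_revenue'))
ORDER BY refresh_start_time DESC
LIMIT 1;
```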
Interpret results:
state = 'SUCCESS' → Proceed to next step
state is NULL/empty → Refresh still in progress; keep polling
state = 'FAILED' → Present error to user
⚠️ MANDATORY STOPPING POINT: Present creation results. Confirm success or diagnose issues.
Goal: Create baseline diary entry and update connection diary
Actions:
Gather baseline metrics using monitoring functions from the reference:
SHOW DYNAMIC TABLES for refresh_mode
INFORMATION_SCHEMA.DYNAMIC_TABLES() for lag statistics and scheduling state
Write DT diary entry to ~/.snowflake/cortex/memory/dynamic_tables/<connection>/<database>.<schema>.<dt_name>.md:
Update connection diary at ~/.snowflake/cortex/memory/dynamic_tables/<connection>/_connection_diary.md:
Use TARGET_LAG = DOWNSTREAM for all intermediate tables in pipelines
Start with REFRESH_MODE = AUTO, then switch to an explicit mode for production
Avoid SELECT * to prevent schema change failures
Resume rule: Only proceed after explicit user approval.