Use when migrating a dbt project from one data platform or data warehouse to another (e.g., Snowflake to Databricks, Databricks to Snowflake) using dbt Fusion's real-time compilation to identify and fix SQL dialect differences.
This skill guides migration of a dbt project from one data platform (source) to another (target) — for example, Snowflake to Databricks, or Databricks to Snowflake.
The core approach: dbt Fusion compiles SQL in real-time and produces rich, detailed error logs that tell you exactly what's wrong and where. We trust Fusion entirely for dialect conversion — no need to pre-document every SQL pattern difference. The workflow is: read Fusion's errors, fix them, recompile, repeat until done. Combined with dbt unit tests (generated on the source platform before migration), we prove both compilation correctness and data correctness on the target platform.
Success criteria: Migration is complete when:
- `dbtf compile` finishes with 0 errors and 0 warnings on the target platform
- All unit tests pass (`dbt test --select test_type:unit`)
- All models materialize successfully (`dbtf run`)

Validation cost: Use `dbtf compile` as the primary iteration gate — it's free (no warehouse queries) and catches both errors and warnings from static analysis. Only `dbtf run` and `dbt test` incur warehouse cost; run those only after compile is clean.

Copy this checklist to track migration progress:
Migration Progress:
- [ ] Step 1: Verify dbt Fusion is installed and working
- [ ] Step 2: Assess source project (dbtf compile — 0 errors on source)
- [ ] Step 3: Generate unit tests on source platform
- [ ] Step 4: Switch dbt target to destination platform
- [ ] Step 5: Run Fusion compilation and fix all errors (dbtf compile — 0 errors on target)
- [ ] Step 6: Run and validate unit tests on target platform
- [ ] Step 7: Final validation and document changes in migration_changes.md
When a user asks to migrate their dbt project to a different data platform, follow these steps. Create a migration_changes.md file documenting all code changes (see template below).
Fusion is required — it provides the real-time compilation and rich error diagnostics that power this migration. Fusion may be available as dbtf or as dbt.
To detect which command to use:
- Check whether `dbtf` is available — if it exists, it's Fusion
- If `dbtf` is not found, run `dbt --version` — if the output starts with `dbt-fusion`, then `dbt` is Fusion

Use whichever command is Fusion everywhere this skill references `dbtf`. If neither provides Fusion, guide the user through installation. See references/installing-dbt-fusion.md for details.
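The detection steps above can be sketched as a small shell helper (the `detect_fusion` function name and `DBT` variable are illustrative, not part of dbt):

```shell
# Pick the Fusion binary: prefer dbtf; fall back to dbt only if it reports dbt-fusion.
detect_fusion() {
  if command -v dbtf >/dev/null 2>&1; then
    echo dbtf
  elif dbt --version 2>/dev/null | head -n 1 | grep -q '^dbt-fusion'; then
    echo dbt
  else
    return 1  # neither command provides Fusion
  fi
}

DBT=$(detect_fusion) || echo "dbt Fusion not found: see references/installing-dbt-fusion.md" >&2
```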
Run dbtf compile on the source platform target to confirm the project compiles cleanly with 0 errors. This establishes the baseline.
dbtf compile
If there are errors on the source platform, those must be resolved first before starting the migration. The migrating-dbt-core-to-fusion skill can help resolve Fusion compatibility issues.
While still connected to the source platform, generate dbt unit tests for key models to capture expected data outputs as a "golden dataset." These tests will prove data consistency after migration.
Which models to test: You must test every leaf node — models at the very end of the DAG that no other model depends on via ref(). Do not guess leaf nodes from naming conventions — derive them programmatically using the methods in references/generating-unit-tests.md. List all leaf nodes explicitly and confirm the count before writing tests. Also test any mid-DAG model with significant transformation logic (joins, calculations, case statements).
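One programmatic way to derive leaf nodes (a sketch, not necessarily the exact method in references/generating-unit-tests.md; assumes a compiled `target/manifest.json` and `jq` installed): a model is a leaf if its unique_id never appears in any other model's `depends_on.nodes`.

```shell
# List leaf models: model unique_ids that no other model depends on via ref().
if [ -f target/manifest.json ]; then
  jq -r '
    ([.nodes | to_entries[] | select(.value.resource_type == "model") | .key]) as $models
    | ([.nodes[] | select(.resource_type == "model") | .depends_on.nodes[]?] | unique) as $referenced
    | ($models - $referenced)[]
  ' target/manifest.json
fi
```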
How to generate tests:
- Identify candidate models with `dbt ls --select "+tag:core" --resource-type model` or inspect the DAG
- Run `dbt show --select model_name --limit 5` to preview output rows on the source platform
- Author the tests in `dict` format — see the adding-dbt-unit-test skill for detailed guidance on authoring unit tests
- Save the tests in a `_unit_tests.yml` file

See references/generating-unit-tests.md for detailed strategies on selecting test rows and handling complex models.
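A minimal unit test in dict format might look like this (a sketch: the model, input, and column names are hypothetical, and expected rows should come from actual `dbt show` output on the source platform):

```yaml
unit_tests:
  - name: test_fct_orders_totals
    model: fct_orders              # hypothetical leaf model
    given:
      - input: ref('stg_orders')   # hypothetical upstream model
        format: dict
        rows:
          - {order_id: 1, amount: 10.50}
          - {order_id: 1, amount: 4.25}
          - {order_id: 2, amount: 7.00}
    expect:
      format: dict
      rows:
        - {order_id: 1, total_amount: 14.75}
        - {order_id: 2, total_amount: 7.00}
```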
Verify tests pass on source: Run dbt test --select test_type:unit on the source platform to confirm all unit tests pass before proceeding.
Add a new target output for the destination platform within the existing profile in profiles.yml, then set it as the active target. Do not change the profile key in dbt_project.yml.
- Add a new output in `profiles.yml` under the existing profile for the destination platform
- Update the `target:` key in the profile to point to the new output
- Update source definitions (`_sources.yml`) if the database/schema names differ on the destination platform
- Review platform-specific configs (e.g., `+snowflake_warehouse`, `+file_format: delta`)

See references/switching-targets.md for detailed guidance.
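For a Snowflake-to-Databricks migration, `profiles.yml` might end up looking like this (a sketch: the profile name, output names, and connection fields are illustrative; adapt them to your project and platform):

```yaml
my_project:                  # existing profile name; dbt_project.yml keeps pointing here
  target: databricks_target  # flip the active target to the new output
  outputs:
    snowflake_target:        # existing source-platform output, left unchanged
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      database: analytics
      schema: public
      # ...existing credentials...
    databricks_target:       # new destination output
      type: databricks
      host: "{{ env_var('DATABRICKS_HOST') }}"
      http_path: "{{ env_var('DATABRICKS_HTTP_PATH') }}"
      token: "{{ env_var('DATABRICKS_TOKEN') }}"
      catalog: analytics
      schema: public
```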
This is the core migration step. First, clear the target cache to avoid stale schema issues from the source platform, then run dbtf compile against the target platform — Fusion will flag every dialect incompatibility at once.
rm -rf target/
dbtf compile
How to work through errors:
- Function differences (e.g., `GENERATOR` on Snowflake vs. `sequence` on Databricks, `nvl2` vs. `CASE WHEN`)
- Type differences (e.g., `VARIANT` on Snowflake vs. `STRING` on Databricks)
- Lateral/table function differences (e.g., `FLATTEN` on Snowflake vs. `EXPLODE` on Databricks)
- Platform-specific configs (e.g., `+snowflake_warehouse` or `+file_format: delta`)

Trust Fusion's errors: The error logs are the primary guide. Do not try to anticipate or pre-fix issues that Fusion hasn't flagged — this leads to unnecessary changes. Fix exactly what Fusion reports.
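As an illustration of the kind of rewrite a Fusion error typically leads to (table and column names are hypothetical; assumes `items` is an array column):

```sql
-- Snowflake (source): lateral flatten over an array column
select o.order_id, f.value::string as item
from orders o,
     lateral flatten(input => o.items) f;

-- Databricks (target): the same logic with explode
select o.order_id, item
from orders o
     lateral view explode(o.items) t as item;
```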
Continue iterating until dbtf compile succeeds with 0 errors and 0 warnings. Warnings become errors in production — treat them as blockers. Common warnings to resolve:
- Implicit numeric precision: aggregates like `SUM()` on Snowflake produce `NUMBER` with unspecified precision/scale, risking silent rounding. Fix by casting: `cast(sum(col) as decimal(18,2))`. This is a cross-platform issue — Databricks doesn't enforce this, Snowflake does.
- Leftover source-platform packages (e.g., `spark_utils`, `dbt-databricks`) that are no longer needed on the target. Remove them from `packages.yml` and any associated config (e.g., dispatch blocks, `+file_format: delta`). Also check `dbt_packages/` for stale installed packages and re-run `dbtf deps` after changes.
- Unused-profile warnings: if `profiles.yml` contains profiles for multiple platforms (e.g., both `snowflake_demo` and `databricks_demo`), Fusion may load adapters for all profiles and warn about unused ones. These are non-actionable at the project level — inform the user but don't count them as blockers.

With compilation succeeding, run the unit tests that were generated in Step 3:
dbt test --select test_type:unit
If tests fail:
- Check for numeric precision differences between platforms; consider `round()` or approximate comparisons for decimal columns.

Iterate until all unit tests pass.
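For instance, if a precision mismatch is the cause, one option is to pin the precision in the model itself so both platforms return identical values (a sketch; model and column names are hypothetical):

```sql
select
  order_id,
  -- explicit precision/scale keeps expected unit-test rows stable across platforms
  round(cast(sum(amount) as decimal(18, 2)), 2) as total_amount
from {{ ref('stg_orders') }}
group by order_id
```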
If you already ran dbtf run (to materialize models for unit testing) and all unit tests passed, the migration is proven — don't repeat work with a redundant dbtf build. If you haven't yet materialized models, run dbtf build to do everything in one step. Verify all three success criteria (defined above) are met.
Document all changes in migration_changes.md using the template below, then summarize the migration for the user.
Use this structure when documenting migration changes:
# Cross-Platform Migration Changes
## Migration Details
- **Source platform**: [e.g., Snowflake]
- **Target platform**: [e.g., Databricks]
- **dbt project**: [project name]
- **Total models migrated**: [count]
## Migration Status
- **Final compile errors**: 0
- **Final unit test failures**: 0
- **Final build status**: Success
## Configuration Changes
### dbt_project.yml
- [List of config changes]
### Source Definitions
- [List of source definition changes]
### Target Changes
- [Target configuration details]
## Package Changes
- [Any package additions, removals, or version changes]
## Unit Test Adjustments
- [Any changes made to unit tests to accommodate platform differences]
## Notes for User
- [Any manual follow-up needed]
- [Known limitations or trade-offs]
- Avoid `dbtf run` for iterative validation. It costs warehouse compute. Use `dbtf compile` (free) to iterate on fixes. Only run `dbtf run` and `dbt test` once compile is fully clean.
- Always `rm -rf target/` before compiling against a new platform. Fusion caches warehouse schemas in the target directory, and stale schemas from the source platform can cause false column-not-found errors.
- Versioned models (those with `versions:` in their YAML) may fail with dbt1048 errors. Workaround: test non-versioned models, or test versioned models through their non-versioned intermediate dependencies.
- `dbtf show --select` validates against the warehouse schema. If models haven't been materialized on the target platform yet, use `dbtf show --inline "SELECT ..."` for direct warehouse queries instead.
- Fusion validates Python models' `dbt.ref()` calls even when the model is disabled. Disabling a Python model does not prevent Fusion from validating its `dbt.ref()` calls (dbt1062). Workaround: comment out the `dbt.ref()` lines or remove the Python models if they're not relevant to the migration.
- Platform-specific configs like `snowflake_warehouse` or `cluster_by` won't cause Fusion compile errors on the source platform — they'll only surface when compiling against the target. Don't pre-remove them.