Migrate ML workloads from AWS/GCP/Azure to CoreWeave GPU cloud. Use when moving inference services from hyperscaler GPU instances, migrating training pipelines, or evaluating CoreWeave vs cloud GPU costs. Trigger with phrases like "migrate to coreweave", "coreweave migration", "move from aws to coreweave", "coreweave vs aws gpu".
Approximate on-demand hourly rates (prices change frequently; verify current rates before committing):

| Instance | AWS | CoreWeave | Savings |
|---|---|---|---|
| 1x A100 80GB | ~$3.60/hr (p4d) | ~$2.21/hr | ~39% |
| 8x A100 80GB | ~$32/hr (p4d.24xl) | ~$17.70/hr | ~45% |
| 1x H100 80GB | ~$6.50/hr (p5) | ~$4.76/hr | ~27% |
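For a rough sense of what the table's rates mean at steady-state, a back-of-envelope monthly comparison for a single 8x A100 node running 24/7 (using the approximate rates above):

```shell
# Back-of-envelope: 24/7 monthly cost of one 8x A100 node at the table's rates
awk 'BEGIN { h = 24 * 30; printf "AWS: $%d/mo  CoreWeave: $%d/mo\n", 32 * h, 17.70 * h }'
# → AWS: $23040/mo  CoreWeave: $12744/mo
```

Actual savings depend on utilization, committed-use discounts, and egress costs, none of which the table captures.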
```shell
# If running on bare EC2/GCE, containerize first
docker build -t inference-server:v1 .
docker push ghcr.io/myorg/inference-server:v1
```
Key changes from AWS EKS / GKE:
- Node selection: use the `gpu.nvidia.com/class` label instead of `nvidia.com/gpu.product`.
- Storage classes are CoreWeave-specific (e.g. `shared-ssd-ord1`).

Run both old and new infrastructure simultaneously and gradually shift traffic. Decommission the old GPU instances after a validation period.
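A minimal sketch of what the node-selector change looks like in a Deployment spec. The deployment name, image, and the class label value (`A100_PCIE_80GB`) are illustrative placeholders; check the node labels exposed in your CoreWeave region:

```yaml
# Hypothetical Deployment fragment -- name, image, and label value are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      nodeSelector:
        gpu.nvidia.com/class: A100_PCIE_80GB  # on EKS/GKE this was nvidia.com/gpu.product
      containers:
        - name: server
          image: ghcr.io/myorg/inference-server:v1
          resources:
            limits:
              nvidia.com/gpu: 1
```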
| Issue | Solution |
|---|---|
| Different CUDA drivers | Match container CUDA to CoreWeave node drivers |
| Storage migration | Use rclone or rsync to move data to CoreWeave PVC |
| DNS changes | Update ingress/load balancer DNS |
| IAM differences | CoreWeave uses kubeconfig, not IAM roles |
This completes the CoreWeave skill pack. Start with `coreweave-install-auth` for new deployments.