Deploy production multi-service Docker Compose backends (api + worker + postgres) to Google Cloud with gcloud only. Use this whenever users mention docker-compose to Cloud Run migration, separate API/worker deployment, Cloud SQL PostgreSQL setup, Artifact Registry image publishing, service-by-service redeploys, or cleanup of GCP resources.
Use this skill to migrate a local docker-compose backend to a cloud-native GCP deployment.
Target architecture:
- Parse services from docker-compose.yml; do not assume single-container apps.
- Use gcloud for cloud provisioning, build, deploy, and cleanup.
- Scale to zero (--min-instances=0); no public access for the worker.
- Supported commands: deploy-all, deploy-api, deploy-worker, setup-database, delete-all, full-cleanup.

Required:
- PROJECT_ID
- REGION (example: us-central1)
- AR_REPO (Artifact Registry repo name)
- API_SERVICE_NAME
- WORKER_SERVICE_NAME
- SQL_INSTANCE_NAME
- DB_NAME
- DB_USER
- DB_PASSWORD (or confirm Secret Manager secret name)

Optional with defaults:
- PLATFORM default managed
- API_CPU default 1
- API_MEMORY default 512Mi
- WORKER_CPU default 1
- WORKER_MEMORY default 512Mi
- WORKER_CONCURRENCY default 1
- JOB_MAX_RETRIES default 5
- MAX_INSTANCES_API default 10
- MAX_INSTANCES_WORKER default 3

Compose validation:
- docker-compose.yml exists.
- Services api, worker, postgres (or equivalents) are defined.
- api has a server entrypoint (e.g. node dist/server.js).
- worker has a worker entrypoint (e.g. node dist/worker.js).

If compose does not contain these services, stop with a clear mismatch error and show what was found.
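The compose validation above can be sketched as a small POSIX-shell check. This is a rough grep-based approximation, not a real YAML parser; the helper name `check_compose` and the sample file are illustrative.

```shell
# check_compose FILE: verify required top-level services exist in a compose file.
# NOTE: matching on two-space indentation is an approximation of YAML parsing.
check_compose() {
  file="$1"; missing=""
  [ -f "$file" ] || { echo "ERROR: $file not found" >&2; return 1; }
  for svc in api worker postgres; do
    grep -Eq "^  ${svc}:" "$file" || missing="$missing $svc"
  done
  if [ -n "$missing" ]; then
    # Mismatch: report what was actually found so the user can map names.
    found="$(grep -oE '^  [A-Za-z0-9_-]+:' "$file" | tr -d ' :' | tr '\n' ' ')"
    echo "ERROR: missing services:$missing (found: $found)" >&2
    return 1
  fi
  echo "compose OK"
}

# Demo against a sample compose file (illustrative only).
cat > /tmp/compose-demo.yml <<'EOF'
services:
  api:
    build: .
  worker:
    build: .
  postgres:
    image: postgres:16
EOF
check_compose /tmp/compose-demo.yml
```

On a mismatch this prints the services it did find, which satisfies the "show what was found" requirement above.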
Run and validate:
gcloud --version
gcloud auth list --filter=status:ACTIVE --format="value(account)"
gcloud config set project "$PROJECT_ID"
gcloud config set run/region "$REGION"
gcloud services enable run.googleapis.com artifactregistry.googleapis.com sqladmin.googleapis.com cloudbuild.googleapis.com
If no active account exists, instruct user to run:
gcloud auth login
gcloud artifacts repositories describe "$AR_REPO" --location="$REGION" \
|| gcloud artifacts repositories create "$AR_REPO" \
--repository-format=docker \
--location="$REGION" \
--description="Eventflow service images"
Image base:
AR_BASE="$REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO"
API_IMAGE="$AR_BASE/$API_SERVICE_NAME"
WORKER_IMAGE="$AR_BASE/$WORKER_SERVICE_NAME"
Prefer Cloud Build (no local Docker dependency):
TAG="$(date +%Y%m%d-%H%M%S)"
gcloud builds submit --tag "$API_IMAGE:$TAG" .
gcloud builds submit --tag "$WORKER_IMAGE:$TAG" .
Notes:
- gcloud builds submit --tag builds with the Dockerfile at the repo root; when api and worker need different Dockerfiles, use --config with a per-service Cloud Build config (the docker --file equivalent is expressed there).

Database setup (setup-database):

Create Cloud SQL instance if missing:
gcloud sql instances describe "$SQL_INSTANCE_NAME" \
|| gcloud sql instances create "$SQL_INSTANCE_NAME" \
--database-version=POSTGRES_16 \
--cpu=1 \
--memory=3840MiB \
--region="$REGION" \
--availability-type=zonal \
--storage-type=SSD \
--storage-size=20GB
Create database and user idempotently:
gcloud sql databases describe "$DB_NAME" --instance="$SQL_INSTANCE_NAME" \
|| gcloud sql databases create "$DB_NAME" --instance="$SQL_INSTANCE_NAME"
USER_EXISTS="$(gcloud sql users list --instance="$SQL_INSTANCE_NAME" --filter="name:$DB_USER" --format='value(name)' | head -n 1)"
if [ -z "$USER_EXISTS" ]; then
gcloud sql users create "$DB_USER" --instance="$SQL_INSTANCE_NAME" --password="$DB_PASSWORD"
fi
Always reset password when provided (safe update path):
gcloud sql users set-password "$DB_USER" --instance="$SQL_INSTANCE_NAME" --password="$DB_PASSWORD"
Get connection name:
INSTANCE_CONNECTION_NAME="$(gcloud sql instances describe "$SQL_INSTANCE_NAME" --format='value(connectionName)')"
Connection string for Cloud Run + Cloud SQL Unix socket:
DATABASE_URL="postgresql://$DB_USER:$DB_PASSWORD@/$DB_NAME?host=/cloudsql/$INSTANCE_CONNECTION_NAME"
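One gotcha with the connection string above: a DB_PASSWORD containing reserved URL characters (@, :, /, ?) will corrupt the URL. A minimal sketch, assuming python3 is on PATH (any URL-encoding tool works equally well), percent-encodes the password first:

```shell
# Percent-encode the password before embedding it in DATABASE_URL.
DB_PASSWORD='p@ss/w:rd'   # example value for illustration only
ENCODED_PASSWORD="$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$DB_PASSWORD")"
echo "$ENCODED_PASSWORD"  # -> p%40ss%2Fw%3Ard
DATABASE_URL="postgresql://$DB_USER:$ENCODED_PASSWORD@/$DB_NAME?host=/cloudsql/$INSTANCE_CONNECTION_NAME"
```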
Cost warning to always show: Cloud SQL bills continuously while the instance exists, even with zero traffic; unlike Cloud Run services with --min-instances=0, it never scales to zero. Point the user at full-cleanup when the deployment is no longer needed.
Deploy API (deploy-api):

gcloud run deploy "$API_SERVICE_NAME" \
--image "$API_IMAGE:$TAG" \
--platform "$PLATFORM" \
--region "$REGION" \
--allow-unauthenticated \
--min-instances=0 \
--max-instances="$MAX_INSTANCES_API" \
--cpu="$API_CPU" \
--memory="$API_MEMORY" \
--add-cloudsql-instances "$INSTANCE_CONNECTION_NAME" \
--set-env-vars "NODE_ENV=production,PORT=3000,DATABASE_URL=$DATABASE_URL"
After deploy, print URL:
gcloud run services describe "$API_SERVICE_NAME" --region "$REGION" --format='value(status.url)'
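Immediately after deploy, the fresh revision can briefly refuse connections while it warms up. A small retry helper (a sketch; the /health path in the usage comment is an assumption, substitute the app's real route) makes a post-deploy smoke test less flaky:

```shell
# retry N CMD...: run CMD up to N times with linear backoff; succeed on first pass.
retry() {
  n="$1"; shift; i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$n" ] && return 1
    sleep "$i"
  done
}

# Usage sketch (assumes the API exposes a /health endpoint):
# API_URL="$(gcloud run services describe "$API_SERVICE_NAME" --region "$REGION" --format='value(status.url)')"
# retry 5 curl -fsS "$API_URL/health" && echo "API healthy"
retry 3 true && echo "retry helper OK"
```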
Deploy worker (deploy-worker):

Worker must be private:

gcloud run deploy "$WORKER_SERVICE_NAME" \
gcloud run deploy "$WORKER_SERVICE_NAME" \
--image "$WORKER_IMAGE:$TAG" \
--platform "$PLATFORM" \
--region "$REGION" \
--no-allow-unauthenticated \
--min-instances=0 \
--max-instances="$MAX_INSTANCES_WORKER" \
--concurrency="$WORKER_CONCURRENCY" \
--cpu="$WORKER_CPU" \
--memory="$WORKER_MEMORY" \
--add-cloudsql-instances "$INSTANCE_CONNECTION_NAME" \
--command "node" \
--args "dist/worker.js" \
--set-env-vars "NODE_ENV=production,DATABASE_URL=$DATABASE_URL,WORKER_CONCURRENCY=$WORKER_CONCURRENCY,JOB_MAX_RETRIES=$JOB_MAX_RETRIES"
Important behavior check: Cloud Run services must listen for HTTP on $PORT; a worker that only polls the job queue and never binds a port will fail its startup probe, so confirm dist/worker.js starts a listener (or surface the mismatch to the user).

Validate after deployment using logs:
gcloud run services logs read "$API_SERVICE_NAME" --region "$REGION" --limit=100
gcloud run services logs read "$WORKER_SERVICE_NAME" --region "$REGION" --limit=100
Look for:
- pg-boss startup success

Deploy orchestration (deploy-all behavior):

When user runs deploy-all:
- API-only changes (src/server.ts, src/routes/**, shared modules) -> redeploy API.
- Worker-only changes (src/worker.ts, src/actions/**, queue services) -> redeploy worker.
- Shared changes (package.json, Dockerfile, src/services/**, src/db/**) -> redeploy both.

Order of operations: setup-database, then deploy-api, then deploy-worker. deploy-all always starts with setup-database (or validates the existing instance).

Teardown (delete-all):

gcloud run services delete "$API_SERVICE_NAME" --region "$REGION" --quiet || true
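The change-to-service mapping above can be sketched as a path classifier. The helper name `classify_change` is hypothetical, and the paths mirror the examples in this section; a real implementation would feed it `git diff --name-only` output.

```shell
# classify_change PATH: decide which deploy target a changed file implies.
classify_change() {
  case "$1" in
    package.json|Dockerfile|src/services/*|src/db/*) echo both ;;
    src/server.ts|src/routes/*)                      echo api ;;
    src/worker.ts|src/actions/*)                     echo worker ;;
    *)                                               echo both ;;   # unknown path: redeploy both, the safe default
  esac
}

classify_change src/routes/events.ts   # -> api
classify_change src/actions/email.ts   # -> worker
classify_change src/db/schema.ts       # -> both
```

Note the shared patterns are matched first, so a file like src/db/schema.ts triggers both redeploys even though it also lives under src/.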
gcloud run services delete "$WORKER_SERVICE_NAME" --region "$REGION" --quiet || true
Full cleanup (full-cleanup):

Cloud SQL destructive step (must confirm):
gcloud sql instances delete "$SQL_INSTANCE_NAME"
Delete all images in repo path:
gcloud artifacts docker images list "$AR_BASE" --include-tags
gcloud artifacts repositories delete "$AR_REPO" --location="$REGION" --quiet
Security posture:
- API stays public (--allow-unauthenticated).
- Worker stays private (--no-allow-unauthenticated).
- Database is reached through the Cloud SQL connector (--add-cloudsql-instances) instead of a public DB IP.