Query Google BigQuery datasets using the bq CLI. Use when the user asks to query, explore, or extract data from BigQuery.
Query BigQuery datasets using the `bq` CLI (part of the Google Cloud SDK). Always use `--use_legacy_sql=false` to run standard SQL.
**Prerequisites:** the `bq` CLI must be available, and you must be authenticated via `gcloud auth login` or a service account.

| Environment | Project ID |
|---|---|
| Production | skilled-fulcrum-90207 |
| Development | holistics-data-294707 |
Before running any query, check the current active project:

```shell
gcloud config get-value project
```

Switch project if needed:

```shell
gcloud config set project skilled-fulcrum-90207   # production
gcloud config set project holistics-data-294707   # development
```
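Because production and development differ only by project ID, the check above can be wrapped in a small guard that refuses to proceed against the wrong project. This is a sketch, not part of the bq toolchain: `check_project` is a hypothetical helper, and in real use its second argument would come from `gcloud config get-value project`.

```shell
# Hypothetical guard: abort unless the active project matches the expected one.
# Real usage: check_project skilled-fulcrum-90207 "$(gcloud config get-value project)"
check_project() {
  expected="$1"
  actual="$2"
  if [ "$actual" != "$expected" ]; then
    echo "refusing to query: active project is '$actual', expected '$expected'" >&2
    return 1
  fi
}
```

Calling this at the top of any query script makes it impossible to accidentally run an exploratory query against production.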
Run `gcloud config get-value project` to confirm the environment, then query:

```shell
bq query --use_legacy_sql=false --project_id=PROJECT_ID \
  "SELECT * FROM dataset.table LIMIT 10"
```
Always run a dry run first on large or unfamiliar tables to check bytes processed:

```shell
bq query --use_legacy_sql=false --dry_run --project_id=PROJECT_ID \
  "SELECT * FROM dataset.table"
```
List datasets in a project:

```shell
bq ls --project_id=PROJECT_ID
```

List tables in a dataset:

```shell
bq ls --project_id=PROJECT_ID dataset_name
```

Inspect a table's schema:

```shell
bq show --format=json PROJECT_ID:dataset.table | jq '.schema.fields'
```

Check a table's size and type before querying it:

```shell
bq show --format=json PROJECT_ID:dataset.table | jq '{rows: .numRows, bytes: .numBytes, type: .type}'
```
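For a quick column overview, the schema JSON can be flattened to name/type pairs. This assumes the standard `schema.fields` shape that `bq show --format=json` emits; nested RECORD fields are not expanded here.

```shell
# Flatten top-level schema fields to "name<TAB>type" lines.
# Usage: bq show --format=json PROJECT_ID:dataset.table | schema_columns
schema_columns() {
  jq -r '.schema.fields[] | [.name, .type] | @tsv'
}
```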
Best practices:

- Run `gcloud config get-value project` before querying.
- Use `--dry_run` before running queries on large tables to check estimated bytes.
- Use `LIMIT 10` during exploration; remove it only for final queries.
- Use fully qualified `project.dataset.table` names to avoid ambiguity.
- Use `SAFE_` prefix functions (e.g., `SAFE_DIVIDE`, `SAFE_CAST`) to avoid query failures on bad data.
- Avoid `SELECT *` on wide tables; it is expensive. Select only the needed columns.
- Use `bq show --format=json` to inspect table schemas before writing queries.
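Putting the practices above together, a final (post-exploration) query might look like the sketch below: explicit columns instead of `SELECT *`, a fully qualified table name, `SAFE_DIVIDE` for robustness, and no `LIMIT`. The dataset, table, and column names are hypothetical.

```shell
# Hypothetical final query; run it with:
#   bq query --use_legacy_sql=false --project_id=skilled-fulcrum-90207 "$sql"
sql='SELECT order_id,
       SAFE_DIVIDE(revenue, quantity) AS unit_price
FROM `skilled-fulcrum-90207.analytics.orders`'
echo "$sql"
```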