Google BigQuery data warehouse queries and schema inspection. Use when running SQL queries, listing datasets/tables, or inspecting table schemas in BigQuery.
IMPORTANT: Credentials are injected automatically by a proxy layer. Do NOT check for BIGQUERY_SERVICE_ACCOUNT_KEY in environment variables - it won't be visible to you. Just run the scripts directly; authentication is handled transparently.
Configuration environment variables you CAN check (non-secret):
- BIGQUERY_PROJECT_ID - GCP project ID
- BIGQUERY_DATASET - Default dataset

List datasets and tables before writing queries.
LIST DATASETS → LIST TABLES → GET TABLE SCHEMA → QUERY
All scripts are in .claude/skills/database-bigquery/scripts/
python .claude/skills/database-bigquery/scripts/list_datasets.py
python .claude/skills/database-bigquery/scripts/list_tables.py --dataset DATASET_ID
python .claude/skills/database-bigquery/scripts/get_table_schema.py --dataset DATASET_ID --table TABLE_ID
python .claude/skills/database-bigquery/scripts/query.py --query "SELECT * FROM dataset.table LIMIT 10" [--dataset DEFAULT_DATASET] [--max-results 1000]
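Passing raw SQL through a shell `--query` flag is easy to get wrong once the statement contains quotes. A minimal sketch of a hypothetical helper (not part of the skill's scripts) that builds the `query.py` invocation as an argv list, so no shell quoting is needed at all:

```python
import shlex
from typing import List, Optional

SCRIPT = ".claude/skills/database-bigquery/scripts/query.py"

def build_query_cmd(sql: str, dataset: Optional[str] = None,
                    max_results: int = 1000) -> List[str]:
    """Build an argv list for query.py; a list avoids shell-quoting bugs."""
    cmd = ["python", SCRIPT, "--query", sql, "--max-results", str(max_results)]
    if dataset:
        cmd += ["--dataset", dataset]
    return cmd

argv = build_query_cmd("SELECT * FROM `dataset.table` LIMIT 10", dataset="analytics")
# shlex.join renders a copy-pasteable shell line with correct quoting
print(shlex.join(argv))
```

If you do invoke the script through a shell instead, wrap the SQL in single quotes so backticks and `*` are not interpreted.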
-- Standard SQL (default)
SELECT * FROM `project.dataset.table` LIMIT 10
-- Aggregate with time
SELECT DATE(timestamp) AS day, COUNT(*) AS events
FROM `dataset.events`
WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
GROUP BY 1 ORDER BY 1 DESC
-- Partitioned table query (cost-efficient)
SELECT * FROM `dataset.events`
WHERE _PARTITIONTIME >= TIMESTAMP('2026-01-01')
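Hard-coding partition dates invites stale filters. A small sketch (hypothetical helper, names are assumptions, not part of the skill) that derives the `_PARTITIONTIME` predicate for the last N days:

```python
from datetime import date, timedelta
from typing import Optional

def partition_filter(days_back: int, today: Optional[date] = None) -> str:
    """Return a _PARTITIONTIME predicate covering the last `days_back` days."""
    today = today or date.today()
    start = today - timedelta(days=days_back)
    return f"_PARTITIONTIME >= TIMESTAMP('{start.isoformat()}')"

sql = f"SELECT * FROM `dataset.events` WHERE {partition_filter(7, today=date(2026, 1, 8))}"
# → SELECT * FROM `dataset.events` WHERE _PARTITIONTIME >= TIMESTAMP('2026-01-01')
```

Keeping the filter on `_PARTITIONTIME` itself (not on a wrapped expression) lets BigQuery prune partitions and keeps scan costs down.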
1. list_datasets.py (find available datasets)
2. list_tables.py --dataset <dataset> (find tables)
3. get_table_schema.py --dataset <dataset> --table <table>
4. query.py --query "SELECT ..."
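The four steps above can be chained from Python via subprocess. A sketch under the assumption that each script simply prints to stdout (the skill does not document an output format, so the parsing of each result is left to the caller):

```python
import subprocess
from typing import List

BASE = ".claude/skills/database-bigquery/scripts"

def step_cmd(script: str, *args: str) -> List[str]:
    """Build the argv list for one skill script."""
    return ["python", f"{BASE}/{script}", *args]

def run_step(script: str, *args: str) -> str:
    """Run one skill script and return its stdout; raises on a non-zero exit."""
    result = subprocess.run(step_cmd(script, *args),
                            capture_output=True, text=True, check=True)
    return result.stdout

# Workflow (dataset/table names are illustrative):
# datasets = run_step("list_datasets.py")
# tables   = run_step("list_tables.py", "--dataset", "analytics")
# schema   = run_step("get_table_schema.py", "--dataset", "analytics", "--table", "events")
# rows     = run_step("query.py", "--query", "SELECT COUNT(*) FROM `analytics.events`")
```

Because credentials are injected by the proxy layer, no auth setup appears in this flow; failures surface as a `CalledProcessError` carrying the script's stderr.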