BigQuery
The BigQuery plugin syncs data from any CloudQuery source plugin(s) to a BigQuery database running on Google Cloud Platform.
Example configuration for a sync:

```yaml
kind: destination
spec:
  name: bigquery
  path: cloudquery/bigquery
  registry: cloudquery
  version: "v4.3.8"
  write_mode: "append"
  # Learn more about the configuration options at https://cql.ink/bigquery_destination
  spec:
    project_id: ${PROJECT_ID}
    dataset_id: ${DATASET_ID}
    # Optional parameters
    # dataset_location: ""
    # time_partitioning: none # options: "none", "hour", "day"
    # service_account_key_json: ""
    # endpoint: ""
    # batch_size: 10000
    # batch_size_bytes: 5242880 # 5 MiB
    # batch_timeout: 10s
    # client_project_id: "*detect-project-id*"
```
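With the spec saved to a file, say `bigquery.yaml` (filename illustrative), alongside a configured source plugin, the two required variables can be supplied through the environment, which CloudQuery expands when reading the spec:

```bash
# Values are placeholders; use your own project and dataset
export PROJECT_ID=my-gcp-project
export DATASET_ID=my_dataset

# Run the sync with the CloudQuery CLI
cloudquery sync bigquery.yaml
```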
- `PROJECT_ID` - The Google Cloud Project ID
- `DATASET_ID` - The Google Cloud BigQuery Dataset ID

The `client_project_id` variable can be used to run BigQuery queries in a project different from where the destination table is located. If you set `client_project_id` to `*detect-project-id*`, it will automatically detect the project ID from the environment variable or application default credentials.
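For example (project names here are hypothetical), query jobs can run in a separate project from the one that owns the dataset:

```yaml
spec:
  project_id: data-project         # project that owns the destination dataset
  dataset_id: my_dataset
  client_project_id: query-project # project where BigQuery query jobs run
```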
The destination buffers rows and writes them in batches, controlled by `batch_size` and `batch_size_bytes` (see the tuning sketch below). Note that the BigQuery plugin only supports the `append` write mode.
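A sketch of tuning the batching parameters (all values illustrative):

```yaml
spec:
  project_id: ${PROJECT_ID}
  dataset_id: ${DATASET_ID}
  batch_size: 50000           # flush after this many records
  batch_size_bytes: 104857600 # or after 100 MiB of buffered data
  batch_timeout: 30s          # flush at least every 30 seconds
```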
Authentication

To authenticate with Google Cloud, either:

- Run `gcloud auth application-default login` (recommended when running locally)
- Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to a service account key file. (Not recommended, as long-lived keys are a security risk.)
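For the second option (the key file path is a placeholder):

```bash
# Point Application Default Credentials at a service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
cloudquery sync bigquery.yaml
```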
The plugin-specific (inner) spec supports the following options:

- `project_id` (string) (required)
- `dataset_id` (string) (required) - for example, `my_dataset`. This dataset needs to be created before running a sync or migration (see the sketch after this list).
- `dataset_location` (string) (optional)
- `time_partitioning` (string) (options: `none`, `hour`, `day`) (default: `none`) - The partition column used is `_cq_sync_time`, so that all rows for a sync run will be partitioned on the hour/day the sync started.
- `service_account_key_json` (string) (optional) (default: empty)
- `endpoint` (string) (optional)
- `batch_size` (integer) (optional) (default: `10000`)
- `batch_size_bytes` (integer) (optional) (default: `5242880` (5 MiB))
- `batch_timeout` (duration) (optional) (default: `10s` (10 seconds))
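Since the dataset must exist before the first sync, one way to create it is with the `bq` CLI (project, dataset, and location are placeholders):

```bash
# Create the destination dataset ahead of the first sync
bq mk --dataset --location=US my-gcp-project:my_dataset
```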
Data types

The BigQuery destination (`v3.0.0` and later) supports most Apache Arrow types, and each supported type is mapped to a corresponding BigQuery data type. BigQuery uses `REPEATED` columns to represent lists; lists of lists are not supported right now.