This example configures the plugin to write files to gcs://bucket_name/path/to/files:

```yaml
kind: destination
spec:
  name: "gcs"
  path: "cloudquery/gcs"
  registry: "cloudquery"
  version: "v5.4.26"
  write_mode: "append"
  spec:
    bucket: "bucket_name"
    path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
    format: "parquet" # options: parquet, json, csv
    format_spec:
      # CSV specific parameters:
      # delimiter: ","
      # skip_header: false
      # Parquet specific parameters:
      # version: "v2Latest"
      # root_repetition: "repeated"
      # max_row_group_length: 134217728 # 128 * 1024 * 1024

    # Optional parameters
    # compression: "" # options: gzip
    # no_rotate: false
    # batch_size: 10000
    # batch_size_bytes: 52428800 # 50 MiB
    # batch_timeout: 30s
```
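The `{{...}}` placeholders in `path` are expanded for each output object. A minimal Python sketch of that substitution (the expansion logic is illustrative, not the plugin's actual implementation; the table name is hypothetical):

```python
import uuid
from datetime import datetime, timezone

def expand_path(template: str, table: str, fmt: str) -> str:
    """Illustrative expansion of the path placeholders described on this page."""
    now = datetime.now(timezone.utc)
    replacements = {
        "{{TABLE}}": table,
        "{{SYNC_ID}}": str(uuid.uuid4()),  # one random UUID per sync
        "{{UUID}}": str(uuid.uuid4()),     # one random UUID per file
        "{{FORMAT}}": fmt,
        "{{YEAR}}": now.strftime("%Y"),
        "{{MONTH}}": now.strftime("%m"),
        "{{DAY}}": now.strftime("%d"),
        "{{HOUR}}": now.strftime("%H"),
        "{{MINUTE}}": now.strftime("%M"),
    }
    for placeholder, value in replacements.items():
        template = template.replace(placeholder, value)
    return template

# Produces something like path/to/files/my_table/<uuid>.parquet (UUID varies per file)
print(expand_path("path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}", "my_table", "parquet"))
```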
This destination only supports the `append` `write_mode`. The (top level) spec section is described in the Destination Spec Reference. The GCS destination utilizes batching, and supports the `batch_size`, `batch_size_bytes` and `batch_timeout` options (see below).

The plugin-level spec supports the following options:

- `bucket` (string) (required): the GCS bucket to write to.
- `path` (string) (required): path within the bucket where files will be written, for example `path/to/files/{{TABLE}}/{{UUID}}.parquet`. The following placeholders are supported:
  - `{{TABLE}}` will be replaced with the table name.
  - `{{SYNC_ID}}` will be replaced with the unique identifier of the sync. This value is a UUID and is randomly generated for each sync.
  - `{{FORMAT}}` will be replaced with the file format, such as `csv`, `json` or `parquet`. If compression is enabled, the format will be `csv.gz`, `json.gz` etc.
  - `{{UUID}}` will be replaced with a random UUID to uniquely identify each file.
  - `{{YEAR}}` will be replaced with the current year in `YYYY` format.
  - `{{MONTH}}` will be replaced with the current month in `MM` format.
  - `{{DAY}}` will be replaced with the current day in `DD` format.
  - `{{HOUR}}` will be replaced with the current hour in `HH` format.
  - `{{MINUTE}}` will be replaced with the current minute in `mm` format.
- `format` (string) (required): output file format. Supported values are `csv`, `json` and `parquet`.
- `format_spec` (format_spec) (optional): format-specific options, described below.
- `compression` (string) (optional) (default: empty): compression to apply to output files. The only supported value is `gzip`. Not supported for the `parquet` format.
- `no_rotate` (boolean) (optional) (default: `false`): if set to `true`, the plugin will write to one file per table.
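Combining the options above, a plugin-level spec for gzip-compressed CSV output might look like this (bucket and path values are illustrative):

```yaml
spec:
  bucket: "bucket_name"
  path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
  format: "csv"
  format_spec:
    delimiter: ","       # default
    skip_header: false   # default; keep the header row
  compression: "gzip"    # objects get a .csv.gz extension; not supported for parquet
```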
  Otherwise, for every batch a new file will be created with a different `.<UUID>` suffix.
- `batch_size` (integer) (optional) (default: `10000`): maximum number of records per batch.
- `batch_size_bytes` (integer) (optional) (default: `52428800` (50 MiB)): maximum size of a batch in bytes.
- `batch_timeout` (duration) (optional) (default: `30s` (30 seconds)): maximum interval between batch writes.

CSV-specific `format_spec` parameters:

- `delimiter` (string) (optional) (default: `,`): character used as the field delimiter.
- `skip_header` (boolean) (optional) (default: `false`): if set to `true`, the CSV file will not contain a header row as the first row.

Parquet-specific `format_spec` parameters:

- `version` (string) (optional) (default: `v2Latest`): Parquet format version. Supported values are `v1.0`, `v2.4`, `v2.6` and `v2Latest`.
  `v2Latest` is an alias for the latest version available in the Parquet library, which is currently `v2.6`.
- `root_repetition` (string) (optional) (default: `repeated`): supported values are `undefined`, `required`, `optional` and `repeated`. Some tools require this to be set to `undefined`.
- `max_row_group_length` (integer) (optional) (default: `134217728` (= 128 * 1024 * 1024)): maximum number of rows in a single row group.

To authenticate with GCS, use Google Application Default Credentials. Either:

- run `gcloud auth application-default login` (recommended when running locally), or
- set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to a service account key file (not recommended, as long-lived keys are a security risk).
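As CLI sketches of the two authentication options above (the key-file path is a placeholder):

```shell
# Recommended when running locally: use Application Default Credentials.
gcloud auth application-default login

# Alternative (not recommended; long-lived keys are a security risk):
# point GOOGLE_APPLICATION_CREDENTIALS at a service account key file.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```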