Kafka
This destination plugin lets you sync data from a CloudQuery source to Kafka in various formats such as CSV and JSON. Each table will be pushed to a separate topic.
kind: destination
spec:
  name: "kafka"
  path: "cloudquery/kafka"
  registry: "cloudquery"
  version: "v5.5.3"
  write_mode: "append"
  spec:
    # required - list of brokers to connect to
    brokers: ["<broker-host>:<broker-port>"]
    # optional - if connecting via SASL/PLAIN, the username and password to use. If not set, no authentication will be used.
    sasl_username: "${KAFKA_SASL_USERNAME}"
    sasl_password: "${KAFKA_SASL_PASSWORD}"
    format: "json" # options: parquet, json, csv
    format_spec:
      # CSV specific parameters:
      # delimiter: ","
      # skip_header: false
      # Parquet specific parameters:
      # version: "v2Latest"
      # root_repetition: "repeated"
      # max_row_group_length: 134217728 # 128 * 1024 * 1024
    # Optional parameters
    # compression: "" # options: gzip
    # verbose: false
    # batch_size: 1000
    # topic_details:
    #   num_partitions: 1
    #   replication_factor: 1
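Once the spec is saved to a file, it can be used in a sync together with a source spec via the CloudQuery CLI, for example (the file names below are placeholders):

cloudquery sync source.yml kafka.yml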
This plugin supports only the append write_mode. The (top level) spec section is described in the Destination Spec Reference.

Kafka spec

- brokers ([]string) (required): List of brokers to connect to. For example, "localhost:9092" is the default URL for a local Kafka broker.
- format (string) (required): Output format. Supported values are csv, json and parquet.
- format_spec (format_spec) (optional): Optional format-specific parameters (see below).
- compression (string) (optional) (default: empty): Compression algorithm to use. Supported value: gzip (empty for none). Not supported for the parquet format.
- sasl_username (string) (optional) (default: empty): If connecting via SASL/PLAIN, the username to use. If not set, no authentication is used.
- sasl_password (string) (optional) (default: empty): If connecting via SASL/PLAIN, the password to use.
- enforce_tls_verification (boolean) (optional) (default: false): If true, the plugin will verify the TLS certificate of the Kafka broker.
- verbose (boolean) (optional) (default: false): If true, the plugin will log all underlying Kafka client messages to the log.
- batch_size (integer) (optional) (default: 1000): Maximum number of records to batch together before writing.
- topic_details (topic_details) (optional): Optional settings for newly created topics (see below).
- tls_details (tls_details) (optional): Optional TLS connection settings (see below).

format_spec: CSV-specific parameters

- delimiter (string) (optional) (default: ,): Delimiter to use in the CSV output.
- skip_header (boolean) (optional) (default: false): If true, the CSV output will not contain a header row as the first row.

format_spec: Parquet-specific parameters

- version (string) (optional) (default: v2Latest): Parquet format version. Supported values are v1.0, v2.4, v2.6 and v2Latest. v2Latest is an alias for the latest version available in the Parquet library, which is currently v2.6.
- root_repetition (string) (optional) (default: repeated): Repetition option for the root node. Supported values are undefined, required, optional and repeated. Some tools do not support a repeated root node; in that case, set this option to undefined.
- max_row_group_length (integer) (optional) (default: 134217728 (= 128 * 1024 * 1024)): Maximum number of rows in a single row group.

topic_details parameters

- num_partitions (integer) (optional) (default: 1): Number of partitions for newly created topics.
- replication_factor (integer) (optional) (default: 1): Replication factor for newly created topics.

tls_details parameters

- ca_file_path (string) (optional) (default: empty): Path to the CA certificate file used to verify the broker.
- cert_file_path (string) (optional) (default: empty): Path to the client certificate file.
- key_file_path (string) (optional) (default: empty): Path to the client key file.
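As a rough sketch of how these nested options fit together, the spec below writes gzip-compressed CSV without a header row, verifies the broker certificate against a custom CA, and creates topics with three partitions. It assumes tls_details and topic_details sit at the same level as format, as the option list above suggests; the broker address, file paths and partition count are placeholders, not recommended values.

kind: destination
spec:
  name: "kafka"
  path: "cloudquery/kafka"
  registry: "cloudquery"
  version: "v5.5.3"
  write_mode: "append"
  spec:
    brokers: ["<broker-host>:<broker-port>"]
    format: "csv"
    compression: "gzip" # not supported for the parquet format
    format_spec:
      delimiter: ","
      skip_header: true # omit the header row from the CSV output
    enforce_tls_verification: true
    tls_details:
      ca_file_path: "/path/to/ca.pem"       # placeholder path
      cert_file_path: "/path/to/client.pem" # placeholder path
      key_file_path: "/path/to/client.key"  # placeholder path
    topic_details:
      num_partitions: 3
      replication_factor: 1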
Connecting to Confluent Cloud

To connect to Confluent Cloud, create an API key and download it, then use the key and secret as the sasl_username and sasl_password properties described above. The downloaded file will also contain the URL for the bootstrap server. Use that in the brokers property in the configuration:

kind: destination
spec:
  name: "kafka"
  path: "cloudquery/kafka"
  registry: "cloudquery"
  version: "v5.5.3"
  write_mode: "append"
  spec:
    # required - list of brokers to connect to
    brokers: ["${CONFLUENT_BOOTSTRAP_SERVER}"]
    sasl_username: "${CONFLUENT_KEY}"
    sasl_password: "${CONFLUENT_SECRET}"
    format: "json" # options: parquet, json, csv
    format_spec:
      # CSV-specific parameters:
      # delimiter: ","
      # skip_header: false
    # Optional parameters
    # compression: "" # options: gzip
    # verbose: false
    # batch_size: 1000
    topic_details:
      num_partitions: 1
      replication_factor: 1