
Export from AWS to Kafka

CloudQuery is an open-source data integration platform that allows you to export data from any source to any destination.

The CloudQuery AWS plugin allows you to sync data from AWS to any destination, including Kafka. It takes only minutes to get started.

AWS (official, premium)

The AWS Source plugin extracts information from many of the services supported by Amazon Web Services (AWS) and loads it into any supported CloudQuery destination.

Publisher: cloudquery
Latest version: v27.1.0
Type: Source

Kafka (official)

This destination plugin lets you sync data from a CloudQuery source to Kafka in various formats, such as CSV and JSON. Each table is pushed to a separate topic.

Publisher: cloudquery
Repository: github.com
Latest version: v5.0.0
Type: Destination

Linux Setup

Step 1. Install CloudQuery

curl -L https://github.com/cloudquery/cloudquery/releases/download/cli-v5.22.1/cloudquery_linux_amd64 -o cloudquery
chmod a+x cloudquery
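
If you want to run cloudquery without a ./ prefix in the following steps, you can optionally move the binary onto your PATH; /usr/local/bin is just one common choice:

# Optional: put the binary on your PATH so plain `cloudquery` works
sudo mv cloudquery /usr/local/bin/cloudquery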

Step 2. Log in to CloudQuery CLI

cloudquery login
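
For non-interactive environments (CI, servers), CloudQuery also supports authenticating with an API key through an environment variable instead of the interactive login. A sketch, assuming a key generated in the CloudQuery UI:

# Assumption: CLOUDQUERY_API_KEY is read by the CLI as a non-interactive
# alternative to `cloudquery login`
export CLOUDQUERY_API_KEY="<your-api-key>"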

Step 3. Configure AWS source plugin

You can find more information about the configuration in the plugin documentation.

kind: source
spec:
  # Source spec section
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.1.0"
  tables: ["aws_ec2_instances"]
  destinations: ["kafka"]
  # Learn more about the configuration options at https://cql.ink/aws_source
  spec:
    # Optional parameters
    # regions: []
    # accounts: []
    # org: nil
    # concurrency: 50000
    # initialization_concurrency: 4
    # aws_debug: false
    # max_retries: 10
    # max_backoff: 30
    # custom_endpoint_url: ""
    # custom_endpoint_hostname_immutable: nil # required when custom_endpoint_url is set
    # custom_endpoint_partition_id: "" # required when custom_endpoint_url is set
    # custom_endpoint_signing_region: "" # required when custom_endpoint_url is set
    # use_paid_apis: false
    # table_options: nil
    # scheduler: shuffle # options are: dfs, round-robin or shuffle
    # use_nested_table_rate_limiting: false 
    # enable_api_level_tracing: false
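
As a concrete starting point, here is a minimal, illustrative variant of the spec above that pins the sync to specific regions and adds one more table. The parameter names come from the commented options; the region and table values are assumptions for the example:

kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.1.0"
  # Illustrative table list: EC2 instances plus S3 buckets
  tables: ["aws_ec2_instances", "aws_s3_buckets"]
  destinations: ["kafka"]
  spec:
    # Limit API calls to the regions you actually use (example values)
    regions: ["us-east-1", "eu-west-1"]

Note that the plugin picks up AWS credentials through the standard AWS credential chain (environment variables, shared config/credentials files, or an attached role), so no credentials appear in the spec itself.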

Step 4. Configure Kafka destination plugin

You can find more information about the configuration in the plugin documentation.

kind: destination
spec:
  name: "kafka"
  path: "cloudquery/kafka"
  registry: "cloudquery"
  version: "v5.0.0"
  write_mode: "append"
  spec:
    # required - list of brokers to connect to
    brokers: ["<broker-host>:<broker-port>"]
    # optional - if connecting via SASL/PLAIN, the username and password to use. If not set, no authentication will be used.
    sasl_username: "${KAFKA_SASL_USERNAME}"
    sasl_password: "${KAFKA_SASL_PASSWORD}"
    format: "json" # options: parquet, json, csv
    format_spec:
      # CSV-specific parameters:
      # delimiter: ","
      # skip_header: false

    # Optional parameters
    # compression: "" # options: gzip
    # verbose: false
    # batch_size: 1000
    # topic_details:
      # num_partitions: 1
      # replication_factor: 1
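
The spec above references ${KAFKA_SASL_USERNAME} and ${KAFKA_SASL_PASSWORD}; CloudQuery expands ${...} placeholders from environment variables at runtime, so export them before syncing (skip this if your brokers do not use SASL/PLAIN):

# Credentials referenced by the ${...} placeholders in kafka.yml
export KAFKA_SASL_USERNAME="<username>"
export KAFKA_SASL_PASSWORD="<password>"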

Step 5. Run Sync

cloudquery sync aws.yml kafka.yml
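
Once the sync finishes, you can spot-check that records arrived using a plain Kafka console consumer. A sketch, assuming the standard Kafka CLI tools are installed and that each topic is named after its table (so aws_ec2_instances rows land on an aws_ec2_instances topic):

# Read back a few synced records (assumes topic name == table name)
kafka-console-consumer.sh \
  --bootstrap-server <broker-host>:<broker-port> \
  --topic aws_ec2_instances \
  --from-beginning \
  --max-messages 5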