
Export from Snyk to S3

CloudQuery is an open-source data integration platform that allows you to export data from any source to any destination.

The CloudQuery Snyk plugin allows you to sync data from Snyk to any destination, including S3. It takes only minutes to get started.

Snyk (source plugin)

Official, premium plugin published by cloudquery. Latest version: v6.4.1.

The CloudQuery Snyk plugin pulls configuration out of Snyk resources and loads it into any supported CloudQuery destination.

S3 (destination plugin)

Official plugin published by cloudquery. Latest version: v7.3.2.

This destination plugin lets you sync data from a CloudQuery source to remote S3 storage in various formats such as CSV, JSON, and Parquet.

macOS Setup

Step 1. Install CloudQuery

brew install cloudquery/tap/cloudquery
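
If the install succeeds, the cloudquery binary is on your PATH. As a quick sanity check, you can print the CLI version (assuming your build supports the standard --version flag):

cloudquery --version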

Step 2. Log in to CloudQuery CLI

cloudquery login
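
This opens a browser-based login for interactive use. For headless or CI environments, CloudQuery also accepts an API key through an environment variable instead of the interactive flow (the key below is a placeholder):

export CLOUDQUERY_API_KEY="<YOUR_CLOUDQUERY_API_KEY>"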

Step 3. Configure Snyk source plugin

You can find more information about the configuration in the plugin documentation.

kind: source
# Common source-plugin configuration
spec:
  name: snyk
  path: cloudquery/snyk
  registry: cloudquery
  version: "v6.4.1"
  tables:
    - "snyk_audit_logs"
    - "snyk_container_images"
    - "snyk_custom_base_images"
    - "snyk_issues"
    - "snyk_organizations"
    - "snyk_projects"
    - "snyk_sbom"
  destinations: ["s3"]
  # Snyk specific configuration
  # Learn more about the configuration options at https://cql.ink/snyk_source
  spec:
    # required
    api_key: "${SNYK_API_KEY}"
    # optional, default: all organizations accessible via `api_key`
    organizations:
      - "<YOUR_ORG_1>"
      - "<YOUR_ORG_2>"
    # optional, default: all projects accessible via `api_key`
    projects:
      - "<YOUR_PROJECT_1>"
      - "<YOUR_PROJECT_2>"

Step 4. Configure S3 destination plugin

You can find more information about the configuration in the plugin documentation.

kind: destination
spec:
  name: "s3"
  path: "cloudquery/s3"
  registry: "cloudquery"
  version: "v7.3.2"
  write_mode: "append"
  # Learn more about the configuration options at https://cql.ink/s3_destination
  spec:
    bucket: "bucket_name"
    region: "region-name" # Example: us-east-1
    path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
    format: "parquet" # options: parquet, json, csv
    format_spec:
      # CSV-specific parameters:
      # delimiter: ","
      # skip_header: false

    # Optional parameters
    # compression: "" # options: gzip
    # no_rotate: false
    # athena: false # <- set this to true for Athena compatibility
    # write_empty_objects_for_empty_tables: false # <- set this to true if using with the CloudQuery Compliance policies
    # test_write: true # tests the ability to write to the bucket before processing the data
    # endpoint: "" # Endpoint to use for S3 API calls.
    # endpoint_skip_tls_verify: false # Disable TLS verification if using an untrusted certificate
    # use_path_style: false
    # batch_size: 10000 # 10K entries
    # batch_size_bytes: 52428800 # 50 MiB
    # batch_timeout: 30s # 30 seconds
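
The S3 plugin picks up AWS credentials from the standard AWS SDK credential chain (environment variables, the shared credentials file, or an attached IAM role), so none appear in the spec. A minimal environment-variable setup, assuming an IAM identity with write access to the bucket (values are placeholders):

export AWS_ACCESS_KEY_ID="<YOUR_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<YOUR_SECRET_ACCESS_KEY>"
export AWS_REGION="us-east-1"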

Step 5. Run Sync

cloudquery sync snyk.yml s3.yml
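
Once the sync completes, the exported files land under the path configured in the destination spec. Assuming you have the AWS CLI installed, you can list them to confirm the write (bucket and prefix match the example spec above):

aws s3 ls s3://bucket_name/path/to/files/ --recursive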
