
Sync data from GCP to Splunk

CloudQuery is the simple, fast data integration platform that fetches your data from GCP APIs and loads it into Splunk.

Self-hosted

Start locally, then deploy to a Virtual Machine, Kubernetes, or anywhere else. Full instructions on CLI setup are available in our documentation.

Cloud-hosted

Start syncing in a few clicks. No need to deploy your own infrastructure.

Fast and reliable

CloudQuery’s efficient design means syncs are fast: a sync from GCP to Splunk completes in a fraction of the time taken by comparable tools.

Easy to use, easy to maintain

GCP syncing using CloudQuery is easy to set up and maintain thanks to its simple YAML configuration. Once synced, you can use normal SQL queries to work with your data.

A huge library of supported destinations

Splunk isn’t the only place we can sync your GCP data to. Whatever you need to do with your GCP data, CloudQuery can make it happen. We support a huge range of destinations, customizable transformations for ETL, and we regularly release new plugins.

Extensible and Open Source SDK

Write your own connectors in any language by utilizing the CloudQuery open source SDK powered by Apache Arrow. Get out-of-the-box scheduling, rate-limiting, transformation, documentation and much more.

Step by step guide for how to export data from GCP to Splunk

macOS Setup

Step 1: Install CloudQuery

To install CloudQuery, run the following command in your terminal:

brew install cloudquery/tap/cloudquery

Step 2: Create a Configuration File

Next, run the following command to initialize a sync configuration file for GCP to Splunk:

cloudquery init --source=gcp --destination=splunk

This will generate a config file named gcp_to_splunk.yaml. Follow the instructions in the file to fill out the fields needed to authenticate against your own environment.
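As a rough sketch, the generated file contains a source spec and a destination spec along these lines. The version numbers, table selection, Splunk endpoint, and token handling shown here are illustrative placeholders; your generated file will contain the current plugin versions and inline comments:

```yaml
kind: source
spec:
  name: gcp
  path: cloudquery/gcp
  registry: cloudquery
  version: "v1.0.0"                  # placeholder; use the version cloudquery init writes
  tables: ["gcp_storage_buckets"]    # illustrative table selection
  destinations: ["splunk"]
---
kind: destination
spec:
  name: splunk
  path: cloudquery/splunk
  registry: cloudquery
  version: "v1.0.0"                  # placeholder
  spec:
    endpoint: "https://localhost:8088"  # example Splunk HEC endpoint (assumption)
    token: "${SPLUNK_TOKEN}"            # example: read the token from an environment variable
```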

Step 3: Log in to CloudQuery CLI

Next, log in to the CloudQuery CLI. If you haven't already, you can sign up for a free account as part of this step:

cloudquery login

Step 4: Run a Sync

cloudquery sync gcp_to_splunk.yaml

This will start syncing data from the GCP API to your Splunk instance! 🚀

See the CloudQuery documentation portal for more deployment guides, options and further tips.

FAQs

What is CloudQuery?
CloudQuery is an open-source tool that helps you extract, transform, and load cloud asset data from various sources into databases for security, compliance, and visibility.
Why does CloudQuery require login?
Logging in allows CloudQuery to authenticate your access to the CloudQuery Hub and monitor usage for billing purposes. Data synced with CloudQuery remains private to your environment and is not shared with our servers or any third parties.
What data does CloudQuery have access to?
CloudQuery accesses only the metadata and configurations of the cloud resources you specify, without touching sensitive data or workloads.
How is CloudQuery priced?
CloudQuery offers flexible pricing based on the number of cloud accounts and usage. Visit our pricing page for detailed plans.
Is there a free version of CloudQuery?
Yes, CloudQuery offers a free plan that includes basic features, perfect for smaller teams or personal use. More details can be found on our pricing page.
What credentials are required to sync from GCP to Splunk?
CloudQuery uses Application Default Credentials to sync from GCP to Splunk. The best option depends on the environment in which your sync runs, whether local or cloud-based, and on your own preferences. Full details can be found in the GCP documentation.
Can I restrict the sync to an individual project within my GCP environment?
Yes, if you only want CloudQuery to use a particular project when syncing to Splunk, you can specify this in the project_ids field. If you leave this field blank, CloudQuery will use all projects that it has been granted access to by your chosen authentication method.
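For example, restricting a sync to a single project might look like this in the source spec (the project ID shown is a placeholder):

```yaml
kind: source
spec:
  name: gcp
  spec:
    project_ids: ["my-production-project"]  # placeholder: replace with your GCP project ID
```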
Can I use wildcards when selecting which projects to sync from?
Yes, CloudQuery supports wildcards when searching for projects within your GCP environment. For example, if you want to select all projects which begin with data, you would specify data* in the project_filter field.
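Following that example, a wildcard filter could be written as below. The prefix is illustrative; consult the GCP source plugin documentation for the exact filter syntax supported by your plugin version:

```yaml
kind: source
spec:
  name: gcp
  spec:
    project_filter: "data*"  # matches projects whose IDs begin with "data"
```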
Which write mode can I use in Splunk when syncing data from GCP?
At the moment, CloudQuery supports only the append write mode. This means it will not remove data from your Splunk destination, and it will create new indexes when needed.
How can I ensure that my sync from GCP does not exceed my Splunk API limits?
You can manage the rate at which data is synced from GCP to Splunk using the batch_size, batch_size_bytes, and max_concurrent_requests settings. In general, keep max_concurrent_requests as low as possible while aiming for a ratio of roughly 1,000 between batch_size and max_concurrent_requests; this keeps response times from your Splunk instance reasonable.
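As a sketch, these settings go in the Splunk destination spec. The numbers below are illustrative starting points that follow the ~1,000:1 ratio described above, not tuned recommendations, and the exact placement of the fields may vary by plugin version, so check the Splunk destination documentation:

```yaml
kind: destination
spec:
  name: splunk
  spec:
    batch_size: 1000             # rows per write batch (illustrative)
    batch_size_bytes: 4194304    # illustrative 4 MiB cap per batch
    max_concurrent_requests: 1   # keep low; ~1,000:1 ratio of batch_size to this value
```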
What is the Splunk Destination integration for CloudQuery?
The Splunk Destination integration allows you to send cloud asset data collected by CloudQuery to Splunk for further analysis, enabling you to monitor, visualize, and query cloud infrastructure metrics in real-time.

Legal

© 2024 CloudQuery, Inc. All rights reserved.
