Sync data from End of Life to Splunk
CloudQuery is a simple, fast data integration platform that can fetch your data from the End of Life API and load it into Splunk.

Enterprise Ready
Non-invasive account access for better security and efficiency.
Customize & Extend
Import data with CloudQuery SDKs and build your own plugins.
Query Assets with SQL
Query cloud assets and security data with a simple SQL-based UI.
Step-by-step guide: how to export data from End of Life to Splunk
Table of Contents
macOS Setup
Step 1: Install CloudQuery
To install CloudQuery, run the following command in your terminal:
brew install cloudquery/tap/cloudquery
Step 2: Create a Configuration File
Next, run the following command to initialize a sync configuration file for End of Life to Splunk:
cloudquery init --source=endoflife --destination=splunk
This will generate a config file named endoflife_to_splunk.yaml. Follow the instructions to fill out the necessary fields to authenticate against your own environment.
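The generated file contains a source spec and a destination spec. As a rough sketch, assuming the standard CloudQuery config layout, endoflife_to_splunk.yaml will look something like the following (the version strings and Splunk connection fields shown here are placeholders; the file produced by cloudquery init is authoritative):

```yaml
kind: source
spec:
  name: endoflife
  path: cloudquery/endoflife
  registry: cloudquery
  version: "VERSION_SOURCE_ENDOFLIFE"  # filled in by cloudquery init
  tables: ["*"]
  destinations: ["splunk"]
---
kind: destination
spec:
  name: splunk
  path: cloudquery/splunk
  registry: cloudquery
  version: "VERSION_DESTINATION_SPLUNK"  # filled in by cloudquery init
  spec:
    # Connection and authentication details for your Splunk instance go here;
    # follow the comments in the generated file for the exact field names.
```

The `tables: ["*"]` line syncs every table the source plugin offers; you can narrow it to a list of specific table names to reduce sync time.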
Step 3: Log in to CloudQuery CLI
Next, log in to the CloudQuery CLI. If you haven't already, you can sign up for a free account as part of this step:
cloudquery login
Step 4: Run a Sync
cloudquery sync endoflife_to_splunk.yaml
This will start syncing data from the End of Life API to your Splunk instance! 🚀
See the CloudQuery documentation portal for more deployment guides, options, and further tips.
FAQs
What is CloudQuery?
Why does CloudQuery require login?
What data does CloudQuery have access to?
How is CloudQuery priced?
Is there a free version of CloudQuery?
Which write mode can I use in Splunk when syncing data from End of Life?
The Splunk destination plugin supports the append write mode. This means it will not remove data from your Splunk destination and will create new indexes when needed.
How can I ensure that my sync from End of Life does not exceed my Splunk API limits?
The Splunk destination plugin exposes the batch_size, batch_size_bytes, and max_concurrent_requests settings. In general, you should keep max_concurrent_requests as low as possible while aiming for a ratio of roughly 1,000 between batch_size and max_concurrent_requests; this will ensure that the response times from your Splunk instance remain reasonable.
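As a sketch, a Splunk destination spec tuned along those lines might look like this. The values below are illustrative starting points only, and the exact nesting of these settings can vary by plugin version, so check the plugin documentation against your generated config:

```yaml
kind: destination
spec:
  name: splunk
  path: cloudquery/splunk
  version: "VERSION_DESTINATION_SPLUNK"  # use the version from your generated config
  spec:
    batch_size: 1000            # rows per request
    batch_size_bytes: 4194304   # 4 MiB per request; illustrative value
    max_concurrent_requests: 1  # keep low; batch_size / max_concurrent_requests ≈ 1,000
```

With max_concurrent_requests at 1 and batch_size at 1,000, the ratio between the two is exactly the recommended 1,000; if you raise concurrency, raise batch_size proportionally.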