AWS Cost Usage Reports
The CloudQuery AWS CUR (Cost Usage Reports) source plugin reads cost report parquet files and loads them into any supported CloudQuery destination.

Publisher: cloudquery
Latest version: v1.0.3
Type: Source
Overview #
The CloudQuery AWS CUR (Cost Usage Reports) source plugin reads cost reports parquet files and loads them into any supported CloudQuery destination (e.g. PostgreSQL, BigQuery, Snowflake, and more).
Authentication #
The plugin needs to be authenticated with your AWS account(s) in order to read from your S3 bucket. It requires the `s3:GetObject` and `s3:ListBucket` permissions on the bucket and objects you are trying to sync.

There are multiple ways to authenticate with AWS, and the plugin respects the AWS credential provider chain. This means that CloudQuery attempts authentication in the following order of priority:
- The `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables.
- The `credentials` and `config` files in `~/.aws` (the `credentials` file takes priority).
- `aws sso`, which can also be used to authenticate CloudQuery (you can read more about it here).
- IAM roles for AWS compute resources (including EC2 instances, Fargate, and ECS containers).
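A minimal IAM identity policy granting the `s3:GetObject` and `s3:ListBucket` permissions mentioned above could look like the following sketch; the bucket name is a placeholder you would replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR_CUR_BUCKET"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::YOUR_CUR_BUCKET/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while `s3:GetObject` applies to the objects inside it (the `/*` suffix).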
Environment Variables #
CloudQuery can use the credentials from the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables (`AWS_SESSION_TOKEN` can be optional for some accounts).
For information on obtaining credentials, see the AWS guide.

To export the environment variables (on Linux/Mac; similar for Windows):
export AWS_ACCESS_KEY_ID='{Your AWS Access Key ID}'
export AWS_SECRET_ACCESS_KEY='{Your AWS secret access key}'
export AWS_SESSION_TOKEN='{Your AWS session token}'
Shared Configuration files #
The plugin can use credentials from your `credentials` and `config` files in the `.aws` directory in your home folder.
The contents of these files are practically interchangeable, but CloudQuery will prioritize credentials in the `credentials` file.
For information about obtaining credentials, see the AWS guide.

Here are example contents for a `credentials` file:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
You can also specify credentials for a different profile, and instruct CloudQuery to use the credentials from this profile instead of the default one.
For example:
[myprofile]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Then, you can either export the `AWS_PROFILE` environment variable (on Linux/Mac; similar for Windows):

export AWS_PROFILE=myprofile
or configure your desired profile in the `local_profile` field:

local_profile: myprofile
IAM Roles for AWS Compute Resources #
The plugin can use IAM roles for AWS compute resources (including EC2 instances, Fargate and ECS containers).
If you configured your AWS compute resources with IAM, the plugin will use these roles automatically.
For more information on configuring IAM, see the AWS docs here and here.
User Credentials with MFA #
To use IAM user credentials with MFA, run the STS `get-session-token` command with the IAM user's long-term security credentials (access key and secret access key). For more information, see here.
aws sts get-session-token --serial-number <YOUR_MFA_SERIAL_NUMBER> --token-code <YOUR_MFA_TOKEN_CODE> --duration-seconds 3600
Then export the temporary credentials to your environment variables.
export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
export AWS_SESSION_TOKEN=<YOUR_SESSION_TOKEN>
Assuming a Role #
If you need to assume a role (e.g. for cross-account access), configure the `role_to_assume` field in the spec:

role_to_assume:
  arn: arn:aws:iam::123456789012:role/YourRole # required
  session_name: YourSessionName # optional
  external_id: YourExternalId # optional
Configuration #
To configure CloudQuery to extract Cost Usage Reports data, create a `.yml` file in your CloudQuery configuration directory with the following configuration:

kind: source
spec:
  name: awscur
  path: cloudquery/awscur
  version: "v1.0.3"
  tables: ["*"]
  destinations: ["postgresql"]
  spec:
    bucket: "<BUCKET_NAME>"
    region: "<REGION>"
    reports:
      - path: "<PATH_PREFIX_1>"
        name: "my-report-v1"
Based on this configuration, the plugin will sync all parquet files under the defined prefix to a table named `my-report-v1`.

The plugin supports both the legacy and 2.0 cost usage report formats, as long as they are synced as separate reports.
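For instance, a legacy report and a 2.0 report can be synced side by side by listing them as separate entries under `reports`; the paths and names below are placeholders:

```yaml
reports:
  - path: "legacy-cur/my-report"  # prefix containing legacy-format CUR files
    name: "cur_legacy"
  - path: "cur2/my-report"        # prefix containing 2.0-format CUR files
    name: "cur_v2"
```

Each entry becomes its own destination table, so the two formats never mix.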
Incremental Syncing #
The AWS CUR plugin supports incremental syncing. This means that only new files will be fetched from S3 and loaded into your destination. This is done by keeping track of the time of the last sync and comparing it against the last modified date of each file to only fetch new files. This assumes that S3 files are immutable.
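The selection logic described above can be sketched as follows. This is an illustration of the idea, not the plugin's actual implementation; `select_new_files` and the `(key, last_modified)` pair shape are hypothetical:

```python
from datetime import datetime, timezone

def select_new_files(objects, last_sync):
    """Return only the S3 object keys modified after the previous sync.

    objects: iterable of (key, last_modified) pairs, where last_modified
    is a timezone-aware datetime (as S3's ListObjectsV2 reports it).
    last_sync: datetime of the previous successful sync, or None on the
    first run, in which case every file is selected.
    """
    if last_sync is None:
        return [key for key, _ in objects]
    return [key for key, modified in objects if modified > last_sync]

# Example: only the file uploaded after the last sync is selected.
objects = [
    ("report/part-0.parquet", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("report/part-1.parquet", datetime(2024, 3, 1, tzinfo=timezone.utc)),
]
last_sync = datetime(2024, 2, 1, tzinfo=timezone.utc)
print(select_new_files(objects, last_sync))  # ['report/part-1.parquet']
```

Because the comparison relies on each object's last-modified timestamp, rewriting an existing file in place would not re-sync its old rows, which is why the immutability assumption matters.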
To enable this, `backend_options` must be set in the spec (as shown below). This is documented in the Managing Incremental Tables section.

Configuration #
kind: source
spec:
  name: awscur
  path: cloudquery/awscur
  registry: cloudquery
  version: "v1.0.3"
  tables: ["*"]
  destinations: ["postgresql"]
  backend_options:
    table_name: "cq_state_awscur"
    connection: "@@plugins.postgresql.connection"
  spec:
    bucket: "<BUCKET_NAME>"
    region: "<REGION>"
    # Optional parameters
    # path_prefix: ""
    # rows_per_record: 500
    # concurrency: 50
Spec #
This is the (nested) spec used by the AWS CUR source plugin.
- `bucket` (`string`) (required): The name of the S3 bucket that contains the cost usage report files.
- `region` (`string`) (required): The AWS region of the S3 bucket.
- `reports` (`[]Report`) (required): A list of reports to sync.
- `local_profile` (`string`) (optional) (default: will use current credentials): Local profile to use to authenticate this account with. Please note this should be set to the name of the profile. For example, with the following credentials file:

  [default]
  aws_access_key_id=xxxx
  aws_secret_access_key=xxxx

  [user1]
  aws_access_key_id=xxxx
  aws_secret_access_key=xxxx

  `local_profile` should be set to either `default` or `user1`.
- `rows_per_record` (`integer`) (optional) (default: `500`): Number of rows to pack into a single Apache Arrow record to be sent over the wire during sync.
- `concurrency` (`integer`) (optional) (default: `50`): Number of objects to sync in parallel. Negative values mean no limit.
- `role_to_assume` (`RoleToAssume`) (optional): If specified, this role will be assumed for access to the S3 bucket.
Report #
- `path` (`string`) (required): The path prefix that limits the files to sync for this report.
- `name` (`string`) (optional) (default: path prefix value): The table name to use for the report; defaults to the path prefix value. The name will be sanitized to ensure it is a valid table name.
RoleToAssume #
- `arn` (`string`) (required): The ARN of the role to assume.
- `session_name` (`string`) (optional): The session name to use when assuming the role.
- `external_id` (`string`) (optional): The external ID to use when assuming the role.