AWS

Official, Premium

The AWS Source plugin extracts information from many of the supported services by Amazon Web Services (AWS) and loads it into any supported CloudQuery destination.

Publisher: cloudquery
Latest version: v27.23.1
Type: Source
Price per 1M rows: Starting from $15
Monthly free quota: 1M rows

Set up process #

1. Download the CLI and log in (see installation options):

brew install cloudquery/tap/cloudquery

2. Create source and destination configs (see the Configuration section below).

3. Run the sync:

cloudquery sync aws.yml postgresql.yml

Overview #

The AWS Source plugin extracts information from many of the supported services by Amazon Web Services (AWS) and loads it into any supported CloudQuery destination (e.g. PostgreSQL, BigQuery, Snowflake, and more).

Visualize your infrastructure with dashboards

Use this plugin to build your own asset inventory or manage your infrastructure compliance with policies and best practices. With CloudQuery add-ons, you can build your own dashboards in no time.


Authentication #

The plugin needs to be authenticated with your account(s) in order to sync information from your cloud setup.
The plugin requires only read permissions (it will never make any changes to your cloud setup), so, following the principle of least privilege, it's recommended to grant it read-only permissions.
There are multiple ways to authenticate with AWS, and the plugin respects the AWS credential provider chain. This means the plugin attempts to authenticate in the following order of priority:
  • The AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN environment variables
  • The credentials and config files in ~/.aws (the credentials file takes priority)
  • You can also use aws sso to authenticate the plugin - you can read more about it here
  • IAM roles for AWS compute resources (including EC2 instances, Fargate and ECS containers)
You can read more about AWS authentication here and here.
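For example, with AWS IAM Identity Center (SSO) already configured, a minimal authentication flow might look like the following sketch. The profile name my-sso-profile is a hypothetical profile defined in ~/.aws/config:

```shell
# Log in via AWS IAM Identity Center (SSO); this opens a browser for authentication.
# "my-sso-profile" is a hypothetical profile name.
aws sso login --profile my-sso-profile

# Point the AWS SDK credential chain (and therefore the plugin) at that profile.
export AWS_PROFILE=my-sso-profile

# Verify that credentials resolve before running a sync.
aws sts get-caller-identity
```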

Environment Variables #

The AWS plugin can use the credentials from the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables (AWS_SESSION_TOKEN may be optional for some accounts). For information on obtaining credentials, see the AWS guide.
To export the environment variables (on Linux/macOS; similar for Windows):
export AWS_ACCESS_KEY_ID={Your AWS Access Key ID}
export AWS_SECRET_ACCESS_KEY={Your AWS secret access key}
export AWS_SESSION_TOKEN={Your AWS session token}

Shared Configuration files #

The plugin can use credentials from the credentials and config files in the .aws directory in your home folder. The contents of these files are practically interchangeable, but the AWS plugin prioritizes credentials in the credentials file.
For information about obtaining credentials, see the AWS guide.
Here are example contents for a credentials file:
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
You can also specify credentials for a different profile, and instruct the plugin to use the credentials from this profile instead of the default one.
For example:
[myprofile]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Then, you can either export the AWS_PROFILE environment variable (On Linux/Mac, similar for Windows):
export AWS_PROFILE=myprofile
or, configure your desired profile in the local_profile field:
accounts:
  - id: <account_alias>
    local_profile: myprofile

IAM Roles for AWS Compute Resources #

The plugin can use IAM roles for AWS compute resources (including EC2 instances, Fargate and ECS containers). If you configured your AWS compute resources with IAM, the plugin will use these roles automatically. For more information on configuring IAM, see the AWS docs here and here.

User Credentials with MFA #

To use IAM User credentials with MFA, the aws sts get-session-token command may be used with the IAM User's long-term security credentials (Access Key ID and Secret Access Key). For more information, see here.
aws sts get-session-token --serial-number <YOUR_MFA_SERIAL_NUMBER> --token-code <YOUR_MFA_TOKEN_CODE> --duration-seconds 3600
Then export the temporary credentials to your environment variables.
export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
export AWS_SESSION_TOKEN=<YOUR_SESSION_TOKEN>


Configuration #

Examples #

Basic example
kind: source
spec:
  # Source spec section
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"
  tables: ["aws_ec2_instances"]
  destinations: ["postgresql"]
  # Learn more about the configuration options at https://cql.ink/aws_source
  spec:
    # Optional parameters
    # regions: []
    # accounts: []
    # org: nil
    # concurrency: 50000
    # initialization_concurrency: 4
    # aws_debug: false
    # max_retries: 10
    # max_backoff: 30
    # custom_endpoint_url: ""
    # custom_endpoint_hostname_immutable: nil # required when custom_endpoint_url is set
    # custom_endpoint_partition_id: "" # required when custom_endpoint_url is set
    # custom_endpoint_signing_region: "" # required when custom_endpoint_url is set
    # use_paid_apis: false
    # table_options: nil
    # scheduler: shuffle # options are: dfs, round-robin or shuffle
    # use_nested_table_rate_limiting: false 
    # enable_api_level_tracing: false
AWS Organization Example
The AWS plugin supports discovery of AWS accounts via AWS Organizations. This means that as accounts are added to or removed from your organization, the plugin handles them without any configuration changes.
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"
  tables: ['aws_s3_buckets']
  destinations: ["postgresql"]
  spec:
    aws_debug: false
    org:
      admin_account:
        local_profile: "<NAMED_PROFILE>"
      member_role_name: OrganizationAccountAccessRole
    regions:
      - '*'
    # Optional parameters
    # regions: []
    # accounts: []
    # org: nil
    # concurrency: 50000
    # initialization_concurrency: 4
    # aws_debug: false
    # max_retries: 10
    # max_backoff: 30
    # custom_endpoint_url: ""
    # custom_endpoint_hostname_immutable: nil # required when custom_endpoint_url is set
    # custom_endpoint_partition_id: "" # required when custom_endpoint_url is set
    # custom_endpoint_signing_region: "" # required when custom_endpoint_url is set
    # use_paid_apis: false
    # table_options: nil
    # scheduler: shuffle # options are: dfs, round-robin or shuffle

Configuration spec #

This is the (nested) spec used by the AWS source plugin.
  • regions ([]string) (default: []. Will use all enabled regions)
    Regions to use.
  • accounts ([]Account) (default: current account)
    List of all accounts to fetch information from.
  • org (Org) (default: not used)
    In AWS organization mode, the plugin will source all accounts underneath automatically.
  • concurrency (integer) (default: 50000)
    The best effort maximum number of Go routines to use. Lower this number to reduce memory usage.
  • initialization_concurrency (integer) (default: 4)
    During initialization the AWS source plugin fetches information about each account and region. This setting controls how many accounts can be initialized concurrently.
    Only configurations with many accounts (either hardcoded or discovered via Organizations) should require modifying this setting, to either lower it to avoid rate limit errors, or to increase it to speed up the initialization process.
  • scheduler (string) (default: shuffle):
    The scheduler to use when determining the priority of resources to sync. Currently, the only supported values are dfs (depth-first search), round-robin and shuffle.
    For more information about this, see performance tuning.
  • aws_debug (boolean) (default: false)
    If true, will log AWS debug logs, including retries and other request/response metadata.
  • max_retries (integer) (default: 10)
    Defines the maximum number of times an API request will be retried.
  • max_backoff (integer in seconds) (default: 30 meaning 30s)
    Defines the maximum duration (in seconds) between retry attempts.
  • use_nested_table_rate_limiting (boolean) (default: false)
    If true, the plugin will limit the number of nested tables that are synced concurrently.
  • enable_api_level_tracing (boolean) (default: false)
    If true, the plugin will extend table level traces to include API requests to AWS Services
  • custom_endpoint_url (string) (default: not used)
    The base URL endpoint the SDK API clients will use to make API calls to. The SDK will suffix URI path and query elements to this endpoint
  • custom_endpoint_hostname_immutable (boolean) (default: not used)
    Specifies if the endpoint's hostname can be modified by the SDK's API client. When using something like LocalStack make sure to set it equal to true.
  • custom_endpoint_partition_id (string) (default: not used)
    The AWS partition the endpoint belongs to.
  • custom_endpoint_signing_region (string) (default: not used)
    The region that should be used for signing the request to the endpoint.
  • use_paid_apis (boolean) (default: false)
When set to true, the plugin will sync data from APIs that incur a fee.
Tables that require this setting to be true include (but are not limited to):
    • aws_costexplorer*
    • aws_cloudwatch_metric*
  • skip_specific_apis (map) (default: not used)
This feature enables users to skip specific API calls wherever they are used. In cases where the skipped API call enriches data from a List call, the plugin will persist the data from the List call and skip the enriching call.
    The format of the skip_specific_apis object is as follows:
    skip_specific_apis:
      <aws_service>:
        <api_action>: true
  An example of the skip_specific_apis object is as follows:
spec:
  regions: ["us-east-1","us-east-2"]
  skip_specific_apis:
    lambda:
      GetRuntimeManagementConfig: true
      GetFunction: true
The following Services and API Actions are supported:
  lambda:
    GetRuntimeManagementConfig
    GetFunction
    GetFunctionCodeSigningConfig
    GetFunctionConcurrency
  kms:
    DescribeKey
    GetKeyRotationStatus
    ListResourceTags
  ssm:
    ListTagsForResource
  glacier:
    ListTagsForVault
  wafv2:
    ListResourcesForWebACL
  • table_options (map) (default: not used)
Table options is a premium feature. Even if some tables are free, syncing data for them (and their relations) using table options counts towards paid usage.
    Please refer to the Table Options documentation for more information.
  • event_based_sync ([]Event-based sync) (default: empty)
    Event-based sync is a premium feature. Even if some tables are free, syncing data for them (and their relations) using event-based sync counts towards paid usage.

Account #

This is used to specify one or more accounts to extract information from.
  • account_name (string) (optional) (default: empty)
    Account name. Will be used as an alias in the source plugin and in the logs.
  • local_profile (string) (default: will use current credentials)
    Local profile to use to authenticate this account with. Please note this should be set to the name of the profile.
    For example, with the following credentials file:
    [default]
    aws_access_key_id=xxxx
    aws_secret_access_key=xxxx
    
    [user1]
    aws_access_key_id=xxxx
    aws_secret_access_key=xxxx
    local_profile should be set to either default or user1.
  • role_arn (string)
    If specified will use this to assume role.
  • role_session_name (string)
    If specified will use this session name when assume role to role_arn.
  • external_id (string)
    If specified will use this when assuming role to role_arn.
  • default_region (string) (default: us-east-1)
    If specified, this region will be used as the default region for the account.
  • regions ([]string)
    Regions to use for this account. Defaults to global regions setting.
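Putting these fields together, a hypothetical two-account configuration might look like the following sketch (the account names, profile name, role ARN, and regions are placeholders):

```yaml
accounts:
  - account_name: production
    local_profile: prod-profile
    regions:
      - us-east-1
      - eu-west-1
  - account_name: staging
    role_arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
    role_session_name: cloudquery-sync
    default_region: us-east-2
```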

org #

  • admin_account (Account)
    Configuration for how to grab credentials from an Admin account.
  • member_trusted_principal (Account)
Configuration for how to specify the principal to use in order to assume a role in the member accounts.
  • member_role_name (string) (required)
    Role name that the plugin should use to assume a role in the member account from the admin account.
    Note: This is not a full ARN, it is just the name.
  • member_role_session_name (string)
    Overrides the default session name.
  • member_external_id (string)
    Specify an external ID for use in the trust policy.
  • member_regions ([]string)
    Limit fetching resources within this specific account to only these regions. This will override any regions specified in the provider block. You can specify all regions by using the * character as the only argument in the array.
  • organization_units ([]string)
    List of Organizational Units that AWS plugin should use to source accounts from. If you specify an OU, the plugin will also traverse nested OUs.
  • skip_organization_units ([]string)
    List of Organizational Units to skip. This is useful in conjunction with organization_units if there are child OUs that should be ignored.
  • skip_member_accounts ([]string)
    List of OU member accounts to skip. This is useful if there are accounts under the selected OUs that should be ignored.
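As an illustration, a hypothetical org block that limits discovery to specific Organizational Units while skipping a sandbox OU could look like this (the OU IDs and role name are placeholders):

```yaml
org:
  member_role_name: cloudquery-ro
  organization_units:
    - ou-abc1-11111111
    - ou-abc1-22222222
  skip_organization_units:
    - ou-abc1-33333333
  member_regions:
    - us-east-1
```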

Event-based sync #

Event-based sync is a premium feature. Even if some tables are free, syncing data for them (and their relations) using event-based sync counts towards paid usage.
  • kinesis_stream_arn (string) (required if sqs_queue_url is not provided)
    ARN for the Kinesis stream that will hold all the CloudTrail records.
  • sqs_queue_url (string) (required if kinesis_stream_arn is not provided)
    URL for the SQS queue that will hold the S3 Bucket Notifications.
  • account (Account)
    Configuration for the credentials that will be used to grab records from the specified Kinesis Stream. If this is not specified the default credentials will be used.
  • start_time (string for RFC 3339 timestamp) (default: the time at which the sync began)
    Defines the place in the stream where record processing should begin. The value should follow the RFC 3339 format, for example: 2023-09-04T19:24:14Z.
  • full_sync (boolean) (default: true)
By default, the AWS plugin will do a full sync of the specified tables before starting to consume the events in the stream. This parameter enables users to skip the full pull-based sync and go straight to the event-based sync.
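For example, to start consuming from a fixed point in the stream and skip the initial full sync, a spec fragment could look like the following sketch (the ARN and timestamp are placeholders, mirroring the shape used in the examples below):

```yaml
spec:
  event_based_sync:
    kinesis_stream_arn: arn:aws:kinesis:<REGION>:<ACCOUNT_ID>:stream/<STREAM_NAME>
    start_time: "2023-09-04T19:24:14Z"
    full_sync: false
```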

Skip Tables #

AWS has tables that may contain many resources, nested information, and AWS-provided data. Such tables can make syncs slow, and their data may not be needed. We recommend syncing only the tables you need. If * is necessary for tables, below is a reference configuration in which certain tables are skipped via skip_tables.
kind: source
spec:
  # Source spec section
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"
  tables: ["*"]
  skip_tables:
    - aws_cloudtrail_events
    - aws_docdb_cluster_parameter_groups
    - aws_docdb_engine_versions
    - aws_ec2_instance_types
    - aws_ec2_vpc_endpoint_services
    - aws_elasticache_engine_versions
    - aws_elasticache_parameter_groups
    - aws_elasticache_reserved_cache_nodes_offerings
    - aws_elasticache_service_updates
    - aws_iam_group_last_accessed_details
    - aws_iam_policy_last_accessed_details
    - aws_iam_role_last_accessed_details
    - aws_iam_user_last_accessed_details
    - aws_neptune_cluster_parameter_groups
    - aws_neptune_db_parameter_groups
    - aws_rds_cluster_parameter_groups
    - aws_rds_db_parameter_groups
    - aws_rds_engine_versions
    - aws_servicequotas_services
    - aws_stepfunctions_map_run_executions
    - aws_stepfunctions_map_runs
  destinations: ["postgresql"]
  spec:
    # AWS Spec section described below


Event-based sync #

Event-based sync is a long-running sync that syncs only the resources that need to be updated, based on incoming events. By configuring the AWS plugin to listen to the supported CloudTrail events, the plugin will trigger selective syncs to update only the resources that had a configuration change.

How it works #

AWS CloudTrail enables users to get an audit log of events occurring within their account.
There are two ways that users can consume CloudTrail Events:
  1. The fastest and lowest latency is subscribing to a stream of AWS CloudTrail events in a Kinesis Data stream.
  2. Alternatively, if you are already using CloudTrail and persisting the logs in an S3 bucket, you can configure CloudQuery to grab the data from the S3 bucket by using Event Notifications to subscribe to events that indicate a new batch of logs has been written.

Supported Services and Events #

Each table in the supported list is a top-level table. When an event is received for a table, all child tables are re-synced too by default. To skip some child tables, you can use skip_tables.
EC2

Service | Event | Plugin table
ec2.amazonaws.com | AssociateRouteTable | aws_ec2_route_tables
ec2.amazonaws.com | AttachInternetGateway | aws_ec2_internet_gateways
ec2.amazonaws.com | AuthorizeSecurityGroupEgress | aws_ec2_security_groups
ec2.amazonaws.com | AuthorizeSecurityGroupIngress | aws_ec2_security_groups
ec2.amazonaws.com | CreateImage | aws_ec2_images
ec2.amazonaws.com | CreateInternetGateway | aws_ec2_internet_gateways
ec2.amazonaws.com | CreateNetworkInterface | aws_ec2_network_interfaces
ec2.amazonaws.com | CreateSecurityGroup | aws_ec2_security_groups
ec2.amazonaws.com | CreateSubnet | aws_ec2_subnets
ec2.amazonaws.com | CreateTags | aws_ec2_instances
ec2.amazonaws.com | CreateVpc | aws_ec2_vpcs
ec2.amazonaws.com | DeleteTags | aws_ec2_instances
ec2.amazonaws.com | DeleteInternetGateway | aws_ec2_internet_gateways
ec2.amazonaws.com | DeleteNetworkInterface | aws_ec2_network_interfaces
ec2.amazonaws.com | DeleteRouteTable | aws_ec2_route_tables
ec2.amazonaws.com | DeleteSubnet | aws_ec2_subnets
ec2.amazonaws.com | DeleteVpc | aws_ec2_vpcs
ec2.amazonaws.com | DeregisterImage | aws_ec2_images
ec2.amazonaws.com | DetachInternetGateway | aws_ec2_internet_gateways
ec2.amazonaws.com | ModifySubnetAttribute | aws_ec2_subnets
ec2.amazonaws.com | RevokeSecurityGroupIngress | aws_ec2_security_groups
ec2.amazonaws.com | RevokeSecurityGroupEgress | aws_ec2_security_groups
ec2.amazonaws.com | RunInstances | aws_ec2_instances
ec2.amazonaws.com | TerminateInstances | aws_ec2_instances
IAM

Service | Event | Plugin table
iam.amazonaws.com | CreateGroup | aws_iam_groups
iam.amazonaws.com | CreateRole | aws_iam_roles
iam.amazonaws.com | CreateUser | aws_iam_users
iam.amazonaws.com | DeleteGroup | aws_iam_groups
iam.amazonaws.com | DeleteRole | aws_iam_roles
iam.amazonaws.com | DeleteUser | aws_iam_users
iam.amazonaws.com | TagRole | aws_iam_roles
iam.amazonaws.com | TagUser | aws_iam_users
iam.amazonaws.com | UntagRole | aws_iam_roles
iam.amazonaws.com | UntagUser | aws_iam_users
iam.amazonaws.com | UpdateGroup | aws_iam_groups
iam.amazonaws.com | UpdateRole | aws_iam_roles
iam.amazonaws.com | UpdateRoleDescription | aws_iam_roles
iam.amazonaws.com | UpdateUser | aws_iam_users
RDS

Service | Event | Plugin table
rds.amazonaws.com | CreateDBCluster | aws_rds_clusters
rds.amazonaws.com | CreateDBInstance | aws_rds_instances
rds.amazonaws.com | ModifyDBCluster | aws_rds_clusters
rds.amazonaws.com | ModifyDBInstance | aws_rds_instances
rds.amazonaws.com | DeleteDBCluster | aws_rds_clusters
rds.amazonaws.com | DeleteDBInstance | aws_rds_instances
Route53

Service | Event | Plugin table
route53.amazonaws.com | ChangeTagsForResource | aws_route53_hosted_zones
route53domains.amazonaws.com | RegisterDomain | aws_route53_domains
route53domains.amazonaws.com | TransferDomain | aws_route53_domains
route53domains.amazonaws.com | PushDomain | aws_route53_domains
route53domains.amazonaws.com | RenewDomain | aws_route53_domains
route53domains.amazonaws.com | EnableDomainTransferLock | aws_route53_domains
route53domains.amazonaws.com | DisableDomainTransferLock | aws_route53_domains
route53domains.amazonaws.com | EnableDomainAutoRenew | aws_route53_domains
route53domains.amazonaws.com | DisableDomainAutoRenew | aws_route53_domains
route53domains.amazonaws.com | UpdateTagsForDomain | aws_route53_domains
route53domains.amazonaws.com | DeleteTagsForDomain | aws_route53_domains
route53domains.amazonaws.com | UpdateDomainContact | aws_route53_domains
route53domains.amazonaws.com | UpdateDomainContactPrivacy | aws_route53_domains
route53domains.amazonaws.com | AssociateDelegationSignerToDomain | aws_route53_domains
route53domains.amazonaws.com | DisassociateDelegationSignerFromDomain | aws_route53_domains
route53.amazonaws.com | CreateHostedZone | aws_route53_hosted_zones
route53domains.amazonaws.com | DeleteDomain | aws_route53_domains
route53.amazonaws.com | DeleteHostedZone | aws_route53_hosted_zones
route53.amazonaws.com | DeleteQueryLoggingConfig | aws_route53_hosted_zone_query_logging_configs
route53.amazonaws.com | DeleteTrafficPolicyInstance | aws_route53_hosted_zone_traffic_policy_instances
route53.amazonaws.com | EnableHostedZoneDNSSEC | aws_route53_hosted_zones
route53.amazonaws.com | DisableHostedZoneDNSSEC | aws_route53_hosted_zones
route53.amazonaws.com | AssociateVPCWithHostedZone | aws_route53_hosted_zones
route53.amazonaws.com | DisassociateVPCFromHostedZone | aws_route53_hosted_zones
route53.amazonaws.com | CreateVPCAssociationAuthorization | aws_route53_hosted_zones
route53.amazonaws.com | DeleteVPCAssociationAuthorization | aws_route53_hosted_zones
route53.amazonaws.com | UpdateHostedZoneComment | aws_route53_hosted_zones
route53.amazonaws.com | CreateQueryLoggingConfig | aws_route53_hosted_zones
route53.amazonaws.com | CreateTrafficPolicyInstance | aws_route53_hosted_zones
route53.amazonaws.com | ChangeResourceRecordSets | aws_route53_hosted_zones

Configuration Using Kinesis Data Stream #

  1. Configure an AWS CloudTrail Trail to send management events to a Kinesis Data Stream via CloudWatch Logs. The most straightforward way to do this is to use the CloudFormation template provided by CloudQuery.
    The CloudFormation template will deploy the following architecture:
    The template contents can be found in CloudFormation Template contents section below.
    aws cloudformation deploy --template-file ./streaming-deployment.yml --stack-name <STACK-NAME> --capabilities CAPABILITY_IAM --disable-rollback --region <DESIRED-REGION>
  2. Copy the ARN of the Kinesis stream. If you used the CloudFormation template you can run the following command:
    aws cloudformation describe-stacks --stack-name <STACK-NAME> --query "Stacks[].Outputs" --region <DESIRED-REGION>
  3. Define a config.yml file like the one below
    kind: source
    spec:
      name: aws
      path: cloudquery/aws
      registry: cloudquery
      version: "v27.23.1"
      tables:
        - aws_ec2_instances
        - aws_ec2_internet_gateways
        - aws_ec2_security_groups
        - aws_ec2_subnets
        - aws_ec2_vpcs
        - aws_ecs_cluster_tasks
        - aws_iam_groups
        - aws_iam_roles
        - aws_iam_users
        - aws_rds_instances
      destinations: ["postgresql"]
      spec:
        event_based_sync:
          # account:
          #  local_profile: "<ROLE-NAME>"
          kinesis_stream_arn: <OUTPUT-FROM-CLOUDFORMATION-STACK>
  4. Sync the data!
    cloudquery sync config.yml
This will start a long-lived process that will only stop when there is an error, or you stop it.
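If you used the CloudFormation template, the stream ARN from step 2 can also be extracted directly with a JMESPath query; the output key KinesisStreamArn matches the template's Outputs section, and the stack name and region are placeholders:

```shell
# Print only the Kinesis stream ARN from the stack outputs.
aws cloudformation describe-stacks \
  --stack-name <STACK-NAME> \
  --region <DESIRED-REGION> \
  --query "Stacks[0].Outputs[?OutputKey=='KinesisStreamArn'].OutputValue" \
  --output text
```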
Limitations
  • Kinesis Stream can only have a single shard (this is a limitation that we expect to remove in the future).
  • Stale records will only be deleted if the plugin stops consuming the Kinesis Stream, which only can occur if there is an error.

Configuration Using S3 Bucket Notifications #

  1. Create a new SQS queue:
    aws sqs create-queue --queue-name <REPLACE_WITH_QUEUE_NAME>
  2. Create a file defining the permissions for the SQS queue and save it as sqs-policy.json:
    {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                "Principal": {
                    "Service": "s3.amazonaws.com"
                },
                "Action": [
                    "SQS:SendMessage"
                ],
                "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<REPLACE_WITH_QUEUE_NAME>",
                "Condition": {
                    "ArnLike": {
                        "aws:SourceArn": "arn:aws:s3:*:*:<REPLACE_WITH_BUCKET_NAME>"
                    },
                    "StringEquals": {
                        "aws:SourceAccount": "<REPLACE_WITH_BUCKET_OWNER_ACCOUNT_ID>"
                    }
                }
            }
        ]
    }
    and then attach it by running the following command:
    aws sqs set-queue-attributes --queue-url <queue_url> --policy file://sqs-policy.json
  3. Create a file defining the integration between the S3 bucket and the SQS queue and save it as s3-notification.json:
    {
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<REPLACE_WITH_QUEUE_NAME>",
                "Events": [
                    "s3:ObjectCreated:*"
                ]
            }
        ]
    }
    and then create it by running the following command:
    aws s3api put-bucket-notification-configuration --bucket <REPLACE_WITH_BUCKET_NAME> --notification-configuration file://s3-notification.json
  4. Define a config.yml file like the one below
    kind: source
    spec:
      name: aws
      path: cloudquery/aws
      registry: cloudquery
      version: "v27.23.1"
      tables:
        - aws_ec2_instances
        - aws_ec2_internet_gateways
        - aws_ec2_security_groups
        - aws_ec2_subnets
        - aws_ec2_vpcs
        - aws_ecs_cluster_tasks
        - aws_iam_groups
        - aws_iam_roles
        - aws_iam_users
        - aws_rds_instances
      destinations: ["postgresql"]
      spec:
        event_based_sync:
          # account:
          #  local_profile: "<ROLE-NAME>"
          sqs_queue_url: <OUTPUT-FROM-CREATE-QUEUE-COMMAND>
  5. Sync the data!
    cloudquery sync config.yml
This will start a long-lived process that will only stop when there is an error, or you stop it.
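The sqs_queue_url value is the QueueUrl returned by the create-queue call in step 1. If you need to look it up again later, something like the following sketch works (the queue name placeholder matches step 1):

```shell
# Look up the queue URL by name.
aws sqs get-queue-url --queue-name <REPLACE_WITH_QUEUE_NAME>

# The queue ARN (used in the SQS policy and S3 notification config)
# can be read from the queue's attributes.
aws sqs get-queue-attributes \
  --queue-url <queue_url> \
  --attribute-names QueueArn
```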
Limitations
  • This method is not the fastest way to consume CloudTrail events, as the data is buffered before being sent to S3, and bucket notifications can add further delay.
  • Stale records will only be deleted if the plugin stops consuming the events from SQS, which only can occur if there is an error.


Event-based sync CloudFormation template #

This CloudFormation template will create a Kinesis Data Stream and a CloudWatch Logs group that will be used to pipe CloudTrail events to CloudQuery. It is intended to be a reference, but users should amend it to fit their needs.

Template contents: #

AWSTemplateFormatVersion: 2010-09-09
Description: Configures Cloudtrail Events to be piped to a Kinesis Data stream via CloudWatch Logs.

Parameters:

  KinesisMessageDuration:
    Type: Number
    Description: Number of hours Kinesis will persist a record before it is purged.
    Default: 24
  ExistingS3BucketName:
    Type: String
    Description: Name of the S3 Bucket that CloudTrail will use to store logs.
    Default: ""

Conditions:
  CreateS3Bucket: !Equals [!Ref ExistingS3BucketName, ""]

Resources:
  # Stream that CQ will poll for changes
  CQSyncingKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      RetentionPeriodHours: !Ref KinesisMessageDuration

  # IAM Role for allowing CloudWatch Log to write to Kinesis Stream
  CloudWatchLogsToKinesisRole:
    Type: AWS::IAM::Role
    Properties:
      Policies:
      - PolicyName: CloudWatchLogsToKinesisPolicy
        PolicyDocument: 
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - kinesis:PutRecord
              Resource: !GetAtt CQSyncingKinesisStream.Arn
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: logs.amazonaws.com
            Action:
              - sts:AssumeRole


  CloudTrailS3Bucket:
    Type: AWS::S3::Bucket
    Condition: CreateS3Bucket
    Properties:
      LifecycleConfiguration:
        Rules:
          - ExpirationInDays: 30
            Status: Enabled

  CloudTrailS3BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !If [CreateS3Bucket,!Ref CloudTrailS3Bucket, !Ref ExistingS3BucketName]
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AWSCloudTrailAclCheck
            Effect: Allow
            Principal:
              Service: cloudtrail.amazonaws.com
            Action: s3:GetBucketAcl
            Resource: !Sub
                - arn:${AWS::Partition}:s3:::${Bucket}
                - { Bucket: !If [CreateS3Bucket,!Ref CloudTrailS3Bucket, !Ref ExistingS3BucketName] }
            Condition:
              StringEquals:
                'aws:SourceAccount': !Sub ${AWS::AccountId}
          - Sid: AWSCloudTrailWrite
            Effect: Allow
            Principal:
              Service: cloudtrail.amazonaws.com
            Action: s3:PutObject
            Resource: !Sub
                - arn:${AWS::Partition}:s3:::${Bucket}/*
                - { Bucket: !If [CreateS3Bucket,!Ref CloudTrailS3Bucket, !Ref ExistingS3BucketName] }
            Condition:
              StringEquals:
                's3:x-amz-acl': bucket-owner-full-control
                'aws:SourceAccount': !Sub ${AWS::AccountId}

  CloudWatchLogsGroup:
    Type: AWS::Logs::LogGroup
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
    Properties:
      LogGroupName: "CloudTrailLogGroup"
      RetentionInDays: 1

  # Role for allowing CloudTrail to write to CloudWatch Logs
  CloudWatchRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Sid: AssumeRole
          Effect: Allow
          Principal:
            Service: 'cloudtrail.amazonaws.com'
          Action: 'sts:AssumeRole'
      Policies:
      - PolicyName: 'cloudtrail-policy'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action: 'logs:CreateLogStream'
            Resource: !GetAtt CloudWatchLogsGroup.Arn
          - Effect: Allow
            Action: 'logs:PutLogEvents'
            Resource: !GetAtt CloudWatchLogsGroup.Arn
  CloudTrailTrail:
    Type: AWS::CloudTrail::Trail
    DependsOn:
         - CloudTrailS3BucketPolicy
    Properties:
      CloudWatchLogsLogGroupArn: !GetAtt CloudWatchLogsGroup.Arn
      CloudWatchLogsRoleArn: !GetAtt CloudWatchRole.Arn
      EventSelectors:
        - IncludeManagementEvents: True
          ReadWriteType: WriteOnly
      IncludeGlobalServiceEvents: True
      IsLogging: True
      IsMultiRegionTrail: True
      S3BucketName: !If [CreateS3Bucket,!Ref CloudTrailS3Bucket, !Ref ExistingS3BucketName]

  SubscriptionFilter:
    Type: AWS::Logs::SubscriptionFilter
    Properties:
      LogGroupName: !Ref CloudWatchLogsGroup
      DestinationArn: !GetAtt CQSyncingKinesisStream.Arn
      RoleArn: !GetAtt CloudWatchLogsToKinesisRole.Arn
      FilterPattern: ""



Outputs:
  KinesisStreamArn:
    Description: The ARN of the Kinesis Data Stream that CloudQuery will use to listen for changes.
    Value: !GetAtt CQSyncingKinesisStream.Arn


Multi-account configuration #

AWS Organizations #

The plugin supports discovery of AWS accounts via AWS Organizations. This means that as accounts are added to or removed from your organization, the plugin handles them without any configuration changes.
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"
  tables: ['aws_s3_buckets']
  destinations: ["postgresql"]
  spec:
    aws_debug: false
    org:
      admin_account:
        local_profile: "<NAMED_PROFILE>"
      member_role_name: cloudquery-ro
    regions:
      - '*'
Prerequisites for using AWS Org functionality:
  1. Have a role (or user) in an Admin account with the following access:
    • organizations:ListAccounts
    • organizations:ListAccountsForParent
    • organizations:ListChildren
  2. Have a role in each member account that has a trust policy with a single principal.
    The default role name is OrganizationAccountAccessRole. This role is created by default in AWS accounts that are created as part of an AWS Organization. We do not recommend using OrganizationAccountAccessRole, due to the level of permissions typically granted to it; instead, create your own IAM role in each member account with the appropriate read-only permissions, and make sure the IAM roles and policies used for the AWS plugin adhere to your company's security standards.
    Reference IAM assets and CloudFormation templates for deploying CloudQuery in an AWS Organization can be found here.
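For illustration, a minimal trust policy for such a member-account role might look like the following sketch (the admin account ID is a placeholder; tighten the principal and add conditions such as an ExternalID per your security standards):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAdminAccountAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ADMIN_ACCOUNT_ID>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```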

Configuring AWS Organization: #

  1. It is always necessary to specify a member role name:
    org:
      member_role_name: cloudquery-ro
  2. Sourcing credentials that have the necessary organizations permissions can be done in any of the following ways:
    1. Source credentials from the default credential tool chain:
      org:
        member_role_name: cloudquery-ro
    2. Source credentials from a named profile in the shared configuration or credentials file
      org:
        member_role_name: cloudquery-ro
        admin_account:
          local_profile: <Named-Profile>
    3. Assume a role in admin account using credentials in the shared configuration or credentials file:
      org:
        member_role_name: cloudquery-ro
        admin_account:
          local_profile: <Named-Profile>
          role_arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>

          # Optional. Specify the name of the session
          # role_session_name: ""

          # Optional. Specify the ExternalID if required for trust policy
          # external_id: ""
  3. Optional. If the trust policy configured for the member accounts requires different credentials than you configured in the previous step, then you can specify the credentials to use in the member_trusted_principal block:
    org:
      member_role_name: cloudquery-ro
      member_trusted_principal:
        local_profile: <Named-Profile-Member>
  4. Optional. If you want to fetch only from specific Organizational Units, add them to the organization_units list.
    org:
      member_role_name: cloudquery-ro
      organization_units:
        - ou-<ID-1>
        - ou-<ID-2>
    Child OUs will also be included. To skip a child OU or account, use the skip_organization_units or skip_member_accounts options respectively:
    org:
      member_role_name: cloudquery-ro
      organization_units:
        - ou-<ID-1>
        - ou-<ID-2>
      skip_organization_units:
        - ou-<ID-3>
      skip_member_accounts:
        - <ACCOUNT_ID>

Specific Accounts #

The AWS plugin can fetch from multiple accounts in parallel using AssumeRole (the credentials you provide must be able to assume a role in every specified account).
Below is an example configuration:
accounts:
  - account_name: <AccountName_1>
    role_arn: <YOUR_ROLE_ARN_1>
    # Optional. Local Profile is the named profile in your shared configuration file (usually `~/.aws/config`) that you want to use for this specific account
    local_profile: <NAMED_PROFILE>
    # Optional. Specify the Role Session name
    role_session_name: ""
  - account_name: <AccountName_2>
    local_profile: provider
    # Optional. Role ARN we want to assume when accessing this account
    role_arn: <YOUR_ROLE_ARN_2>
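Putting it together, a minimal full source spec using the accounts block might look like the following sketch (account names, the role ARN, and the profile name are placeholders):

```yaml
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"
  tables: ["aws_s3_buckets"]
  destinations: ["postgresql"]
  spec:
    accounts:
      - account_name: production
        role_arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
      - account_name: staging
        local_profile: <NAMED_PROFILE>
    regions:
      - us-east-1
```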


Query examples #

Find all public-facing load balancers #

SELECT * FROM aws_elbv2_load_balancers WHERE scheme = 'internet-facing';

Find all unencrypted RDS instances #

SELECT * FROM aws_rds_clusters WHERE storage_encrypted IS FALSE;

Find all S3 buckets that are permitted to be public #

SELECT arn, region
FROM aws_s3_buckets
WHERE block_public_acls IS NOT TRUE
   OR block_public_policy IS NOT TRUE
   OR ignore_public_acls IS NOT TRUE
   OR restrict_public_buckets IS NOT TRUE;


Table options #

This feature enables users to override the default options for specific tables. The root of the object takes a table name, and the next level takes an API method name. The final level is the actual input object as defined by the API.
The format of the table_options object is as follows:
table_options:
  <table_name>:
    <api_method_name>:
      - <input_object>
A list of <input_object> objects should be provided. The plugin iterates through the list, making one API call per object. This is useful for APIs such as CloudTrail's LookupEvents, which accepts only a single event type per call. For example:
table_options:
    aws_cloudtrail_events:
      lookup_events:
        - start_time: 2023-05-01T20:20:52Z
          end_time:   2023-05-03T20:20:52Z
          lookup_attributes:
            - attribute_key:   EventName
              attribute_value: RunInstances
        - start_time: 2023-05-01T20:20:52Z
          end_time:   2023-05-03T20:20:52Z
          lookup_attributes:
            - attribute_key:   EventName
              attribute_value: StartInstances
        - start_time: 2023-05-01T20:20:52Z
          end_time:   2023-05-03T20:20:52Z
          lookup_attributes:
            - attribute_key:   EventName
              attribute_value: StopInstances
The field names are the same as in the AWS API, but in snake case: for example, EndTime is represented as end_time.
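As a rough illustration of this convention (not part of the plugin; the authoritative field names are listed in the per-table documentation), the PascalCase-to-snake_case mapping can be sketched as:

```python
import re

def to_snake_case(name: str) -> str:
    """Convert an AWS API field name (e.g. "EndTime") to the
    snake_case form used in table_options (e.g. "end_time")."""
    # Insert an underscore at each word boundary, handling acronym
    # runs such as the "DB" in "DescribeDBClusters", then lowercase.
    return re.sub(
        r"(?<!^)(?=[A-Z][a-z])|(?<=[a-z0-9])(?=[A-Z])", "_", name
    ).lower()

print(to_snake_case("EndTime"))             # end_time
print(to_snake_case("LookupAttributes"))    # lookup_attributes
print(to_snake_case("DescribeDBClusters"))  # describe_db_clusters
```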
The following tables and APIs are supported:
table_options:
  aws_accessanalyzer_analyzer_findings:
    list_findings:
      - <AccessAnalyzer.ListFindings> # NextToken & AnalyzerArn are prohibited

  aws_accessanalyzer_analyzer_findings_v2:
    list_findings_v2:
      - <AccessAnalyzer.ListFindingsV2> # NextToken & AnalyzerArn are prohibited

  aws_cloudtrail_events:
    lookup_events:
      - <CloudTrail.LookupEvents> # NextToken is prohibited

  aws_cloudtrail_trails:
    describe_trails:
      - <CloudTrail.DescribeTrails>

  aws_cloudwatch_metrics:
    - list_metrics: <CloudWatch.ListMetrics> # NextToken is prohibited
      get_metric_statistics:
        - <CloudWatch.GetMetricStatistics>  # Namespace, MetricName and Dimensions are prohibited

  aws_costexplorer_cost_custom:
    get_cost_and_usage:
      - <CostExplorer.GetCostAndUsage> # NextPageToken is prohibited

  aws_ec2_images:
    describe_images:
      - <EC2.DescribeImages> # NextToken and ImageIds are prohibited. MaxResults should be in range [1-1000].

  aws_ec2_instances:
    describe_instances:
      - <EC2.DescribeInstances> # NextToken is prohibited. MaxResults should be in range [1-1000].

  aws_ec2_internet_gateways:
    describe_internet_gateways:
      - <EC2.DescribeInternetGateways> # NextToken is prohibited. MaxResults should be in range [5-1000].

  aws_ec2_network_interfaces:
    describe_network_interfaces:
      - <EC2.DescribeNetworkInterfaces> # NextToken is prohibited. MaxResults should be in range [5-1000].

  aws_ec2_route_tables:
    describe_route_tables:
      - <EC2.DescribeRouteTables> # NextToken is prohibited. MaxResults should be in range [5-100].

  aws_ec2_security_groups:
    describe_security_groups:
      - <EC2.DescribeSecurityGroups> # NextToken is prohibited. MaxResults should be in range [5-1000].

  aws_ec2_subnets:
    describe_subnets:
      - <EC2.DescribeSubnets> # NextToken is prohibited. MaxResults should be in range [5-1000].

  aws_ec2_vpcs:
    describe_vpcs:
      - <EC2.DescribeVpcs> # NextToken is prohibited. MaxResults should be in range [5-1000].

  aws_ecs_cluster_tasks:
    list_tasks:
      - <ECS.ListTasks> # Cluster and NextToken are prohibited. MaxResults should be in range [1-100].

  aws_guardduty_detectors:
    - list_detectors: <GuardDuty.ListDetectors> # NextToken is prohibited
      list_findings: <GuardDuty.ListFindings> # NextToken and DetectorID are prohibited

  aws_iam_groups:
    get_group:
      - <IAM.GetGroup> # Marker is prohibited. MaxItems should be in range [1-1000].

  aws_iam_policies:
    list_policies:
      - <IAM.ListPolicies> # Marker is prohibited. MaxItems should be in range [1-1000].

  aws_iam_roles:
    get_role:
      - <IAM.GetRole> # RoleName is required.

  aws_iam_users:
    get_user:
      - <IAM.GetUser> # UserName is required.

  aws_inspector_findings:
    list_findings:
      - <Inspector.ListFindings> # NextToken is prohibited. MaxResults should be in range [1-500].

  aws_inspector2_covered_resources:
    list_coverage:
      - <InspectorV2.ListCoverage> # NextToken is prohibited. MaxResults should be in range [1-200].

  aws_inspector2_findings:
    list_findings:
      - <InspectorV2.ListFindings> # NextToken is prohibited.

  aws_rds_clusters:
    describe_db_clusters:
      - <RDS.DescribeDBClusters> # Marker is prohibited. MaxRecords should be in range [20-100].

  aws_rds_engine_versions:
    describe_db_engine_versions:
      - <RDS.DescribeDBEngineVersions> # Marker is prohibited. MaxRecords should be in range [20-100].

  aws_rds_global_clusters:
    describe_global_clusters:
      - <RDS.DescribeGlobalClusters> # Marker is prohibited. MaxRecords should be in range [20-100].

  aws_rds_instances:
      # Marker is prohibited. MaxRecords should be in range [20-100].
    - describe_db_instances: <RDS.DescribeDBInstances>
      # NextToken, ServiceType and Identifier are prohibited.
      # StartTime, EndTime and MetricQueries are required.
      # MaxResults should be in range [1-25]. PeriodInSeconds should be in range [1-86400].
      get_resource_metrics: <PI.GetResourceMetrics>

  aws_route53_hosted_zones:
    list_hosted_zones: 
      # NextToken, DelegationSetId and HostedZoneType are prohibited. MaxResults should be in range [1-100].
      - <Route53.GetHostedZone>

  aws_securityhub_findings:
    get_findings:
      - <SecurityHub.GetFindings> # NextToken is prohibited. MaxResults should be in range [1-100].
  
  aws_servicequotas_services:
    - list_services: <ServiceQuota.ListServices> # NextToken is prohibited. MaxResults should be in range [1-100].
      list_service_quotas:
        - <ServiceQuota.ListServiceQuotas>

  aws_ssm_sessions:
    describe_sessions:
      - <SSM.DescribeSessions> # NextToken is prohibited. MaxResults should be in range [1-200].

  aws_ssm_inventory_entries:
    list_inventory_entries:
      # NextToken is prohibited. MaxResults should be in range [1-50].
      # InstanceId and TypeName are required.
      - <SSM.ListInventoryEntries>
The full list of supported options are documented under the Table Options section of each table in the AWS plugin tables documentation.


Versioning #

Changes to schema, configuration, and required user permissions all factor into the versioning of the AWS plugin. Any release that requires manual changes to an existing deployment in order to retain the same functionality is indicated by an increase to the major version. Support for additional resources results in a minor version bump. This is important to be aware of because, if you are using tables: ["*"] to specify the set of tables to sync, a minor version can add new resources that require additional IAM permissions, which may cause errors to be raised.
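Given this, it can be safer to pin an exact plugin version and list tables explicitly rather than syncing everything with tables: ["*"]. A sketch (the table names here are illustrative):

```yaml
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v27.23.1"  # pin an exact version; review the changelog before bumping
  tables: ["aws_ec2_instances", "aws_s3_buckets"]  # explicit list avoids surprise IAM errors
  destinations: ["postgresql"]
```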

Breaking changes #

The following are some of the most common reasons for a major version change:
  1. Changing a primary key for a table
  2. Changing the name of a table
  3. Changing the permissions required to sync a resource
All releases include a changelog listing every change and highlighting the breaking ones. If you are ever unsure about an included change, feel free to reach out to the CloudQuery team on Discord to find out more.

Preview features #

Sometimes features or tables are released and marked as alpha. This indicates that future minor versions might change, break, or remove that functionality. It enables the CloudQuery team to release functionality before it is fully stable so that the community can give feedback. Once a feature is released as Generally Available, all of the above rules for semantic versioning apply.
Current Preview features
The following features are currently in Preview:
  • All tables that are prefixed with aws_alpha_
  • table_options feature