PagerDuty (Official, Premium)
The CloudQuery PagerDuty plugin extracts PagerDuty data and loads it into any supported CloudQuery destination.
Publisher: cloudquery
Latest version: v5.13.1
Type: Source
Price per 1M rows: Starting from $15
Monthly free quota: 1M rows
Set up process #
1. Download CLI and login
   brew install cloudquery/tap/cloudquery
2. Create source and destination configs
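As a sketch of the end-to-end flow, once the CLI is installed you authenticate it and run a sync against the configs created in step 2 (the spec file names below are illustrative):
   cloudquery login
   cloudquery sync pagerduty.yml postgresql.yml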
Overview #
PagerDuty Source Plugin
The CloudQuery PagerDuty plugin extracts PagerDuty data and loads it into any supported CloudQuery destination (e.g. PostgreSQL, BigQuery, Snowflake, and more). It is based on the PagerDuty Go SDK and the PagerDuty REST API.
Authentication #
In order to authenticate with your PagerDuty account, you will need a PagerDuty authorization token.
CloudQuery supports three methods of reading the authorization token (you must use one of these):
- From the auth_token spec field
- From a ~/.pd.yml file, such as:
  authtoken: <YOUR_AUTH_TOKEN>
- From the PAGERDUTY_AUTH_TOKEN environment variable
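For example, a minimal sketch of the environment-variable method, assuming a POSIX shell (the token value is a placeholder and will be picked up on the next sync):
   export PAGERDUTY_AUTH_TOKEN=<YOUR_AUTH_TOKEN>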
Rate Limiting #
The PagerDuty REST API is rate limited to 900 requests per minute per organization (15 requests per second).
Because of this, PagerDuty plugin syncs may take a relatively long time. This may be fine if you sync on something like a nightly cadence, but there are a few things you can do to speed syncs up:
- Relational tables (e.g. pagerduty_incident_notes, pagerduty_incident_alerts) require the most API calls, because they require an API call per resource (e.g. per incident). You may be able to speed up the sync by skipping these relational tables (see the spec sketch after this list):
  skip_tables: ["pagerduty_incident_alerts", "pagerduty_incident_notes", "pagerduty_incident_log_entries", "pagerduty_ruleset_rules", "pagerduty_user_contact_methods", "pagerduty_user_notification_rules"]
- If you are only interested in some of the teams in your organization, you can filter to only sync their resources with team_ids: ["<team_id>"].
See the PagerDuty Configuration Page for more info.
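As a rough sketch of where these options live, assuming skip_tables is set at the top level of the source spec and team_ids in the nested PagerDuty spec (the team ID is a placeholder):
kind: source
spec:
  name: "pagerduty"
  path: "cloudquery/pagerduty"
  registry: "cloudquery"
  version: "v5.13.1"
  tables: ["*"]
  # Top-level source option: skip the most API-intensive relational tables
  skip_tables: ["pagerduty_incident_alerts", "pagerduty_incident_notes"]
  destinations: ["postgresql"]
  spec:
    # Nested PagerDuty option: only sync resources for these teams
    team_ids: ["<team_id>"]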
Incremental Syncing #
The PagerDuty plugin supports incremental syncing for incident data. This is how it works:
- Unresolved incidents will be synced in full on each sync run.
- Once an incident has been marked as resolved it will not be synced again.
- Changes made to incidents after they are resolved (e.g. new notes, log entries) will not be synced. If you want to sync this information, omit the backend_options field from your spec.
To enable incremental syncing, backend_options must be set in the spec (as shown below). This is documented in the Managing Incremental Tables section.
Example Queries #
The following example queries are SQL queries in the PostgreSQL flavour (i.e. can be used with the
PostgreSQL destination).
Top 10 services that generate the most incidents (last 3 months) #
SELECT pagerduty_services.id AS service_id,
pagerduty_services.name AS service_name,
COUNT(pagerduty_incidents.id) AS incident_count
FROM pagerduty_services
INNER JOIN pagerduty_incidents
ON pagerduty_services.id = pagerduty_incidents.service->>'id'
WHERE pagerduty_incidents.created_at > NOW() - INTERVAL '3 months'
GROUP BY service_id, service_name
ORDER BY incident_count DESC
LIMIT 10
 service_id |  service_name   | incident_count
------------+-----------------+----------------
 PYS6MP5    | UnstableService |             25
 PAZ9U1C    | StableService   |              3
Average time to acknowledge an incident, grouped by priority (last 3 months) #
WITH incident_ack_logs AS (
SELECT pagerduty_incidents.id AS incident_id,
pagerduty_incidents.priority->>'name' AS priority,
pagerduty_incident_log_entries.created_at - pagerduty_incidents.created_at AS time_to_log
FROM pagerduty_incidents
INNER JOIN pagerduty_incident_log_entries
ON pagerduty_incidents.id = pagerduty_incident_log_entries.incident->>'id'
WHERE pagerduty_incident_log_entries.type = 'acknowledge_log_entry'
AND pagerduty_incidents.created_at > NOW() - INTERVAL '3 months'
),
incident_ack_time AS ( -- Make sure only the first acknowledgement is used (incidents may be acknowledged twice)
SELECT incident_id,
priority,
MIN(time_to_log) AS time_to_ack
FROM incident_ack_logs
GROUP BY incident_id, priority
)
SELECT priority, AVG(time_to_ack) AS average_time_to_ack
FROM incident_ack_time
GROUP BY priority
 priority | average_time_to_ack
----------+---------------------
 P1       | 00:05:10
 P2       | 00:30:00
 P3       | 06:20:00
Which users acknowledged the most incidents (last 3 months) #
WITH incident_ack_logs AS (
SELECT pagerduty_incidents.id AS incident_id,
pagerduty_incident_log_entries.agent->>'id' AS agent_id
FROM pagerduty_incidents
INNER JOIN pagerduty_incident_log_entries
ON pagerduty_incidents.id = pagerduty_incident_log_entries.incident->>'id'
WHERE pagerduty_incident_log_entries.type = 'acknowledge_log_entry'
AND pagerduty_incidents.created_at > NOW() - INTERVAL '3 months'
)
SELECT pagerduty_users.id AS user_id,
       pagerduty_users.name AS user_name,
       COUNT(DISTINCT incident_id) AS acknowledge_count
FROM pagerduty_users
INNER JOIN incident_ack_logs
  ON pagerduty_users.id = incident_ack_logs.agent_id
GROUP BY user_id, user_name
ORDER BY acknowledge_count DESC
 user_id | user_name | acknowledge_count
---------+-----------+-------------------
 PDYR2Y8 | John      |                15
 PDYR2Y9 | Dave      |                12
 PDYR2Z5 | Jane      |                 8
Configuration #
CloudQuery PagerDuty Source Plugin Configuration Reference
Example #
To get started with the PagerDuty plugin, create a YAML file in your CloudQuery configuration directory (e.g. named pagerduty.yml). This example connects a single PagerDuty account to a PostgreSQL destination. The (top level) source spec section is described in the Source Spec Reference.
kind: source
spec:
  # Source spec section
  name: "pagerduty"
  path: "cloudquery/pagerduty"
  registry: "cloudquery"
  version: "v5.13.1"
  tables: ["*"]
  destinations: ["postgresql"]
  backend_options:
    table_name: "cq_state_pagerduty"
    connection: "@@plugins.postgresql.connection"
  # Learn more about the configuration options at https://cql.ink/pagerduty_source
  spec:
    # optional
    auth_token: "<auth_token>"
    # optional
    team_ids: ["<team_id>"]
PagerDuty Spec #
This is the (nested) spec used by the PagerDuty source plugin.
- auth_token (string) (optional) An authentication token for the PagerDuty API. You can omit this field if you use one of the other authentication methods available in this plugin.
- auth_method (string) (optional, default: api_token) This plugin supports different authentication methods when communicating with the PagerDuty API. Supported values are api_token and oauth_token.
- team_ids ([]string) (default: empty, which syncs data from all available teams) If specified, limits the sync to only resources related to the specified teams.
- max_requests_per_second (integer) (default: 10) The PagerDuty API is heavily rate-limited (900 requests/min = 15 requests/sec, across the entire organization). This option allows you to control the rate at which the plugin makes requests to the API. You can reduce this parameter if you are still seeing rate limit errors (status code 429), or increase it if your PagerDuty API quota is higher. See https://developer.pagerduty.com/docs/ZG9jOjExMDI5NTUz-rate-limiting#what-are-our-limits for more info. Set to a negative number (e.g. -1) to disable rate limiting, but be aware that this may cause rate limit errors.
- retry_max (integer) (optional) (default: 10) Maximum number of retries to make for each request when a transient error occurs (e.g. a rate limit error). Set to a value equal to or less than 0 to disable retries.
- retry_wait_min_seconds (integer) (optional) (default: 1) Minimum number of seconds to wait before retrying a request when a transient error occurs (e.g. a rate limit error).
- retry_wait_max_seconds (integer) (optional) (default: 30) Maximum number of seconds to wait before retrying a request when a transient error occurs (e.g. a rate limit error).
- concurrency (integer) (optional) (default: 1000) A best-effort maximum number of Go routines to use. Lower this number to reduce memory usage.
- scheduler (string) (optional) (default: dfs) The scheduler to use when determining the priority of resources to sync. Supported values are dfs (depth-first search), round-robin, shuffle and shuffle-queue. For more information about this, see performance tuning.
- table_options (Table Options spec) (optional) Options to apply to specific tables. See table options for more information.
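As a rough sketch, a nested spec that combines several of these tuning options might look like the following (the values are illustrative, not recommendations):
spec:
  auth_token: "<auth_token>"
  auth_method: "api_token"
  team_ids: ["<team_id>"]
  # Slow down requests for an organization that shares its rate limit across tools
  max_requests_per_second: 5
  retry_max: 10
  retry_wait_min_seconds: 1
  retry_wait_max_seconds: 30
  # Reduce memory usage at the cost of sync speed
  concurrency: 500
  scheduler: "shuffle"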
PagerDuty Table Options Spec #
pagerduty_incidents
- since and until default to empty strings, which results in syncing all incidents by default.
- since (string in RFC3339 format) (optional) Example: 2024-01-01T12:00:00+00:00. Defaults to an empty string, which is interpreted as the beginning of time.
- until (string in RFC3339 format) (optional) Example: 2024-02-01T12:00:00+00:00. Defaults to an empty string, which is interpreted as the current time.
pagerduty_schedules
- To retrieve final_schedule or override_subschedule schedule data, either since or until must be provided.
- Specifying since or until will incur an additional API call to get each schedule. This can slow down the sync process.
- since (string in RFC3339 format) (optional) Example: 2024-01-01T12:00:00+00:00. Defaults to 2 weeks before until, if an until is given.
- until (string in RFC3339 format) (optional) Example: 2024-02-01T12:00:00+00:00. Defaults to 2 weeks after since, if a since is given.
pagerduty_oncalls
- since and until default to empty strings, which are interpreted as the current time by the PagerDuty API. As a result, this table syncs ongoing on-calls by default. Use since and until to expand the range of on-calls to sync.
- since (string in RFC3339 format) (optional) Example: 2024-01-01T12:00:00+00:00. Defaults to an empty string, which is interpreted as the current time.
- until (string in RFC3339 format) (optional) Example: 2024-02-01T12:00:00+00:00. Defaults to an empty string, which is interpreted as the current time.
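As a sketch, a table_options block in the nested spec that restricts incident syncing to a fixed window and widens the on-call range might look like this; the dates are placeholders and the exact nesting should be confirmed against the table options documentation referenced above:
spec:
  table_options:
    pagerduty_incidents:
      # Only sync incidents created within this window
      since: "2024-01-01T12:00:00+00:00"
      until: "2024-02-01T12:00:00+00:00"
    pagerduty_oncalls:
      # Sync past and future on-calls instead of only ongoing ones
      since: "2024-01-01T12:00:00+00:00"
      until: "2024-02-01T12:00:00+00:00"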