Datadog
Datadog is a cloud-scale application observability solution that monitors servers, databases, tools, and services. Nobl9 connects with Datadog to collect SLI measurements and compare them to SLO targets. Because Nobl9 calculates error budgets against the thresholds you define as acceptable, it can trigger workflows and notifications when the error budget burn rate is too high or the budget is exhausted.
The Nobl9 integration with Datadog lets users attach business context to monitoring data, develop and measure reliability targets, and align their work with the error budget's priorities.
Datadog parameters and supported features in Nobl9
- General support:
  - Release channel: Stable, Beta
  - Connection method: Agent, Direct
  - Replay and SLI Analyzer: Supported
  - Event logs: Supported
  - Query checker: Supported
  - Query parameters retrieval: Supported
  - Timestamp cache persistence: Supported
- Query parameters:
  - Query interval: 2 min
  - Query delay: 1 min
  - Jitter: 15 sec
  - Timeout: 30 sec
- Agent details and minimum required versions for supported features:
  - Environment variable: DATADOG_QUERY_DELAY
  - Plugin name: n9datadog
  - Replay and SLI Analyzer: 0.65.0
  - Maximum historical data retrieval period: 30 days
  - Query parameters retrieval: 0.73.2
  - Timestamp cache persistence: 0.65.0
Authentication
To deploy the Nobl9 agent, provide an API Key and Application Key with the DD_API_KEY and DD_APPLICATION_KEY environment variables.
Alternatively, you can pass credentials using a local configuration file with the api_key and application_key keys under the n9datadog section.
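For reference, a minimal sketch of such a configuration file is shown below. The section and key names come from this page; the file location and any surrounding settings depend on your agent deployment.

# Keys read by the Nobl9 agent's Datadog plugin (n9datadog).
n9datadog:
  api_key: "<DATADOG_API_KEY>"
  application_key: "<DATADOG_APPLICATION_KEY>"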
Learn how to obtain API and Application Keys.
To connect to Datadog, the Nobl9 agent scrapes the /api/v1/query endpoint, which requires the timeseries_query authorization scope.
Make sure your application has access to this scope before you connect to Datadog.
Learn more about Query timeseries data across multiple products.
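If you want to confirm your keys and the timeseries_query scope before connecting, you can call the endpoint directly. The sketch below is illustrative: it assumes the datadoghq.com site, GNU date, and a metric that exists in your account.

# Query the last five minutes of a sample metric with your Datadog credentials.
curl -G "https://api.datadoghq.com/api/v1/query" \
  --data-urlencode "from=$(date -d '5 minutes ago' +%s)" \
  --data-urlencode "to=$(date +%s)" \
  --data-urlencode "query=avg:system.load.1{*}" \
  -H "DD-API-KEY: <DATADOG_API_KEY>" \
  -H "DD-APPLICATION-KEY: <DATADOG_APPLICATION_KEY>"

A JSON response containing a series array indicates the keys and scope are set up correctly; a 403 typically points to a missing scope or an invalid key.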
Adding Datadog as a data source
To ensure data transmission between Nobl9 and Datadog, it may be necessary to list Nobl9 IP addresses as trusted.
- app.nobl9.com instance:
  - 18.159.114.21
  - 18.158.132.186
  - 3.64.154.26
- us1.nobl9.com instance:
  - 34.121.54.120
  - 34.123.193.191
  - 34.134.71.10
  - 35.192.105.150
  - 35.225.248.37
  - 35.226.78.175
  - 104.198.44.161
You can add Datadog as the data source using the direct or agent connection methods.
Direct connection method
Direct connection to Datadog requires entering your Datadog credentials.
Nobl9 Web
- Navigate to Integrations > Sources.
- Click .
- Click the required Source button.
- Choose Direct.
- Select one of the following Release Channels:
  - The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
  - The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
Enter the Datadog Site for connection.
It is a Datadog SaaS instance that corresponds to one of Datadog's available locations:datadoghq.com
(formerly referred to ascom
),us3.datadoghq.com
us5.datadoghq.com
datadoghq.eu
(formerly referred to aseu
),ddog-gov.com
,ap1.datadoghq.com
-
Enter your Datadog API Key and Application Key.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  - The default Query delay value for the Datadog integration is 1 minute.
  - Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Enter a Maximum Period for Historical Data Retrieval.
  - This value defines how far back in the past your data will be retrieved when replaying your SLO based on this data source.
  - The maximum period value depends on the data source. Find the maximum value for your data source.
  - A greater period can extend the loading time when creating an SLO.
  - The value must be a positive integer.
- Enter a Default Period for Historical Data Retrieval.
  - It is used by SLOs connected to this data source.
  - The value must be a positive integer or 0.
  - By default, this value is set to 0. When you set it to >0, you will create SLOs with Replay.
- Click Add Data Source.
sloctl
Create a configuration YAML file using the provided sample.
Then, run sloctl apply, specifying the path to your configuration file.
apiVersion: n9/v1alpha
kind: Direct
metadata:
  name: datadog-data-source
  # All displayName fields are optional
  displayName: Datadog data source
  # The name identifier of the project you want to locate your data source in
  project: my-project
spec:
  # All descriptions are optional
  description: My Datadog data source connected using the direct method
  releaseChannel: stable
  datadog:
    site: datadoghq.com
    apiKey: "[secret]"
    applicationKey: "[secret]"
  # Boolean, defaults to 'false'. Set to 'true' if you'd like your source to collect logs.
  # Available for data sources connected using the direct method only.
  # Reach out to support@nobl9.com to activate it.
  logCollectionEnabled: false
  historicalDataRetrieval:
    maxDuration:
      value: 30
      unit: Day
    defaultDuration:
      value: 15
      unit: Day
  queryDelay:
    value: 2
    unit: Minute
Field | Type | Description |
---|---|---|
queryDelay.unit (mandatory) | enum | Specifies the unit for the query delay. Possible values: Second, Minute. • Check the query delay documentation for the default query delay for each source. |
queryDelay.value (mandatory) | numeric | Specifies the value for the query delay. • Must be a number less than 1440 minutes (24 hours). • Check the query delay documentation for the default query delay for each source. |
logCollectionEnabled (optional) | boolean | Defaults to false. Set to true if you'd like your direct to collect event logs. Contact us to activate it. |
releaseChannel (mandatory) | enum | Specifies the release channel. Accepted values: beta, stable. |
Source-specific fields | | |
datadog.site (mandatory) | string | Datadog SaaS instance that corresponds to one of Datadog's available locations: • datadoghq.com (formerly referred to as com) • us3.datadoghq.com • us5.datadoghq.com • datadoghq.eu (formerly referred to as eu) • ddog-gov.com • ap1.datadoghq.com |
datadog.apiKey (mandatory) | string, secret | See the authentication section above for more details. |
datadog.applicationKey (mandatory) | string, secret | See the authentication section above for more details. |
Replay-related fields | | |
historicalDataRetrieval (optional) | n/a | Optional structure related to Replay configuration. • Use only with supported sources. • If omitted, Nobl9 uses the default values of value: 0 and unit: Day for maxDuration and defaultDuration. |
maxDuration.value (optional) | numeric | Specifies the maximum duration for historical data retrieval. Must be an integer ≥ 0. See the Replay documentation for the maximum duration per data source. |
maxDuration.unit (optional) | enum | Specifies the unit for the maximum duration of historical data retrieval. Accepted values: Minute, Hour, Day. |
defaultDuration.value (optional) | numeric | Specifies the default duration for historical data retrieval. Must be an integer ≥ 0 and ≤ maxDuration. |
defaultDuration.unit (optional) | enum | Specifies the unit for the default duration of historical data retrieval. Accepted values: Minute, Hour, Day. |
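With the YAML file saved locally, applying it is a single sloctl command; the filename below is illustrative:

# Apply the Direct configuration to your Nobl9 organization.
sloctl apply -f datadog-direct.yaml

The same command works for the agent configuration shown in the next section.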
Agent connection method
For the agent connection method, you don't enter credentials in Nobl9. Instead, you must deploy the Nobl9 agent to activate data collection; the agent reads your Datadog credentials from its environment (see Agent deployment below).
Nobl9 Web
- Navigate to Integrations > Sources.
- Click .
- Click the required Source button.
- Choose Agent.
- Select one of the following Release Channels:
  - The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
  - The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Enter the Datadog Site for connection.
  It is a Datadog SaaS instance that corresponds to one of Datadog's available locations: datadoghq.com (formerly referred to as com), us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu (formerly referred to as eu), ddog-gov.com, ap1.datadoghq.com.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  - The default Query delay value for the Datadog integration is 1 minute.
  - Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Enter a Maximum Period for Historical Data Retrieval.
  - This value defines how far back in the past your data will be retrieved when replaying your SLO based on this data source.
  - The maximum period value depends on the data source. Find the maximum value for your data source.
  - A greater period can extend the loading time when creating an SLO.
  - The value must be a positive integer.
- Enter a Default Period for Historical Data Retrieval.
  - It is used by SLOs connected to this data source.
  - The value must be a positive integer or 0.
  - By default, this value is set to 0. When you set it to >0, you will create SLOs with Replay.
- Click Add Data Source.
sloctl
Create a configuration YAML file using the provided sample.
Then, run sloctl apply, specifying the path to your configuration file.
apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: datadog-data-source
  # All displayName fields are optional
  displayName: Datadog data source
  # The name identifier of the project you want to locate your data source in
  project: my-project
spec:
  # All descriptions are optional
  description: My Datadog data source connected using the agent method
  releaseChannel: stable
  datadog:
    site: datadoghq.com
  historicalDataRetrieval:
    maxDuration:
      value: 30
      unit: Day
    defaultDuration:
      value: 15
      unit: Day
  queryDelay:
    value: 2
    unit: Minute
Field | Type | Description |
---|---|---|
queryDelay.unit (mandatory) | enum | Specifies the unit for the query delay. Possible values: Second, Minute. • Check the query delay documentation for the default query delay for each source. |
queryDelay.value (mandatory) | numeric | Specifies the value for the query delay. • Must be a number less than 1440 minutes (24 hours). • Check the query delay documentation for the default query delay for each source. |
releaseChannel (mandatory) | enum | Specifies the release channel. Accepted values: beta, stable. |
Source-specific fields | | |
datadog.site (mandatory) | string | Datadog SaaS instance that corresponds to one of Datadog's available locations: • datadoghq.com (formerly referred to as com) • us3.datadoghq.com • us5.datadoghq.com • datadoghq.eu (formerly referred to as eu) • ddog-gov.com • ap1.datadoghq.com |
Replay-related fields | | |
historicalDataRetrieval (optional) | n/a | Optional structure related to Replay configuration. • Use only with supported sources. • If omitted, Nobl9 uses the default values of value: 0 and unit: Day for maxDuration and defaultDuration. |
maxDuration.value (optional) | numeric | Specifies the maximum duration for historical data retrieval. Must be an integer ≥ 0. See the Replay documentation for the maximum duration per data source. |
maxDuration.unit (optional) | enum | Specifies the unit for the maximum duration of historical data retrieval. Accepted values: Minute, Hour, Day. |
defaultDuration.value (optional) | numeric | Specifies the default duration for historical data retrieval. Must be an integer ≥ 0 and ≤ maxDuration. |
defaultDuration.unit (optional) | enum | Specifies the unit for the default duration of historical data retrieval. Accepted values: Minute, Hour, Day. |
You can deploy only one agent per YAML file using sloctl apply.
Agent deployment
When you add a data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for agent deployment. To find them, go to your data source details > Agent Configuration.
Copy the required configuration and apply it to deploy the agent. Use your actual credentials instead of the placeholders (for example, replace <DATADOG_API_KEY> and <DATADOG_APPLICATION_KEY> with your organization's keys).
- Kubernetes
- Docker
Apply the copied YAML configuration to your Kubernetes cluster:
apiVersion: v1
kind: Secret
metadata:
  name: datadog-data-source
  namespace: my-namespace
type: Opaque
stringData:
  datadog_api_key: "<DATADOG_API_KEY>"
  datadog_application_key: "<DATADOG_APPLICATION_KEY>"
  client_id: "unique_client_id"
  client_secret: "unique_client_secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datadog-data-source
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: "datadog-data-source"
      nobl9-agent-project: "my-project"
      nobl9-agent-organization: "my-organization"
  template:
    metadata:
      labels:
        nobl9-agent-name: "datadog-data-source"
        nobl9-agent-project: "my-project"
        nobl9-agent-organization: "my-organization"
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:0.82.2
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: datadog-data-source
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: datadog-data-source
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  key: datadog_api_key
                  name: datadog-data-source
            - name: DD_APPLICATION_KEY
              valueFrom:
                secretKeyRef:
                  key: datadog_application_key
                  name: datadog-data-source
            # N9_METRICS_PORT specifies the port on which the /metrics and /health endpoints are exposed.
            # 9090 is the default value and can be changed.
            # If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"
Run the Docker command for agent deployment:
# N9_METRICS_PORT specifies the port on which the /metrics and /health endpoints are exposed.
# 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, remove the -e N9_METRICS_PORT line.
docker run -d --restart on-failure \
  --name datadog-data-source \
  -e N9_CLIENT_ID="unique_client_id" \
  -e N9_CLIENT_SECRET="unique_client_secret" \
  -e N9_METRICS_PORT=9090 \
  -e DD_API_KEY="<DATADOG_API_KEY>" \
  -e DD_APPLICATION_KEY="<DATADOG_APPLICATION_KEY>" \
  nobl9/agent:0.82.2
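Because the agent exposes /health and /metrics on the N9_METRICS_PORT (9090 by default), you can probe them once the port is published to the host; adding -p 9090:9090 to the run command is an assumption not shown above.

# Illustrative check only: assumes the container was started with -p 9090:9090.
curl -s http://localhost:9090/health
curl -s http://localhost:9090/metrics | head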
Creating SLOs with Datadog
Nobl9 Web
Follow the instructions below to create your SLOs with Datadog in the UI:
- Navigate to Service Level Objectives.
- Click .
- In step 1 of the SLO wizard, select the Service your SLO will be associated with.
- In step 2, select Datadog as the Data Source for your SLO.
- Modify the Period for Historical Data Retrieval, when necessary.
  - This value defines how far back in the past your data will be retrieved when replaying your SLO based on Datadog.
  - A longer period can extend the data loading time for your SLO.
  - Must be a positive whole number up to the maximum period value you've set when adding the Datadog data source.
- Select the metric type:
  - A Threshold metric, where a single time series is evaluated against a threshold.
  - A Ratio metric, which allows you to enter two time series for comparison.
- For the ratio metric, select the Data Count Method:
  - Non-incremental: counts incoming metric values one by one, so the resulting SLO graph is spike-shaped.
  - Incremental: counts the incoming metric values incrementally, adding every next value to the previous values, which results in a constantly increasing SLO graph.
- Enter a Query, or Good Query and Total Query, for the metric you selected.
  A sample threshold metric for Datadog:
  avg:trace.http.request.duration{service:my-service}.as_count()
  A sample ratio metric for Datadog:
  - Good: avg:trace.http.request.hits.by_http_status{service:my-service,!http.status_class:5xx}.as_count()
  - Total: avg:trace.http.request.hits.by_http_status{service:my-service}.as_count()
  SLI values for good and total: when choosing the query for the ratio SLI (countMetrics), keep in mind that the values resulting from that query for both good and total:
  - Must be positive.
  - While we recommend using integers, fractions are also acceptable.
  - If using fractions, we recommend them to be larger than 1e-4 = 0.0001.
  - Shouldn't be larger than 1e+20.
- Define the Time Window for your SLO:
  - Rolling time windows constantly move forward as time passes. This type can help track the most recent events.
  - Calendar-aligned time windows are suitable for SLOs intended to map to business metrics measured on a calendar-aligned basis.
- Configure the Error budget calculation method and Objectives:
  - The Occurrences method counts good attempts against the count of total attempts.
  - The Time Slices method measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
  - You can define up to 12 objectives for an SLO.
    Similar threshold values for objectives: to use similar threshold values for different objectives in your SLO, we recommend differentiating them by setting varying decimal points for each objective. For example, if you want to use the threshold value 1 for two objectives, set it to 1.0000001 for the first objective and to 1.0000002 for the second one. Learn more about threshold value uniqueness.
- Add the Display name, Name, and other settings for your SLO:
  - Name identifies your SLO in Nobl9. After you save the SLO, its name becomes read-only. Use only lowercase letters, numbers, and dashes.
  - Create Composite SLO: with this option selected, you create a composite SLO 1.0. Composite SLOs 1.0 are deprecated. They're fully operable; however, we encourage you to create new composite SLOs 2.0. You can create composite SLOs 2.0 with sloctl using the provided template. Alternatively, you can create a composite SLO 2.0 with the Nobl9 Terraform provider.
  - Set Notifications on data. With it, Nobl9 notifies you when your SLO doesn't report data, or reports incomplete data, for more than 15 minutes.
  - Add alert policies, labels, and links, if required. Up to 20 items of each type per SLO are allowed.
- Click CREATE SLO.
sloctl
- rawMetric
- countMetric
Here's an example of Datadog using a rawMetric (threshold metric):
- apiVersion: n9/v1alpha
  kind: SLO
  metadata:
    name: my-datadog-threshold-slo
    # All displayName fields are optional
    displayName: My Datadog threshold SLO
    # The name identifier of the project you intend to locate your SLO in
    project: my-project
    # Labels, annotations: optional
    labels:
      area:
        - latency
      team:
        - sales
    annotations:
      area: latency
      team: sales
  spec:
    # All descriptions are optional
    description: My sample Datadog threshold SLO
    indicator:
      metricSource:
        # The name identifier of the data source for your SLO.
        name: datadog-data-source
        # The name identifier of the project holding your data source.
        project: my-project
        # Enum: Agent | Direct, mandatory. The connection method you used to add your data source.
        kind: Agent
    # Enum: Occurrences | Timeslices, mandatory. The budgeting method for your SLO.
    budgetingMethod: Occurrences
    objectives:
      - displayName: My objective
        value: 200.0
        name: my-objective
        target: 0.95
        rawMetric:
          query:
            datadog:
              query: avg:trace.http.request.duration{*}
        # Enum: lte (less than or equal to) | lt (less than) | gte (greater than or equal to) | gt (greater than).
        # The operator to compare incoming data against your threshold.
        op: lte
        # Boolean, mandatory. The primary objective indicator of your SLO.
        primary: true
    service: my-service
    # Mandatory. The time window for your SLO. Can be rolling or calendar-aligned.
    timeWindows:
      - unit: Month
        count: 1
        isRolling: false
        calendar:
          startTime: 2022-12-01 00:00:00
          timeZone: UTC
    # Alert policies, attachments, and anomalyConfig are optional.
    # Make sure you have alert policies and alert methods created before adding them to your SLO.
    alertPolicies:
      - my-alert-policy
    attachments:
      - url: https://{my-url}.com
        displayName: My URL attached to this SLO
    anomalyConfig:
      noData:
        alertMethods:
          - name: my-alert-method
            project: my-project
Here's an example of Datadog using a countMetric (ratio metric):
- apiVersion: n9/v1alpha
  kind: SLO
  metadata:
    name: my-datadog-ratio-slo
    # All displayName fields are optional
    displayName: My Datadog ratio SLO
    # The name identifier of the project you intend to locate your SLO in
    project: my-project
    # Labels, annotations: optional
    labels:
      area:
        - latency
      team:
        - sales
    annotations:
      area: latency
      team: sales
  spec:
    # All descriptions are optional
    description: My sample Datadog ratio SLO
    indicator:
      metricSource:
        # The name identifier of the data source for your SLO.
        name: datadog-data-source
        # The name identifier of the project holding your data source.
        project: my-project
        # Enum: Agent | Direct, mandatory. The connection method you used to add your data source.
        kind: Agent
    # Enum: Occurrences | Timeslices, mandatory. The budgeting method for your SLO.
    budgetingMethod: Occurrences
    objectives:
      - displayName: My objective
        value: 1.0
        name: my-objective
        target: 0.95
        countMetrics:
          # Boolean, mandatory. The data count method for your ratio metric.
          incremental: true
          good:
            datadog:
              query: sum:trace.http.request.hits.by_http_status{http.status_class:2xx}.as_count()
          total:
            datadog:
              query: sum:trace.http.request.hits.by_http_status{*}.as_count()
        # Boolean, mandatory. The primary objective indicator of your SLO.
        primary: true
    service: my-service
    # Mandatory. The time window for your SLO. Can be rolling or calendar-aligned.
    timeWindows:
      - unit: Month
        count: 1
        isRolling: false
        calendar:
          startTime: 2022-12-01 00:00:00
          timeZone: UTC
    # Alert policies, attachments, and anomalyConfig are optional.
    # Make sure you have alert policies and alert methods created before adding them to your SLO.
    alertPolicies:
      - my-alert-policy
    attachments:
      - url: https://{my-url}.com
        displayName: My URL attached to this SLO
    anomalyConfig:
      noData:
        alertMethods:
          - name: my-alert-method
            project: my-project
Important notes:
Learn more about Datadog queries.
Queries to Datadog must return only one time series.
Example:
- Grouping metrics will often result in multiple time series: avg:system.load.1{*} by {host}
- The same query without grouping returns a single time series: avg:system.load.1{*}
Avoid the .rollup() and .moving_rollup() functions in your queries.
The Nobl9 agent relies on the enforced rollup described in Rollup Interval: Enforced vs Custom to control the number of points returned from queries.
Using .rollup() or .moving_rollup() can change the number of returned points or the way they are aggregated. Combined with the time range of each query sent by the Nobl9 agent, this can skew the calculated error budgets.
Learn more about Rollup.
Querying the Datadog server and API rate limits
The Nobl9 agent calls the Query Timeseries API at a two-minute interval.
Requests to Datadog APIs are rate limited. The following limits apply:
- The default rate limit for the Query Timeseries API call is 1600 per hour per organization.
- Nobl9 batches queries to Datadog into single 1024-character requests, including commas. Identical queries are sent only once to prevent redundancy. For this reason, the number of SLIs per interval depends on query length. Even without batching, that is, sending one query per API request, Nobl9 sends at least 52 SLIs per two-minute interval (where an SLI represents a single time series).
If you need higher limits, contact your Datadog representative.
An incorrect query can occasionally delay all queries in its batch. Identifying the problematic query may take time, leading to temporary processing delays.