Replay Beta
Replay (currently in beta) lets users retrieve historical SLI data and recalculate their SLO error budgets. You can use this feature when your SLI source data is missing or corrupt, or when you want to create a new SLO with historical data.
You can also leverage Replay to backfill your SLO reporting: if you have a backlog of SLI data from the last few days or even weeks, Replay will allow you to fetch that data and use it to recalculate your remaining error budget.
With Replay, you can access your historical data minutes after creating an SLO. This allows you to draw conclusions and make adjustments to your metrics much earlier.
Replay pulls in the historical data while your SLO starts collecting new data in real time. The historical and current data are merged, producing an error budget calculated for the entire period.
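As a rough illustration of the recalculation (the target, window, and downtime figures here are made up for the example): for a 99.9% target on a 28-day window, the total error budget is (1 − 0.999) × 28 days ≈ 40.3 minutes. If the replayed historical data shows that roughly 10 minutes of that budget were already burned before the SLO was created in Nobl9, the error budget remaining once Replay completes is about 40.3 − 10 ≈ 30.3 minutes, rather than the full budget a freshly created SLO would otherwise report.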
Scope of support
Currently, the following integrations support Replay (see the requirements table):
- Amazon CloudWatch
- AMS Prometheus
- AppDynamics
- Azure Monitor beta
- Datadog
- Dynatrace
- Graphite
- New Relic
- Prometheus
- ServiceNow Cloud Observability
- Splunk
Other sources will be supported soon.
Requirements
To use Replay for specific data sources, you may need to update the version of the Nobl9 agent you use. See the table below to determine the minimum agent version required to use this feature:
Source | Minimum agent version | Max period for historical data retrieval |
---|---|---|
Amazon CloudWatch | ≥ 0.60.0 | 15 days¹ |
AMS Prometheus | ≥ 0.55.0 | 30 days |
AppDynamics | ≥ 0.68.0 | 30 days |
Azure Monitor beta | ≥ 0.69.0-beta01 | 30 days |
Datadog | ≥ 0.54.2 | 30 days |
Dynatrace | ≥ 0.66.0 | 28 days² |
Graphite | ≥ 0.55.0 | 30 days |
ServiceNow Cloud Observability | ≥ 0.56.0 | 30 days |
New Relic | ≥ 0.56.0 | 30 days |
Prometheus | ≥ 0.54.2 | 30 days |
Splunk | ≥ 0.55.0 | 30 days |
² When you run Replay for the maximum historical data retrieval period for Dynatrace (28 days), remember that due to Dynatrace limitations, there may be one hour of degraded resolution at the beginning of the selected time range.
Create Replay
There are two fields that you must define to activate Replay for data sources that support it:
- Maximum Period for Historical Data Retrieval, which corresponds to the `historicalDataRetrieval.[n].maxDuration` object in YAML. The object defines the maximum period for which data can be retrieved:
  - `value` must be an integer greater than or equal to 0
  - `unit` must be one of `Minute`, `Hour`, or `Day`
  - It must be a duration that is less than or equal to 30 days
  - It must be a duration that is greater than or equal to the default period (see below). Otherwise, a validation error is returned
- Default Period for Historical Data Retrieval, which corresponds to the `historicalDataRetrieval.[n].defaultDuration` object in YAML. This period is used by default for any SLOs connected to this data source. This field has the following requirements:
  - `value` must be an integer greater than or equal to 0
  - `unit` must be one of `Minute`, `Hour`, or `Day`
  - It must be a duration that is less than or equal to the maximum period. Otherwise, a validation error is returned (see the example after this list)
  - By default, this value is set to `0`. If you set it to `>0`, you will create an SLO with Replay
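For illustration, a configuration fragment like the following would be rejected, because the default period exceeds the maximum period (a minimal sketch; the surrounding agent/direct fields are omitted):

```yaml
historicalDataRetrieval:
  maxDuration:
    value: 7
    unit: Day
  defaultDuration:
    value: 14 # 14 days exceeds the 7-day maxDuration, so a validation error is returned
    unit: Day
```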
You can configure these fields in the UI or in YAML when you set up the data source.
To activate Replay for an SLO, you must complete the following two steps:
- Step 1: Configure and create the agent/direct using the data source configuration wizard, or apply the YAML via sloctl.
- Step 2: Configure and create Replay for the SLO in the SLO wizard.
Step 1: Create agent/direct
Replay configuration in YAML
Data sources that support Replay accept an additional object called `historicalDataRetrieval` in their YAML definition (see Create Replay above for an extended description of the field values). Use the `sloctl apply --replay` or `sloctl replay` commands to run Replay via `sloctl`:
```yaml
- apiVersion: n9/v1alpha
  kind: Agent
  metadata:
    name: datadog
    project: datadog
  spec:
    datadog:
      site: com
    sourceOf:
      - Metrics
      - Services
    # Additional fields related to Replay
    historicalDataRetrieval:
      maxDuration:
        value: 30 # integer greater than or equal to 0
        unit: Day # accepted values: Minute, Hour, Day
      defaultDuration: # value must be less than or equal to value of maxDuration
        value: 0 # integer greater than or equal to 0; defaults to 0
        unit: Day # accepted values: Minute, Hour, Day
```
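Direct connections that support Replay accept the same `historicalDataRetrieval` object. The sketch below mirrors the agent example above for a Datadog direct; the metadata name and the 7-day default period are illustrative, and the credential fields required by the direct connection are omitted:

```yaml
- apiVersion: n9/v1alpha
  kind: Direct
  metadata:
    name: datadog-direct # illustrative name
    project: datadog
  spec:
    datadog:
      site: com
      # credential fields for the direct connection go here
    sourceOf:
      - Metrics
      - Services
    historicalDataRetrieval:
      maxDuration:
        value: 30
        unit: Day
      defaultDuration:
        value: 7 # illustrative; must not exceed maxDuration
        unit: Day
```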
If the `historicalDataRetrieval` section is omitted when configuring a data source that supports Replay, the following values are used as defaults:
```yaml
historicalDataRetrieval:
  maxDuration:
    value: 0
    unit: Day
  defaultDuration:
    value: 0
    unit: Day
```
These default values are also applied to Replay-capable data sources that were configured before the Replay feature was activated.
`historicalDataRetrieval` can't be used for data sources that don't support Replay. Adding it will result in a validation error.
Configuring Replay in the UI
You can find the values for Replay in the Advanced Settings section of the data source configuration wizard (direct and agent):

Step 2: Create SLO
Configuring Replay in the SLO wizard
When you start the SLO wizard and pick a data source that supports Replay, an additional field is displayed in step 2 of the SLO wizard:

The Period for Historical Data Retrieval field defines the period that will be used by the SLO:
- The value displayed is the Default Period for Historical Data Retrieval that you specified when setting up the data source.
- You can override this value, but you can't exceed the Maximum Period for Historical Data Retrieval specified for this data source. Be aware that entering a longer period might slow down the loading time of your SLO.
- The value must be a positive integer or 0.
Beta limitation for SLOs using Replay
The Period for Historical Data Retrieval field does not have a corresponding field in the YAML used to define an SLO.
This field won't be displayed in the UI if the selected data source doesn't support Replay.
By default, the Period for Historical Data Retrieval field is set to the value of `historicalDataRetrieval.defaultDuration`, which defaults to `0`. However, it can be set to any duration between `0` and `historicalDataRetrieval.maxDuration`.
User experience
While historical data is being retrieved, you will notice a few things in the UI:
Charts for the SLO for which data is being retrieved will not be visible in the grid view until the processing of the data is complete:



Charts for the SLO will also not be visible in the SLO details view during this time:

You will see the updated charts after historical data retrieval is finished.
Restrictions for Replay
Data downsampling
It is important to understand how a given data source alters data from the past. Metric-gathering systems usually downsample older data to save space, using aggregate functions such as mean or sum, or simply by dropping data points. This can affect the result of a query made against a time range further in the past. Consult the documentation of the specific data source for more details.
Limits per organization
Historical data retrieval can only be performed for two SLOs at a time per organization.
Replay and SLI Analyzer share the same mechanism for fetching historical data.
Effectively, if you run a Replay process, imports for SLI analyses might be delayed until that Replay process is finished (and vice versa).
Job Status widget
You can track the progress of all ongoing historical data import processes, and see how many free slots are available in your organization, using the Job Status widget.
Click the icon in the top right corner of any tab to access the Job Status widget:

Assumptions for the Job Status widget:
- The widget displays the 3 most recent replays and analyses (the limit for concurrent replays or analyses + 1).
- When you run an analysis for a completed import job, the completed job immediately disappears from the jobs list. This ensures that all recently triggered data import jobs are visible on the widget.
- All jobs are sorted by status (the `in progress` status always takes precedence) and by last triggered date, with the most recent date displayed at the top.
- The list may not update as expected if you run a reimport process on an SLO listed on the widget. This is because reimport updates an existing record in the database and does not create a new one. For example, if you see three replays in the widget in the order `Replay1`, `Replay2`, `Replay3` and you run a reimport for `Replay2`, the widget will display them in the following order: `Replay2`, `Replay1`, `Replay3`.
Canceling a running historical retrieval process
You can't stop or cancel a historical data retrieval process that is in progress. You must wait until it finishes, which can take up to a dozen or so minutes, depending on the configured period.
Editing a running historical retrieval process
Editing an SLO with Replay activated while historical data retrieval is in progress will have different consequences depending on the type of edit made (see below).
Generally, the edit action will not immediately affect the background process. The initially requested data retrieval process will be completed for a snapshot of the SLO at the time of its creation, and the results of that retrieval may or may not be shown for the edited SLO.
The result depends on what fields you edit:
- Adding a new objective:
  - Historical data retrieval will be completed for the original objective(s).
  - An error budget taking into account the historical period will not be calculated for the new objective. Error budget calculation for this objective will begin at the time of its creation.
- Removing an existing objective:
  - Historical data retrieval results will be abandoned for that objective.
  - Removing an existing objective doesn't stop the background process of fetching metrics for it. The results are similar to deleting an entire SLO.
- Editing an existing objective's Value or Target:
  - The edited objective will be treated as a new one, with the results described above.
  - The original objective will be treated as though it has been removed, again with the results described above.
- Modifying the Query or Data Source:
  - Historical data will still be retrieved for the original query and data source.
  - Error budgets calculated from the moment of the edit will use data from the new query and data source.
For more details on editing SLOs, see the Editing SLOs guide.
Replay and composite SLOs
Creating an SLO with historical data retrieval is mutually exclusive with configuring an SLO as a composite SLO. When you create a composite SLO, you will see the following message in the UI:

Suppose you’ve created an SLO with Replay activated, and the historical data retrieval process is running. If you edit that SLO to make it a composite SLO, the consequences are as follows:
- Historical data retrieval will continue for the original objectives in that SLO.
- The composite objective won't include data from the historical period in its error budget calculation. The calculation will start from the moment of the creation of the SLO.
Reimporting historical data for existing SLOs
The reimport process for existing SLOs is irreversible.
Reimporting can also have an impact on your existing SLOs.
Reimporting: user experience
Reimporting historical data in the UI
To reimport historical data for an existing SLO:
- Go to the SLO Details tab of the SLO for which you wish to run reimport.
- Click the Reimport Historical Data button in the left-hand corner of the screen:

Remember that if you want to run the reimport process, the Maximum Period for Historical Data Retrieval configured for your data source must be set to `>0`. Otherwise, reimporting will be disabled:

Reimporting historical data via sloctl
You can also reimport historical data using the `sloctl replay` command. Refer to the sloctl user guide for more details.
Duration of the reimport process
The reimport process for a single SLO may take up to an hour depending on:
- The length of the reimported period
- The number of objectives in your SLO
- The number of unique queries used in your SLO
Impact of the reimport process on your SLOs
Reimporting historical data for an existing SLO has several important consequences for SLI data and alerts.
Impact on SLI data
During the Replay process, live data is still gathered, but it is included in the SLO only after the reimport has been completed.
Replay will query the data source again for the entire selected historical period. These results will completely replace SLI data already gathered for the same period.
Data resolution might be lower due to the downsampling of historical data depending on the data source you use. Because of that, the SLI chart might look different after the reimport process has been completed, even if it was run for the same query.
Replay won't fill periods where the replayed stream has no data with the original data. Instead, the gap replaces the originally gathered data points, as in the example below:
- Original input SLI data:
  2023-01-01 01:20:00 = 100
  2023-01-01 01:21:00 = 230
  2023-01-01 01:22:00 = 270
  2023-01-01 01:24:00 = 220
  2023-01-01 01:25:00 = 130
  2023-01-01 01:26:00 = 280
  2023-01-01 01:27:00 = 200
- Reimported SLI data:
  2023-01-01 01:20:00 = 100
  2023-01-01 01:21:00 = 230
  [...] # Gap in the data stream
  2023-01-01 01:28:00 = 90
  2023-01-01 01:29:00 = 220
  2023-01-01 01:30:00 = 270
  2023-01-01 01:31:00 = 190
- SLI data after the reimport process is completed:
  2023-01-01 01:20:00 = 100
  2023-01-01 01:21:00 = 230
  [...] # Gap in the data stream
  2023-01-01 01:28:00 = 90
  2023-01-01 01:29:00 = 220
  2023-01-01 01:30:00 = 270
  2023-01-01 01:31:00 = 190
- This can happen when the retention period of the data source is shorter than the period selected for Replay.
- To avoid this, always set the Maximum Period for Historical Data Retrieval to a value equal to or lower than the data source's retention period.
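For example, the requirements table above caps Amazon CloudWatch at 15 days, so a configuration aligned with that limit might look like the following sketch (values are illustrative):

```yaml
historicalDataRetrieval:
  maxDuration:
    value: 15 # stays within the source's retention / Replay limit
    unit: Day
  defaultDuration:
    value: 15 # must be less than or equal to maxDuration
    unit: Day
```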
Impact on alerts
- You won't receive any alerts from that SLO during the reimport process.
- After Replay is done, you won't receive alerts for the reimported historical period that was recalculated.
- After reimporting, you might receive alerts that were missed while Replay was running. These alerts will be triggered based on recalculated data.
Replay—API rate limits
Source | Historical data pulled per API request |
---|---|
Amazon CloudWatch | 24 hours |
AMS Prometheus | 24 hours |
Datadog | 4 hours |
Dynatrace | 12 hours |
New Relic | 80 minutes |
ServiceNow Cloud Observability | 24 hours |
Prometheus | 24 hours |
Splunk | 24 hours |
These requests count toward the data source’s API rate limit, together with the requests used to fetch current SLI data (refer to Datadog’s documentation for details on its rate limiting). For example, with Datadog pulling 4 hours of history per request, replaying 30 days of data takes on the order of 180 requests (30 × 24 ÷ 4) for each replayed query. Exceeding your rate limit will cause delays in fetching SLI data and prolong the historical data retrieval process.
Replay troubleshooting
Datadog
If you exceed Datadog's API rate limit for SLOs with Replay, Replay will attempt to fetch data for 20 minutes. If all attempts fail, Nobl9 will create a standard SLO without historical data (that is, Nobl9 will collect data for such an SLO from the moment of its creation).
ServiceNow Cloud Observability—missing data
ServiceNow Cloud Observability does not distinguish between missing data and valid data points with a `0` value in the stream. In such cases, it considers these values equal and returns the `0` value.
New Relic
Before running Replay, check your New Relic data retention settings to ensure that all your historical data will be collected.
Prometheus
Since Prometheus is a self-hosted solution, if Replay keeps exceeding limits, remember to increase the limits on your side.
Incorrect source configuration
If you misconfigure your source (for example, incorrect credentials or an incorrect query), Replay will attempt to fetch data for 20 minutes. When the last attempt is unsuccessful, the Replay job will fail. As a result, you will see the `No data for this time period` error in the UI:

If you enter an incorrect query for Datadog, Replay will fail immediately, and you will see the `No data for this time period` error in the UI.
Data loading time
Loading historical data in the Replay beta shouldn't take more than two hours, even for extended historical data retrieval periods. If your Replay process takes longer, contact Nobl9 directly.