Alerting on SLOs - Overview

When an incident is triggered, Nobl9 enables you to send an alert to a notification engine or tool (for example, PagerDuty). Nobl9 also supports integration with any web endpoint through webhooks, where you define the endpoint and the parameters to pass.
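As a sketch of what a webhook-based Alert Method can look like in Nobl9 YAML (the URL, template, and names below are illustrative placeholders, not values from this document):

```yaml
# Illustrative Alert Method that posts alert notifications to a custom endpoint.
# The project, name, URL, and template body are placeholders.
apiVersion: n9/v1alpha
kind: AlertMethod
metadata:
  name: my-webhook
  project: alerting-test
spec:
  description: Sends alert notifications to a custom web endpoint
  webhook:
    url: https://example.com/notify
    template: |-
      {
        "message": "Alert for $slo_name in $service_name",
        "severity": "$severity"
      }
```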

Alerting on SLOs allows you to react immediately to incidents that matter from the perspective of your Service's user experience (for example, latency, errors, correctness, and other SLO-related concerns). Alerts improve your visibility into what's going on in your system and enable better contributing-factor analysis when something goes wrong.

Here are important things to keep in mind while setting up your Alerts:

  • Attention and energy are both limited resources. SLO Alerts must correspond to real and urgent issues in your system.

  • To improve your monitoring, Alerts must be intentional (that is, well-defined) and need to evolve together with your system.

tip

With the Nobl9 system annotations feature, an annotation is added to the SLO Objective charts by default each time an alert is triggered or resolved (annotations are displayed regardless of whether an Alert Policy is silenced). For more detailed information, refer to the SLO Annotations documentation.

system-annotation-example
Image 1: System annotation example
caution

Since January 31, 2023, Nobl9 no longer supports Lightstep Incident Response as an alert method.

Alert Policy & Alert Method Lifecycle​

Cooldown period​

You can configure a cooldown period for your Alert Policies. Follow the YAML Guide to see how to set up the cooldown period through YAML.
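As a sketch, a cooldown period is set in the Alert Policy spec (the names, values, and the referenced alert method below are illustrative, not taken from this document):

```yaml
# Illustrative Alert Policy with a 10-minute cooldown period.
# Names, condition values, and the alert method are placeholders.
apiVersion: n9/v1alpha
kind: AlertPolicy
metadata:
  name: burn-rate-is-4x-immediately
  project: alerting-test
spec:
  description: Alert when the burn rate is 4x or higher
  severity: Medium
  coolDown: 10m
  conditions:
  - measurement: averageBurnRate
    value: 4
    lastsFor: 0m
  alertMethods:
  - metadata:
      name: my-pagerduty
      project: alerting-test
```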

What is a Cooldown Period?​

The cooldown period was designed to prevent sending too many alerts. It is an interval measured from the last timestamp at which all Alert Policy conditions were satisfied. Each SLO objective triggers its own alerts and keeps its own cooldown period.

Assumptions:

  • When the cooldown period is satisfied (that is, none of the Alert Policy conditions are met during its defined duration), the Alert event is resolved.

  • A new Alert is not triggered until the cooldown period for the previous one is satisfied.

  • The cooldown period may not be satisfied at a given time; no alerts are triggered in that state. If, over time, all the alert conditions are satisfied again, the cooldown period is reset and is calculated from the time when any of the conditions stopped being satisfied.
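The assumptions above can be sketched as a toy state machine for a single objective. This is illustrative only: a simplification of behavior that Nobl9 evaluates server-side, not the actual implementation.

```python
from datetime import datetime, timedelta

class AlertState:
    """Toy model of a per-objective alert with a cooldown period (illustrative)."""

    def __init__(self, cooldown: timedelta):
        self.cooldown = cooldown
        self.triggered = False
        # Timestamp when any condition stopped being satisfied; None while all are met.
        self.conditions_last_unmet = None

    def evaluate(self, now: datetime, conditions_met: bool) -> str:
        if conditions_met:
            # Conditions are satisfied again: the cooldown clock is reset.
            self.conditions_last_unmet = None
            if not self.triggered:
                self.triggered = True
                return "triggered"
            return "still-triggered"
        # Conditions not met: start (or continue) counting down the cooldown.
        if self.conditions_last_unmet is None:
            self.conditions_last_unmet = now
        if self.triggered and now - self.conditions_last_unmet >= self.cooldown:
            self.triggered = False
            return "resolved"
        return "cooling-down" if self.triggered else "idle"
```

Stepping through a timeline shows the reset behavior: if the conditions become true again mid-cooldown, the countdown restarts from the next moment they stop being satisfied.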

The diagram below shows a simplified lifecycle of an Alert Policy with a defined cooldown period:

alerting-lifecycle-example
Image 2: Alerting lifecycle
tip

Check out the Alerting - Use Case in SLOcademy for a more complex alerting example.

Configuring Cooldown Period in the UI​

Refer to the Getting Started guide for details.

Alert Policy Statuses​

When an Alert Policy is in the Triggered state, no new alert can be triggered unless the alert is resolved or canceled.

Alert Policy statuses adhere to the following criteria:

  • An alert is resolved when any of the conditions stops being true AND the cooldown period has expired since that time.

  • An alert is canceled when the Alert Policy configuration has changed OR a new calendar window has started for calendar-aligned time window SLOs.

note

When an Alert event reaches the resolved status within the duration of an AlertSilence, Nobl9 sends you an all-clear notification. For more details, see Silencing Alerts.
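As a sketch of what an AlertSilence can look like in YAML (the names, times, and duration below are illustrative placeholders):

```yaml
# Illustrative AlertSilence that mutes one Alert Policy on one SLO for an hour.
# All names and timestamps are placeholders.
apiVersion: n9/v1alpha
kind: AlertSilence
metadata:
  name: silence-for-planned-deployment
  project: alerting-test
spec:
  description: Silence alerts during a planned deployment
  slo: prometheus-rolling-timeslices-threshold
  alertPolicy:
    name: burn-rate-is-4x-immediately
    project: alerting-test
  period:
    startTime: 2022-01-16T00:00:00Z
    duration: 1h
```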

Displaying Triggered Alerts in the UI​

If any of your SLOs triggers Alerts, Nobl9 displays this information on the main pane of the Service Health Dashboard, next to the SLO name in the SLO Grid view, and in the SLO Grid view tree:

alert status shd
Image 3: Triggered Alerts notification on the Service Health Dashboard
alert status grid view
Image 4: Triggered Alerts notification on the SLO Grid view
alert status tree view
Image 5: Triggered Alerts notification on the SLO Grid view (tree)

Retrieving Triggered Alerts in sloctl​

Using sloctl, you can retrieve information about triggered alerts, including when an alert stopped being valid. To do so, run the following command:

sloctl get alerts
info

For more details on the sloctl get alerts command, check the sloctl user guide.

Here's an example of a triggered Alert that hasn't been resolved yet:

apiVersion: n9/v1alpha
kind: Alert
metadata:
  name: 6fbc76bc-ff8a-40a2-8ac6-65d7d7a2686e
  project: alerting-test
spec:
  alertPolicy:
    name: burn-rate-is-4x-immediately
    project: alerting-test
  service:
    name: triggering-alerts-service
    project: alerting-test
  severity: Medium
  slo:
    name: prometheus-rolling-timeslices-threshold
    project: alerting-test
  status: Triggered
  thresholdValue: 950
  triggeredClockTime: "2022-01-16T00:28:05Z"

Alerts List​

The Alerts list on the SLO Grid view allows you to view Alert events related to your SLO. You can access the Alerts by opening the SLO Details view and clicking the Alerts tab at the top:

alert list
Image 6: Accessing Alerts list

Nobl9 limits the display to the 1,000 most recent Alert events. You can use filters to narrow down the results. You can filter by:

  • Alert statuses: triggered, resolved
  • SLO objective names
  • Alert Policy name
tip

Keep in mind that the filters are linked by the AND logical operator.

alert list filters
Image 7: Alerts list filters
caution

In rare situations, Nobl9 won't return some Alerts:

  • Nobl9 returns Alerts for existing objects only (Alert Policies, SLOs, Services, and Objectives). If you delete any of these objects, Nobl9 won't return Alerts for them in the Alerts list.
  • If you delete an SLO, Alert Policy, or Service and recreate it with the same name, Nobl9 won't return results for it.
  • If you unlink an Alert Policy from an SLO, Nobl9 won't return Alerts for it.

Alerts List and RBAC​

Nobl9 returns alerts triggered in a given project only (the alert's project is the same as its SLO's project). If you don't have permission to view SLOs in a given project, you won't see their Alerts.

tip

You can also use the sloctl get alerts command to get up to the 1,000 most recent Alert events and filter the results using flags.

Labels and Alert Methods​

Adding Labels to Alert Methods​

You can add one or more labels to an Alert Policy; they are sent along with the alert notification when the policy's conditions are met.
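As a sketch, labels live in the object's metadata as a map of keys to lists of values (the keys and values below are illustrative placeholders):

```yaml
# Illustrative metadata fragment: labels on an Alert Policy.
# Keys and values are placeholders; spec is omitted for brevity.
apiVersion: n9/v1alpha
kind: AlertPolicy
metadata:
  name: burn-rate-is-4x-immediately
  project: alerting-test
  labels:
    team:
    - sre
    region:
    - eu-central-1
```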

Other Relevant Resources​

For useful tips on how to get started with your first Alert, check Your First Alert Policy! Also see our Tips and Tricks.

If you describe infrastructure as code, you might also consider defining the Alert Methods with the same convention. You can find more details in our Terraform documentation.