
Create an alert policy

See how to create an alert policy.
Tired of reading? Check out the video tutorial!

Once you have created an alert method, you can configure an alert policy that expresses a set of conditions you want to track or monitor. The conditions of an alert policy define what is monitored and when an alert is activated: when the performance of your service declines, Nobl9 sends a notification to your predefined channel.

Configuration

Follow these steps to create an alert policy in the Nobl9 UI (a YAML sketch of the equivalent configuration follows the procedure):

  1. Go to the Alerts tab.
  2. Click the button to create a new alert policy.
  3. In step 1 of the Alert policy wizard, define your alerting conditions and a cooldown period.

    You can set a maximum of three alerting conditions. A defined alerting condition monitors the behavior and volatility of a data source. Select one or more of the boxes and choose your parameters:


    • The Remaining error budget would be exhausted in the near or distant future. In this condition, the exhaustion time prediction becomes more sensitive as your remaining budget decreases. Once your SLO has no error budget left, even the slightest amount of burn will trigger an alert. The remaining error budget is the amount left from the error budget set in the SLO.
    • The Entire error budget would be exhausted in the near or distant future. This prediction is based on the allocation of your entire error budget and depends only on the current burn rate. Use it to define alerts based on time rather than the burn rate function and to avoid the remaining budget value impacting the prediction.
    • The Average error budget burn rate is greater than or equal to the threshold and lasts for some period. This alerting condition helps catch burn rate spikes independently of the burned budget.
    • The Remaining error budget is below the threshold. This is the most straightforward configuration: it alerts you when you reach a specific level of error budget, regardless of how quickly or slowly you reach it.

    Define a cooldown period

    The cooldown period prevents Nobl9 from sending too many alerts. It is an interval measured from the last timestamp at which all alert policy conditions were satisfied. Each SLO objective triggers its own alerts and keeps its own cooldown period. For more information, refer to the Nobl9 documentation.


    • The cooldown period value is mandatory and must be an integer greater than or equal to five minutes.
    • The default value is five minutes.
    • You can express it in one of three units: hours, minutes, or seconds.


  4. In step 2 of the Alert policy wizard, define the alert policy attributes:

    • Select a Project, then enter a Display name (optional) and a Name for the alert (this field is mandatory and will be filled in automatically if you provide a display name).
    • Enter a Description (optional).
    • Set the alert Severity to one of the following:
      • High: A critical incident with a very high impact.
      • Medium: A major incident with a significant impact.
      • Low: A minor incident with low impact.

    • Select or add Labels.
      Labels have a specific format and must conform to the following rules:
      • key: value format
      • key can contain only lowercase alphanumeric characters, underscores, and dashes; must start with a letter and end with an alphanumeric character; maximum length 63 characters
      • value can contain Unicode characters; maximum length 200 characters
      • Maximum of 20 labels attached
    • The added labels will be sent along with the alert notification when the policy’s conditions are met.



  5. In step 3 of the Alert policy wizard, select the alert method.

    Select the previously created alert method you'd like to use and choose the relevant integration from the list.



  6. Click Create Alert Policy.
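
The wizard produces an alert policy object that can also be kept as code. Below is a minimal sketch of what such a definition could look like in Nobl9's YAML format; the names used here (budget-is-burning-fast, my-slack-notification, default) are placeholders, and exact field names can differ between versions, so verify them against the Nobl9 YAML guide before use.

  # Hypothetical alert policy roughly matching the wizard settings above
  apiVersion: n9/v1alpha
  kind: AlertPolicy
  metadata:
    name: budget-is-burning-fast          # mandatory name
    displayName: Budget is burning fast   # optional display name
    project: default                      # project selected in step 2
    labels:
      team:                               # example label in key: value format
        - platform
  spec:
    description: Fires when the error budget burns too fast
    severity: Medium                      # High, Medium, or Low
    coolDown: 5m                          # cooldown period, minimum 5 minutes
    conditions:                           # up to three conditions
      - measurement: averageBurnRate      # burn rate >= threshold...
        value: 3
        lastsFor: 15m                     # ...lasting for 15 minutes
      - measurement: timeToBurnBudget     # remaining budget would be exhausted...
        op: lt
        value: 72h                        # ...in less than 72 hours
    alertMethods:
      - metadata:
          name: my-slack-notification     # previously created alert method
          project: default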

For more information on the alert policy and alert method lifecycle, refer to the Alert methods documentation.
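
If you keep your Nobl9 configuration in files, a definition like the sketch above can be applied and listed with sloctl. The file name below is an example, and the exact resource name accepted by sloctl get may vary, so check sloctl get --help:

  sloctl apply -f alert-policy.yaml     # create or update the alert policy
  sloctl get alertpolicies -p default   # list alert policies in the project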

Check out the configuration video:

Video 1: Creating an alert policy
Good job! You'll now receive a notification on your predefined channel whenever something goes wrong.