Azure Monitor managed service for Prometheus

Azure Monitor managed service for Prometheus is part of Azure Monitor Metrics. It lets you collect Prometheus metrics and analyze them with Azure Monitor tools.

Integration with Nobl9 lets you collect metrics from Azure Monitor managed service for Prometheus and create SLOs based on them.

Azure Monitor managed service for Prometheus parameters and supported features in Nobl9
General support:
Release channel: Beta
Connection method: Agent, Direct
Replay and SLI Analyzer: Historical data limit 30 days
Event logs: Supported
Query checker: Not supported
Query parameters retrieval: Supported
Timestamp cache persistence: Supported

Query parameters:
Query interval: 1 min
Query delay: 0
Jitter: 15 sec
Timeout: 30 sec
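
If the default query delay of 0 does not fit your environment (for example, when metrics arrive late), you can raise it in the data source definition. Below is a minimal sketch of an Agent definition with an overridden query delay; the queryDelay block follows the generic Nobl9 data source spec, while the azurePrometheus field names and all values shown are illustrative assumptions to verify against the Azure Monitor managed service for Prometheus agent/direct reference:

# Sketch of an Agent definition for this data source.
# Field names under azurePrometheus and all values are illustrative placeholders.
apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: azure-prometheus        # referenced by metricSource.name in the SLO
  project: default
spec:
  releaseChannel: beta          # this integration is available in the Beta release channel
  queryDelay:
    unit: Minute
    value: 2                    # pull data with an extra 2-minute delay instead of the default 0
  azurePrometheus:
    url: https://example-workspace.eastus.prometheus.monitor.azure.com   # hypothetical workspace query endpoint
    tenantId: 00000000-0000-0000-0000-000000000000                       # hypothetical Azure tenant ID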

Agent details and minimum required versions for supported features:
Plugin name: n9prometheus
Query delay environment variable: PROM_QUERY_DELAY
Replay and SLI Analyzer: 0.78.0-beta
Query parameters retrieval: 0.78.0-beta
Timestamp cache persistence: 0.78.0-beta
Custom HTTP headers: 0.83.0-beta
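
If you deploy the agent yourself, the query delay can also be set through the PROM_QUERY_DELAY environment variable listed above. A minimal Kubernetes Deployment excerpt is sketched below, assuming a duration-style value such as "2m"; the exact value format and image tag are assumptions to verify against your agent deployment template:

# Hypothetical excerpt of the Nobl9 agent container spec
containers:
  - name: nobl9-agent
    image: nobl9/agent:0.83.0-beta        # version with custom HTTP header support, per the table above
    env:
      - name: PROM_QUERY_DELAY
        value: "2m"                       # assumed duration format for a 2-minute query delay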

Additional notes:
Support for Prometheus metrics

Creating SLOs with Azure Monitor managed service for Prometheus

Nobl9 integration with Azure Monitor managed service for Prometheus supports Prometheus metrics.

You can create SLOs based on Azure Monitor managed service for Prometheus using the Nobl9 Terraform provider or applying a YAML definition with sloctl.

Nobl9 Web

Follow the instructions below to create your SLOs with Azure Monitor managed service for Prometheus on the Nobl9 Web:

  1. Navigate to Service Level Objectives.

  2. Click the + button to create a new SLO.

Step 1: Select the service the SLO will be associated with.

Step 2:

  1. Select your Azure Monitor managed service for Prometheus data source.
  2. Configure Replay: set the Period for historical data retrieval.
    It can be 0 or a positive integer up to 30.
  3. Specify Metric and enter the PromQL query:

The threshold metric evaluates a single time series against a threshold value you set.

  4. Enter the query. For example: sum(rate(prometheus_http_requests_total{code=~"^2.*"}[1h])) (see the YAML fragment below).
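
For reference, the same example query in a YAML SLO definition (see the sloctl section below) goes under the azurePrometheus.promql field of a rawMetric objective:

rawMetric:
  query:
    azurePrometheus:
      promql: sum(rate(prometheus_http_requests_total{code=~"^2.*"}[1h]))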

Step 3: Define a Time Window for your SLO. A YAML sketch follows the list below.

  • Rolling time windows are better for tracking the recent user experience of a service.
  • Calendar-aligned windows are best suited for SLOs that are intended to map to business metrics measured on a calendar-aligned basis, such as every calendar month or every quarter.
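
In a YAML definition these options map to the timeWindows section. The calendar-aligned variant appears in the full sample in the sloctl section below; a rolling 28-day window would look roughly like this:

timeWindows:
  - unit: Day
    count: 28
    isRolling: true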

Step 4: Specify the Error Budget Calculation Method and your Objective(s).

  • Occurrences method counts good attempts against the count of total attempts.
  • Time Slices method measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
  • You can define up to 12 objectives for an SLO.

See the use case example and the SLO calculations guide for more information on the error budget calculation methods.
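
To make the difference concrete, here is an illustrative calculation (the numbers are hypothetical): with a 95% target, the Occurrences method over 1,000,000 total requests leaves an error budget of 50,000 failed requests, while the Time Slices method over a 28-day window (40,320 minutes) leaves a budget of 2,016 bad minutes.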

Step 5: Add the Display name, Name, and other settings for your SLO:

  • Set notification on data, if this option is available for your data source.
    When activated, Nobl9 notifies you if your SLO hasn't received data or received incomplete data for more than 15 minutes.
  • Add alert policies, labels, and links, if required.
    You can add up to 20 links per SLO.

Click Create SLO.

sloctl

Azure Monitor managed service for Prometheus is case-insensitive.
Refer to the YAML SLO reference for details.

Sample Azure Monitor managed service for Prometheus threshold SLO
apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: api-server-slo
  displayName: API Server SLO
  project: default
  labels:
    area:
      - latency
      - slow-check
    env:
      - prod
      - dev
    region:
      - us
      - eu
    team:
      - green
      - sales
  annotations:
    area: latency
    env: prod
    region: us
    team: sales
spec:
  description: Example Azure Prometheus SLO
  indicator:
    metricSource:
      name: azure-prometheus
      project: default
      kind: Agent
  budgetingMethod: Occurrences
  objectives:
    - displayName: Good response (200)
      value: 200
      name: ok
      target: 0.95
      rawMetric:
        query:
          azurePrometheus:
            promql: >-
              sum((rate(container_cpu_usage_seconds_total{container!="POD",container!=""}[30m])
              - on (namespace,pod,container) group_left avg by
              (namespace,pod,container)(kube_pod_container_resource_requests{resource="cpu"}))
              * -1 >0)
      op: lte
      primary: true
  service: api-server
  timeWindows:
    - unit: Month
      count: 1
      isRolling: false
      calendar:
        startTime: 2022-12-01T00:00:00.000Z
        timeZone: UTC
  alertPolicies:
    - fast-burn-5x-for-last-10m
  attachments:
    - url: https://docs.nobl9.com
      displayName: Nobl9 Documentation
  anomalyConfig:
    noData:
      alertMethods:
        - name: slack-notification
          project: default
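
Once saved to a file, the definition can be applied with sloctl; the file name below is just a placeholder:

sloctl apply -f azure-prometheus-slo.yaml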

Querying the Azure Monitor managed service for Prometheus API

The Nobl9 agent uses the standard Prometheus HTTP API parameters and pulls data from the Prometheus-compatible query endpoint at a one-minute interval.
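
Under the hood, this corresponds to standard Prometheus HTTP API calls such as /api/v1/query_range. The sketch below shows the kind of request involved, with a hypothetical workspace endpoint and token; in practice the agent builds these requests and handles authentication for you:

# Illustrative query_range call against a Prometheus-compatible endpoint
curl -G "https://example-workspace.eastus.prometheus.monitor.azure.com/api/v1/query_range" \
  -H "Authorization: Bearer $AZURE_AD_TOKEN" \
  --data-urlencode 'query=sum(rate(prometheus_http_requests_total{code=~"^2.*"}[1h]))' \
  --data-urlencode "start=2024-01-01T00:00:00Z" \
  --data-urlencode "end=2024-01-01T01:00:00Z" \
  --data-urlencode "step=60s"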

For a more in-depth look, consult additional resources: