Google Cloud Monitoring
Google Cloud Monitoring (GCM) provides visibility into the performance, uptime, and overall health of cloud-powered applications. It collects metrics, events, and metadata from Google Cloud, hosted uptime probes, and application instrumentation.
Authentication
Google Cloud Monitoring authentication requires the user's credentials to be entered in Nobl9. Users can retrieve their authentication credentials from the Google Cloud Platform (GCP) in the form of a Service account key file. For details on how to get your Service account key file, refer to the Getting Started with Authentication | Google Cloud documentation.
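If you prefer the command line, the following is a minimal sketch of creating such a key with the gcloud CLI. The PROJECT_ID value, the nobl9-gcm-reader account name, and the roles/monitoring.viewer role are assumptions you would adapt to your environment.

```bash
# A minimal sketch, assuming the gcloud CLI is installed and authenticated.
# PROJECT_ID and nobl9-gcm-reader are placeholders; roles/monitoring.viewer
# is an assumed least-privilege choice for reading metrics.

# Create a dedicated service account for Nobl9.
gcloud iam service-accounts create nobl9-gcm-reader \
  --project=PROJECT_ID \
  --display-name="Nobl9 GCM reader"

# Grant it read-only access to Cloud Monitoring.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:nobl9-gcm-reader@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/monitoring.viewer"

# Download the Service account key file to upload in Nobl9 (direct) or pass to the agent.
gcloud iam service-accounts keys create credentials.json \
  --iam-account=nobl9-gcm-reader@PROJECT_ID.iam.gserviceaccount.com
```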
For the direct connection to GCM, the contents of the downloaded Service account key file must be uploaded into the Nobl9 UI. This ensures direct integration with the GCM APIs to retrieve the data, leveraging the SaaS-to-SaaS infrastructure in Nobl9.
For the agent connection, you need to copy and paste your credentials from your credentials.json file and pass those when invoking the agent. The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). For more information, refer to the Deploying the Google Cloud Monitoring agent section.
Your user account must have access to an OAuth scope that allows reading Cloud Monitoring data, such as https://www.googleapis.com/auth/monitoring.read or https://www.googleapis.com/auth/cloud-platform.
Scope of support
- Google Cloud Monitoring metrics
- Custom query delay for data retrieval
- Query parameters retrieval with sloctl
- SLI Analyzer
- Replay
- Event logs for direct connection method
Adding Google Cloud Monitoring as a data source
To ensure data transmission between Nobl9 and Google Cloud Monitoring, it may be necessary to list Nobl9 IP addresses as trusted.
- 18.159.114.21
- 18.158.132.186
- 3.64.154.26
You can add the Google Cloud Monitoring data source using the direct or agent connection methods.
Direct connection method
A direct connection to Google Cloud Monitoring requires users to enter their credentials, which Nobl9 stores securely.
Nobl9 Web
To set up this type of connection:
- Navigate to Integrations > Sources.
- Click the button to add a new data source.
- Click the Google Cloud Monitoring icon.
- Choose Direct.
- Select one of the following Release Channels:
  - The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
  - The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Upload your Service Account Key File to authenticate with GCP (mandatory).
  Retrieve your authentication credentials from the Google Cloud Platform. The file must be in JSON format. For more information, refer to the Getting Started with Authentication | Google Cloud documentation or the Authentication section above.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  - The default Query delay for the Google Cloud Monitoring integration is 2 minutes.
  - Note that changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Enter a Maximum Period for Historical Data Retrieval.
  - This value defines how far back in the past your data will be retrieved when replaying your SLO based on this data source.
  - The maximum period value depends on the data source. Find the maximum value for your data source.
  - A greater period can extend the loading time when creating an SLO.
  - The value must be a positive integer.
- Enter a Default Period for Historical Data Retrieval.
  - It is used by SLOs connected to this data source.
  - The value must be a positive integer or 0.
  - By default, this value is set to 0. When you set it to a value greater than 0, you will create SLOs with Replay.
- Click Add Data Source.
sloctl
The YAML for setting up a direct connection to Google Cloud Monitoring looks like this:
apiVersion: n9/v1alpha
kind: Direct
metadata:
  name: gcm-direct
  project: default
spec:
  gcm:
    serviceAccountKey: |-
      {
        # secret, copy and paste your credentials from the credentials.json file
      }
  sourceOf:
    - Metrics
  releaseChannel: beta
  queryDelay:
    unit: Minute
    value: 720
  logCollectionEnabled: false
  historicalDataRetrieval:
    maxDuration:
      value: 30
      unit: Day
    defaultDuration:
      value: 7
      unit: Day
Field | Type | Description |
---|---|---|
queryDelay.unit mandatory | enum | Specifies the unit for the query delay. Possible values: Second | Minute. • Check the query delay documentation for the default unit of query delay for each source. |
queryDelay.value mandatory | numeric | Specifies the value for the query delay. • Must be a number less than 1440 minutes (24 hours). • Check the query delay documentation for the default value of query delay for each source. |
logCollectionEnabled optional | boolean | Defaults to false. Set to true if you'd like your direct to collect event logs. This is beta functionality available only through the direct release channel. Reach out to support@nobl9.com to activate it. |
releaseChannel mandatory | enum | Specifies the release channel. Accepted values: beta | stable. |
Source-specific fields | | |
gcm.serviceAccountKey mandatory | string | Copy and paste your credentials from the credentials.json file. See the authentication section for more details. |
Replay-related fields | | |
historicalDataRetrieval optional | n/a | Optional structure for Replay configuration. • Use only with supported sources. • If omitted, Nobl9 uses the default values of value: 0 and unit: Day for maxDuration and defaultDuration. |
maxDuration.value optional | numeric | Specifies the maximum duration for historical data retrieval. Must be an integer ≥ 0. See the Replay documentation for the maximum duration per data source. |
maxDuration.unit optional | enum | Specifies the unit for the maximum duration of historical data retrieval. Accepted values: Minute | Hour | Day. |
defaultDuration.value optional | numeric | Specifies the default duration for historical data retrieval. Must be an integer ≥ 0 and ≤ maxDuration. |
defaultDuration.unit optional | enum | Specifies the unit for the default duration of historical data retrieval. Accepted values: Minute | Hour | Day. |
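Once you save a Direct definition like the example above to a file, you can apply it with sloctl. A minimal sketch (the gcm-direct.yaml file name is an assumption):

```bash
# Apply the Direct resource defined above; the file name is arbitrary.
sloctl apply -f gcm-direct.yaml

# Read the resource back to confirm it was created (scoped to the "default" project here).
sloctl get directs -p default
```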
Agent connection method
Nobl9 Web
Follow the instructions below to set up an agent connection.
- Navigate to Integrations > Sources.
- Click the button to add a new data source.
- Click the Google Cloud Monitoring icon.
- Choose Agent.
- Select one of the following Release Channels:
  - The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
  - The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  - The default Query delay for the Google Cloud Monitoring integration is 2 minutes.
  - Note that changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Enter a Maximum Period for Historical Data Retrieval.
  - This value defines how far back in the past your data will be retrieved when replaying your SLO based on this data source.
  - The maximum period value depends on the data source. Find the maximum value for your data source.
  - A greater period can extend the loading time when creating an SLO.
  - The value must be a positive integer.
- Enter a Default Period for Historical Data Retrieval.
  - It is used by SLOs connected to this data source.
  - The value must be a positive integer or 0.
  - By default, this value is set to 0. When you set it to a value greater than 0, you will create SLOs with Replay.
- Click Add Data Source.
sloctl
The YAML for setting up an agent connection to Google Cloud Monitoring looks like this:
apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: gcm
  displayName: Google Cloud Monitoring # optional
spec:
  description: GCM agent # optional
  sourceOf:
    - Metrics
  releaseChannel: beta # string, one of: beta || stable
  queryDelay:
    unit: Minute # string, one of: Second || Minute
    value: 720 # numeric, must be a number less than 1440 minutes (24 hours)
  gcm: {}
Field | Type | Description |
---|---|---|
queryDelay.unit mandatory | enum | Specifies the unit for the query delay. Possible values: Second | Minute. • Check the query delay documentation for the default unit of query delay for each source. |
queryDelay.value mandatory | numeric | Specifies the value for the query delay. • Must be a number less than 1440 minutes (24 hours). • Check the query delay documentation for the default value of query delay for each source. |
releaseChannel mandatory | enum | Specifies the release channel. Accepted values: beta | stable. |
You can deploy only one agent per YAML file when using the sloctl apply command.
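For example, a minimal sketch of applying the agent definition above and reading it back (the gcm-agent.yaml file name is an assumption):

```bash
# Apply the Agent resource defined above.
sloctl apply -f gcm-agent.yaml

# Confirm the agent resource exists; deployment instructions follow below.
sloctl get agents
```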
Agent deployment
When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the agent. Both of these are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials.
The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). As such, the N9_GCP_CREDENTIALS_PATH environment variable has been changed to GOOGLE_APPLICATION_CREDENTIALS. For more information, refer to the Getting started with authentication | Google Cloud documentation.
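As a hedged sketch of that Workload Identity setup (assuming a GKE cluster with Workload Identity enabled; PROJECT_ID, GSA_NAME, the default namespace, and the nobl9-agent Kubernetes service account are placeholder assumptions), the binding typically looks like this. With it in place, you can drop the GOOGLE_APPLICATION_CREDENTIALS variable from the deployment below.

```bash
# A hedged sketch, assuming a GKE cluster with Workload Identity enabled.
# PROJECT_ID, GSA_NAME, the "default" namespace, and the "nobl9-agent"
# Kubernetes service account are placeholders to adapt to your setup.

# Allow the Kubernetes service account to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[default/nobl9-agent]"

# Annotate the Kubernetes service account used by the agent Deployment.
kubectl annotate serviceaccount nobl9-agent \
  --namespace default \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```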
- Kubernetes
- Docker
If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the agent. Remember to swap in your credentials, for example, copy and paste your credentials from the Service account key credentials.json file. It will look something like this:
# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.
apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-nobl9-dogfood-default-gcm-test
  namespace: default
type: Opaque
stringData:
  client_id: "client_id"         # exemplary value, replace with your client ID
  client_secret: "client_secret" # exemplary value, replace with your client secret
  # Paste the contents of your Service account key (credentials.json) file below.
  credentials.json: |-
    CREDENTIALS
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-nobl9-dogfood-default-gcm-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: gcm-test
      nobl9-agent-project: default
  template:
    metadata:
      labels:
        nobl9-agent-name: gcm-test
        nobl9-agent-project: default
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:0.80.0
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-nobl9-dogfood-default-gcm-test
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-nobl9-dogfood-default-gcm-test
            # The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
            # The 9090 is the default value and can be changed.
            # If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"
            # To use Workload Identity in a Kubernetes cluster in Google Cloud Platform,
            # comment out or delete the GOOGLE_APPLICATION_CREDENTIALS environment variable
            # and follow the instructions described here https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/var/gcp/credentials.json"
            # N9_ALLOWED_URLS is an optional safety parameter that limits the URLs that an Agent is able to query
            # for metrics. URLs defined in the Nobl9 app are prefix-compared against the N9_ALLOWED_URLS list of
            # comma-separated URLs.
            # - name: N9_ALLOWED_URLS
            #   value: "http://172.16.0.2/api/v1/query,http://172.16.0.3"
          volumeMounts:
            - name: gcp-credentials
              mountPath: "/var/gcp"
              readOnly: true
      volumes:
        - name: gcp-credentials
          secret:
            secretName: nobl9-agent-nobl9-dogfood-default-gcm-test
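A minimal sketch of applying and verifying the manifest above, assuming you saved it (with your credentials pasted in) as nobl9-agent-gcm.yaml:

```bash
# Apply the Secret and Deployment shown above.
kubectl apply -f nobl9-agent-gcm.yaml

# Check that the agent pod started, using the labels from the manifest.
kubectl -n default get pods -l nobl9-agent-name=gcm-test

# Inspect the agent logs to confirm it connects to Nobl9 and GCM.
kubectl -n default logs deployment/nobl9-agent-nobl9-dogfood-default-gcm-test
```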
If you use Docker, you can run the supplied Docker command to deploy the agent. Remember to replace PATH_TO_LOCAL_CREDENTIALS_FILE with the path to your local credentials.json file. It will look something like this:
# DISCLAIMER: This Docker command contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply command, and you will need to replace the placeholder values with your own values.
#
# The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
# The 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, omit the N9_METRICS_PORT variable.
docker run -d --restart on-failure --name nobl9-agent-nobl9-dev-gcm-gcm \
  -e N9_CLIENT_SECRET="CLIENT_SECRET" \
  -e N9_CLIENT_ID="CLIENT_ID" \
  -e N9_METRICS_PORT=9090 \
  -e GOOGLE_APPLICATION_CREDENTIALS=/var/gcp/credentials.json \
  -v PATH_TO_LOCAL_CREDENTIALS_FILE:/var/gcp/credentials.json \
  nobl9/agent:0.80.0
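Once the container is running, a quick way to confirm the agent started and authenticated is to tail its logs (container name taken from the command above):

```bash
# Follow the agent logs; look for successful connections to Nobl9 and GCM.
docker logs -f nobl9-agent-nobl9-dev-gcm-gcm
```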
Creating SLOs with Google Cloud Monitoring
Nobl9 Web
Follow the instructions below to create your SLOs with Google Cloud Monitoring in the Nobl9 UI:
- Navigate to Service Level Objectives.
- Click the button to create a new SLO.
- In step 1 of the SLO wizard, select the Service the SLO will be associated with.
- In step 2, select Google Cloud Monitoring as the data source for your SLO.
- Enter a Project ID.
  - The Project ID is a unique identifier of your Google Cloud project, which can be composed of 6-30 lowercase alphanumeric characters and dashes (for example, my-sample-project-191923). For more information, refer to the Creating and Managing Projects | Google Cloud documentation.
- Specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).
  - Choose the Data Count Method for your ratio metric:
    - Non-incremental: counts incoming metric values one by one, so the resulting SLO graph is spike-shaped.
    - Incremental: counts the incoming metric values incrementally, adding every next value to previous values. It results in a constantly increasing SLO graph.
- Specify Query or a Good Query and Total Query for the metric you selected. Your query must follow the Monitoring Query Language syntax.
  - Each query must return only one metric and one time series.
  - Since Nobl9 asks for data every 1 minute, we recommend setting the period for the align delta function to 1 minute, i.e., align delta(1m). As a result, Nobl9 receives the difference in a given minute and records it as an SLI.
  - Nobl9 processes a single dataset at a time and doesn't aggregate GCM metrics. Make sure your group_by aggregator points to the single dataset, exactly the one you want to observe. You can find the available groups on your Google Cloud Observability Monitoring dashboard > Metrics explorer.

    "fetch consumed_api
    | metric 'serviceruntime.googleapis.com/api/request_latencies'
    | filter (resource.service == 'monitoring.googleapis.com')
    | align delta(1m)
    | every 1m
    | group_by [resource.service],
      [value_request_latencies_mean: mean(value.request_latencies)]"

  When you create ratio SLOs (countMetrics), keep in mind that the values resulting from that query for both good and total:
  - Must be positive.
  - While we recommend using integers, fractions are also acceptable.
  - If using fractions, we recommend them to be larger than 1e-4 = 0.0001.
  - Shouldn't be larger than 1e+20.
- In step 3 of the SLO wizard, define a Time Window for the SLO.
  - Rolling time windows are better for tracking the recent user experience of a service.
  - Calendar-aligned windows are best suited for SLOs that are intended to map to business metrics measured on a calendar-aligned basis, such as every calendar month or every quarter.
- In step 4, specify the Error Budget Calculation Method and your Objective(s).
  - The Occurrences method counts good attempts against the count of total attempts.
  - The Time Slices method measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
  - You can define up to 12 objectives for an SLO.
  See the use case example and the SLO calculations guide for more information on the error budget calculation methods.
- In step 5, add the Display name, Name, and other settings for your SLO:
  - Create a composite SLO.
  - Set notification on data, if this option is available for your data source. When activated, Nobl9 notifies you if your SLO hasn't received data or received incomplete data for more than 15 minutes.
  - Add alert policies, labels, and links, if required. You can add up to 20 links per SLO.
- Click Create SLO.
sloctl
- rawMetric
- countMetric
Here's an example of Google Cloud Monitoring using rawMetric (threshold metric):
apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: gcm-latency-mean-threshold
  project: my-project
spec:
  service: my-service
  indicator:
    metricSource:
      name: gcm
      project: my-project
    rawMetric:
      gcm:
        projectId: my-project-id
        query: "fetch consumed_api
          | metric 'serviceruntime.googleapis.com/api/request_latencies'
          | filter (resource.service == 'monitoring.googleapis.com')
          | align delta(1m)
          | every 1m
          | group_by [resource.service],
          [value_request_latencies_mean: mean(value.request_latencies)]"
  timeWindows:
    - unit: Day
      count: 1
      calendar:
        startTime: 2022-01-21 12:30:00 # date with time in 24h format
        timeZone: America/New_York # name as in IANA Time Zone Database
  budgetingMethod: Occurrences
  objectives:
    - displayName: Healthy
      value: 40
      op: lte
      target: 0.99
    - displayName: Slower
      value: 41
      op: gte
      target: 0.98
    - displayName: Critical
      value: 100
      op: gte
      target: 0.95
Here's an example of Google Cloud Monitoring using countMetric (ratio metric):
apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: gcm-response-codes-ratio
  project: my-project
spec:
  service: my-service
  indicator:
    metricSource:
      name: gcm
      project: my-project
  timeWindows:
    - unit: Week
      count: 1
      calendar:
        startTime: 2022-01-21 12:30:00 # date with time in 24h format
        timeZone: America/New_York # name as in IANA Time Zone Database
  budgetingMethod: Occurrences
  objectives:
    - displayName: Acceptable
      value: 0.95
      target: 0.9
      countMetrics:
        incremental: false
        good:
          gcm:
            projectId: my-project-id
            query: "fetch consumed_api
              | metric 'serviceruntime.googleapis.com/api/request_count'
              | filter
                (resource.service == 'monitoring.googleapis.com')
                && (metric.response_code == '200')
              | align rate(1m)
              | every 1m
              | group_by [resource.service],
              [value_request_count_aggregate: aggregate(value.request_count)]"
        total:
          gcm:
            projectId: my-project-id
            query: "fetch consumed_api
              | metric 'serviceruntime.googleapis.com/api/request_count'
              | filter
                (resource.service == 'monitoring.googleapis.com')
              | align rate(1m)
              | every 1m
              | group_by [resource.service],
              [value_request_count_aggregate: aggregate(value.request_count)]"
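Either SLO definition can then be applied with sloctl. A minimal sketch (the gcm-slo.yaml file name is an assumption):

```bash
# Apply the SLO definition saved from one of the examples above.
sloctl apply -f gcm-slo.yaml

# List SLOs in the project used in the examples.
sloctl get slos -p my-project
```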
Expected query output
Nobl9 accepts a single time series only. Therefore, at each point in the time series, the GCM query must return a single value. When your query includes multiple tables, for example, when using ident, make sure it returns a single value. You can test your query result with the projects.timeSeries.query method:
{
  "timeSeriesDescriptor": {
    "pointDescriptors": [
      {
        "key": "good_total_ratio",
        "valueType": "DOUBLE",
        "metricKind": "GAUGE",
        "unit": "1"
      }
    ]
  },
  "timeSeriesData": [
    {
      "pointData": [
        {
          "values": [
            {
              "doubleValue": 0.98773006134969321
            }
          ],
          "timeInterval": {
            "startTime": "2024-06-06T08:00:03.532075Z",
            "endTime": "2024-06-06T08:00:03.532075Z"
          }
        }
      ]
    }
  ]
}
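As a hedged sketch of producing output like the above with the REST API (assuming gcloud is authenticated, and that PROJECT_ID and the MQL query are placeholders you replace with your own values):

```bash
# Write the MQL query into a request body; replace the query with your own.
cat > query.json <<'EOF'
{
  "query": "fetch consumed_api | metric 'serviceruntime.googleapis.com/api/request_count' | align rate(1m) | every 1m | group_by [], [value: aggregate(value.request_count)]"
}
EOF

# Call projects.timeSeries.query for your project; a single time series should come back.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @query.json \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries:query"
```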
Querying the Google Cloud Monitoring server
Nobl9 queries the Google Cloud Monitoring server using the projects.timeSeries.query API every 60 seconds. The number of data points returned depends on the amount of data Google Cloud Monitoring can return.
Google Cloud Monitoring API rate limits
To verify the limits on API usage, go to the Quotas dashboard in the GCM UI. For a given API, click the All Quotas button to see your quota.