Google BigQuery
Google BigQuery is a serverless data warehouse that facilitates scalable analysis over petabytes of data. It is a Platform as a Service that supports querying using ANSI SQL. The BigQuery integration with Nobl9 empowers users to turn their big data into valuable business insights.
Scope of support
- Query parameters retrieval with `sloctl`
- Event logs for the direct connection method
Authentication
BigQuery authentication requires the user's credentials to be entered in Nobl9. Users can retrieve their authentication credentials from the Google Cloud Platform as the Service account key file. For all necessary details on how to get the Service account key file, refer to the Getting Started with Authentication | BigQuery documentation.
For the direct connection, pass the contents of the downloaded Service account key file on the Nobl9 Web. This activates direct integration with the BigQuery APIs to retrieve data, leveraging the SaaS-to-SaaS infrastructure in Nobl9.
The agent connection requires the user to be granted BigQuery permissions. The minimal set of permissions required for the BigQuery agent connection is:
- `bigquery.datasets.get`
- `bigquery.jobs.create`
- `bigquery.jobs.list`
- `bigquery.models.getData`
- `bigquery.models.getMetadata`
- `bigquery.tables.getData`
The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) on GKE (Google Kubernetes Engine). For more information, refer to the Deploying BigQuery agent section.
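As a sketch, the permissions above can be bundled into a custom IAM role and granted to the service account the agent uses. The project ID, role ID, and service account name below are hypothetical placeholders:

```shell
# Create a custom role containing the minimal permissions (hypothetical project ID).
gcloud iam roles create nobl9BigQueryAgent --project=my-project \
  --title="Nobl9 BigQuery Agent" \
  --permissions=bigquery.datasets.get,bigquery.jobs.create,bigquery.jobs.list,bigquery.models.getData,bigquery.models.getMetadata,bigquery.tables.getData

# Grant the role to the service account used by the agent (hypothetical account name).
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:nobl9-agent@my-project.iam.gserviceaccount.com" \
  --role="projects/my-project/roles/nobl9BigQueryAgent"
```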
Adding Google BigQuery as a data source
To ensure data transmission between Nobl9 and Google BigQuery, it may be necessary to list Nobl9 IP addresses as trusted.
- 18.159.114.21
- 18.158.132.186
- 3.64.154.26
You can add the BigQuery data source using the direct or agent connection methods.
Direct connection method
Direct connection to BigQuery requires users to enter their credentials, which Nobl9 stores safely.
Nobl9 Web
To set up this type of connection:
- Navigate to Integrations > Sources.
- Click .
- Click the required Source icon.
- Choose Direct.
- Select one of the following Release Channels:
  - The `stable` channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a `beta` release. Use it to avoid crashes and other limitations.
  - The `beta` channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Upload the Service Account Key File to authenticate with Google Cloud.
The file needs to be in JSON format. Refer to the Authentication section for more details.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the `default` project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can contain only lowercase alphanumeric characters and dashes (for example, `my-project-1`). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  The default value for Query delay in the BigQuery integration is `0 seconds`.
  info: Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Click Add Data Source.
sloctl
The YAML for setting up a direct connection to BigQuery looks like this:
```yaml
apiVersion: n9/v1alpha
kind: Direct
metadata:
  name: bigquery-direct
  displayName: BigQuery direct
  project: bigquery-direct
spec:
  description: Direct integration with BigQuery
  sourceOf:
    - Metrics
  releaseChannel: beta
  queryDelay:
    unit: Minute
    value: 720
  logCollectionEnabled: false
  bigQuery:
    serviceAccountKey: |-
      {
        # secret, copy and paste your credentials from the credentials.json file
      }
```
| Field | Type | Description |
|---|---|---|
| `queryDelay.unit` *mandatory* | enum | Specifies the unit for the query delay. Possible values: `Second` \| `Minute`. Check the query delay documentation for the default unit of query delay for each source. |
| `queryDelay.value` *mandatory* | numeric | Specifies the value for the query delay. Must be a number less than 1440 minutes (24 hours). Check the query delay documentation for the default unit of query delay for each source. |
| `logCollectionEnabled` *optional* | boolean | Defaults to `false`. Set to `true` if you'd like your direct to collect event logs. Beta functionality available only through the direct release channel. Reach out to support@nobl9.com to activate it. |
| `releaseChannel` *mandatory* | enum | Specifies the release channel. Accepted values: `beta` \| `stable`. |
| **Source-specific fields** | | |
| `bigQuery.serviceAccountKey` *mandatory* | string | You must embed the Service Account Key File content to authenticate with Google Cloud. See the Authentication section for more details. |
Agent connection method
Nobl9 Web
Follow the instructions below to create your BigQuery agent connection.
- Navigate to Integrations > Sources.
- Click .
- Click the required Source icon.
- Choose Agent.
- Select one of the following Release Channels:
  - The `stable` channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a `beta` release. Use it to avoid crashes and other limitations.
  - The `beta` channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the `default` project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can contain only lowercase alphanumeric characters and dashes (for example, `my-project-1`). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
  The default value for Query delay in the BigQuery integration is `0 seconds`.
  info: Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Click Add Data Source.
sloctl
The YAML for setting up an agent connection to BigQuery looks like this:
```yaml
apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: bigquery
  displayName: BigQuery Agent
  project: default
spec:
  description: BigQuery description
  sourceOf:
    - Metrics
  queryDelay:
    unit: Minute
    value: 720
  releaseChannel: beta
  bigQuery: {}
```
| Field | Type | Description |
|---|---|---|
| `queryDelay.unit` *mandatory* | enum | Specifies the unit for the query delay. Possible values: `Second` \| `Minute`. Check the query delay documentation for the default unit of query delay for each source. |
| `queryDelay.value` *mandatory* | numeric | Specifies the value for the query delay. Must be a number less than 1440 minutes (24 hours). Check the query delay documentation for the default unit of query delay for each source. |
| `releaseChannel` *mandatory* | enum | Specifies the release channel. Accepted values: `beta` \| `stable`. |
You can deploy only one Agent per YAML file when using the `sloctl apply` command.
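For instance, if the agent definition above were saved to a file named `agent.yaml` (a hypothetical file name), it could be applied with:

```shell
sloctl apply -f agent.yaml
```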
Agent deployment
When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the Agent. Both of these are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials (for example, replace the `<CREDENTIALS>` comment with the contents of your `credentials.json` file encoded with base64).
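For example, the key file can be encoded on the command line with the standard `base64` tool. This is a sketch: the file created here is a placeholder standing in for your real downloaded key file, and `-w 0` (disable line wrapping) is specific to GNU coreutils:

```shell
# Create a placeholder key file for illustration; use your real downloaded file instead.
echo '{"type": "service_account"}' > credentials.json

# Emit the file content as a single base64 line, ready to paste into the Secret.
base64 -w 0 credentials.json
```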
The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) on GKE (Google Kubernetes Engine). As such, the `N9_GCP_CREDENTIALS_PATH` environment variable has been changed to `GOOGLE_APPLICATION_CREDENTIALS`. For more information, refer to the Getting started with authentication | Google Cloud documentation.
- Kubernetes
- Docker
If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the Agent. It will look something like this:
```yaml
# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.
apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-nobl9-dogfood-test-mybq
  namespace: default
type: Opaque
stringData:
  client_id: "unique_client_id"
  client_secret: "unique_client_secret"
data:
  credentials.json: |-
    # <CREDENTIALS>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-nobl9-dogfood-test-mybq
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: mybq
      nobl9-agent-project: test
  template:
    metadata:
      labels:
        nobl9-agent-name: mybq
        nobl9-agent-project: test
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:0.80.0
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-nobl9-dogfood-test-mybq
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-nobl9-dogfood-test-mybq
            # N9_METRICS_PORT specifies the port on which the /metrics and /health endpoints are exposed.
            # 9090 is the default value and can be changed.
            # If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"
            # To use Workload Identity in a Kubernetes cluster in Google Cloud Platform,
            # comment out or delete the GOOGLE_APPLICATION_CREDENTIALS environment variable
            # and follow the instructions described here: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/var/gcp/credentials.json"
            # N9_ALLOWED_URLS is an optional security parameter that limits the URLs that an agent can query
            # for metrics. URLs defined in the Nobl9 app are prefix-compared against the N9_ALLOWED_URLS list of
            # comma-separated URLs.
            # - name: N9_ALLOWED_URLS
            #   value: "http://172.16.0.2/api/v1/query,http://172.16.0.3"
          volumeMounts:
            - name: gcp-credentials
              mountPath: "/var/gcp"
              readOnly: true
      volumes:
        - name: gcp-credentials
          secret:
            secretName: nobl9-agent-nobl9-dogfood-test-mybq
```
If you use Docker, you can run the Docker command to deploy the agent. It will look something like this (be sure to swap `<PATH_TO_LOCAL_CREDENTIALS_FILE>` with the path to your local `credentials.json` file):
```shell
# DISCLAIMER: This Docker command contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply command, and you will need to replace the placeholder values with your own values.
#
# N9_METRICS_PORT specifies the port on which the /metrics and /health endpoints are exposed.
# 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, omit the N9_METRICS_PORT variable.
docker run -d --restart on-failure --name nobl9-agent-nobl9-dogfood-test-mybq \
  -e N9_CLIENT_ID="unique_client_id" \
  -e N9_CLIENT_SECRET="unique_client_secret" \
  -e N9_METRICS_PORT=9090 \
  -e GOOGLE_APPLICATION_CREDENTIALS=/var/gcp/credentials.json \
  -v <PATH_TO_LOCAL_CREDENTIALS_FILE>:/var/gcp/credentials.json \
  nobl9/agent:0.80.0
```
Creating SLOs with Google BigQuery
Nobl9 Web
Follow the instructions below to create your SLOs with BigQuery in the UI:
- Navigate to Service Level Objectives.
- Click .
- Select a Data Source from the drop-down list.
- Enter a Project ID.
  The Project ID is a unique identifier of the Google Cloud project. For more details, refer to the Creating and Managing Projects | BigQuery documentation. The Project ID must be a unique string of 6-30 lowercase letters, digits, or hyphens. Example: `bigquery://project`
- Enter a Location of the BigQuery dataset from which the data is read.
  See the Supported Locations | BigQuery documentation.
- Specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).
  - Choose the Data Count Method for your ratio metric:
    - Non-incremental: counts incoming metric values one-by-one, so the resulting SLO graph is pike-shaped.
    - Incremental: counts the incoming metric values incrementally, adding every next value to previous values. It results in a constantly increasing SLO graph.
- Enter an SQL query, or an SQL query for the good counter and an SQL query for the total counter, for the metric you selected.
  - Threshold metric for BigQuery:
    SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response` WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to)
  - Ratio metric for BigQuery:
    Good query: SELECT http_code AS n9value, created AS n9date FROM `test-123.metrics.http_response` WHERE http_code = 200
    Total query: SELECT http_code AS n9value, created AS n9date FROM `test-123.metrics.http_response`
  SLI values for good and total: when choosing the query for the ratio SLI (`countMetrics`), keep in mind that the values resulting from that query for both good and total:
  - Must be positive.
  - While we recommend using integers, fractions are also acceptable.
  - If using fractions, we recommend them to be larger than `1e-4` = `0.0001`.
  - Shouldn't be larger than `1e+20`.
- In step 3, define a Time Window for the SLO.
  - Rolling time windows are better for tracking the recent user experience of a service.
  - Calendar-aligned windows are best suited for SLOs that are intended to map to business metrics measured on a calendar-aligned basis, such as every calendar month or every quarter.
- In step 4, specify the Error Budget Calculation Method and your Objective(s).
  - The Occurrences method counts good attempts against the count of total attempts.
  - The Time Slices method measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
  - You can define up to 12 objectives for an SLO.
  See the use case example and the SLO calculations guide for more information on the error budget calculation methods.
- In step 5, add the Display name, Name, and other settings for your SLO:
  - Create a composite SLO.
  - Set notification on data, if this option is available for your data source.
    When activated, Nobl9 notifies you if your SLO hasn't received data or received incomplete data for more than 15 minutes.
  - Add alert policies, labels, and links, if required.
    You can add up to 20 links per SLO.
- Click Create SLO.
sloctl
- rawMetric
- countMetric
Here's an example of BigQuery using `rawMetric` (threshold metric):
```yaml
apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: bigquery-test
  project: default
spec:
  service: bq-service
  indicator:
    metricSource:
      name: bigquery
  timeWindows:
    - unit: Day
      count: 7
      calendar:
        startTime: 2020-03-09 00:00:00
        timeZone: Europe/Warsaw
  budgetingMethod: Occurrences
  objectives:
    - displayName: Fatal
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 6.50
      target: 0.6
    - displayName: Poor
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 6.00
      target: 0.6
    - displayName: Good
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 5.55
      target: 0.7
    - displayName: Perfect
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 5.00
      target: 0.8
    - displayName: TooPerfect
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 4
      target: 0.8
```
Here's an example of BigQuery using `countMetric` (ratio metric):
```yaml
apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: slo-with-bigquery-agent-count-metrics
  displayName: Slo with bigquery agent count metrics
  project: bigquery
spec:
  description: Description
  service: bigquery
  indicator:
    metricSource:
      name: bigquery
  timeWindows:
    - unit: Week
      count: 1
      calendar:
        startTime: 2020-01-21 12:30:00 # date with time in 24h format
        timeZone: America/New_York # name as in IANA Time Zone Database
  budgetingMethod: Occurrences
  objectives:
    - displayName: Good
      target: 0.95
      countMetrics:
        incremental: false
        good:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT http_code AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response` WHERE http_code = 200 AND created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to)"
        total:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT http_code AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response` WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to)"
```
Important notes:
The BigQuery SLO requires the following fields:
- `location` is the location of the BigQuery dataset from which the data is read.
- `projectId` is a unique identifier of the Google Cloud project. It must be a unique string of 6-30 lowercase letters, digits, or hyphens.
- `query` is the SQL query to run against the dataset.
Query samples
- Threshold metric
- Ratio metric
Threshold metric sample:
```sql
SELECT
  response_time AS n9value,
  created AS n9date
FROM `bdwtest-256112.metrics.http_response`
WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
```
Ratio metric sample:
```sql
SELECT
  1 AS n9value,
  created AS n9date
FROM `bdwtest-256112.metrics.http_response`
WHERE http_code = 200
  AND created BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
```
The `n9value` must be an alias for a numeric field. The `n9date` represents a date in DATETIME format. Conditions are required. For example, a `WHERE` or `HAVING` clause narrows the query to a `DATETIME(@n9date_from)` and `DATETIME(@n9date_to)` timeframe. A validation checks whether the columns or aliases are present in the queries.
When you narrow the query to the time interval defined by the `DATETIME(@n9date_from)` and `DATETIME(@n9date_to)` parameters, the value you compare them with must be of the same type. For example, in this part of a query:
```sql
WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
```
the `created` value must match the parameter's type, meaning DATETIME.
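If the column is stored as a TIMESTAMP instead of a DATETIME, one option (a sketch, assuming the default UTC conversion is acceptable) is to convert it in the condition so both sides of the comparison are DATETIME:

```sql
WHERE DATETIME(created)
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
```

Note that wrapping the column in a function can prevent partition pruning, so for partitioned tables it may be preferable to cast the parameters to TIMESTAMP instead.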
Sample query results:
```text
n9value | n9date
256     | 2021-06-15T01:00:47.754070
259     | 2021-06-14T16:35:36.754070
250     | 2021-06-14T17:27:15.754070
```
Google BigQuery API rate limits
The following rate limits apply to the BigQuery API:
- The rate limits apply to query jobs. For detailed information, refer to the Quotas and Limits | BigQuery documentation.
- The BigQuery rate limit also applies to point density. If the point density fetched from the database per one minute is greater than 1000, an error occurs, and you must rewrite the query with point aggregation.
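As a sketch of such point aggregation (reusing the example table from the query samples, with illustrative names), values can be averaged per minute so the query returns at most one point per minute:

```sql
SELECT
  AVG(response_time) AS n9value,
  DATETIME_TRUNC(created, MINUTE) AS n9date
FROM `bdwtest-256112.metrics.http_response`
WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
GROUP BY n9date
```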
BigQuery pricing is based on bytes read by the query. You can find detailed instructions on how to estimate costs in the Estimate storage and query costs | BigQuery documentation.
As the Nobl9 agent requires BigQuery queries to contain a `WHERE` clause with a date `BETWEEN` filter, for example:
```sql
WHERE {date_col}
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
```
you can use partitioning on the `date_col` column to reduce the number of bytes read and, therefore, the costs of running the BigQuery agent.
For more detailed information on partitioning, refer to the Introduction to partitioned tables | BigQuery documentation.
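For illustration only (the project, dataset, table, and column names below are hypothetical), a table partitioned on the filtered DATETIME column could be created like this, so that queries restricted to a date range scan only the matching daily partitions:

```sql
CREATE TABLE `my-project.metrics.http_response`
(
  response_time FLOAT64,
  http_code INT64,
  created DATETIME
)
PARTITION BY DATE(created);
```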