
BigQuery

Google BigQuery is a serverless data warehouse that enables scalable analysis over petabytes of data. It is a Platform as a Service that supports querying using ANSI SQL. BigQuery integration with Nobl9 enables users to turn their big data into valuable business insights.

Authentication

BigQuery authentication requires the user's credentials to be entered in Nobl9. Users can retrieve their authentication credentials from the Google Cloud Platform as a Service account key file. For details on how to get the Service account key file, refer to the Getting Started with Authentication | BigQuery Documentation.

For the Direct connection, paste the contents of the downloaded Service account key file into the Nobl9 UI. This enables direct integration with the BigQuery APIs to retrieve data, leveraging Nobl9's SaaS-to-SaaS infrastructure.
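As an illustration, the Service account key file is a JSON document; below is a minimal sketch of loading one and checking a few fields that GCP service account key files normally contain. The field names are based on the standard key file format, not on any Nobl9 requirement, and the helper name is illustrative:

```python
import json

# Fields typically present in a GCP service account key file.
# This list is an assumption based on the standard key format.
EXPECTED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key_file(path: str) -> dict:
    """Load a service account key file and verify expected fields exist."""
    with open(path) as f:
        key = json.load(f)
    missing = EXPECTED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file is missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError("not a service account key file")
    return key
```

A check like this can catch a truncated or wrong file before pasting its contents into the UI.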

The Agent connection requires a set of BigQuery permissions. The minimal set of permissions required for the BigQuery Agent connection is:

bigquery.datasets.get
bigquery.jobs.create
bigquery.jobs.list
bigquery.models.getData
bigquery.models.getMetadata
bigquery.tables.getData
note

Nobl9 Agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). For more information, refer to the Deploying BigQuery Agent section.

Adding BigQuery as a Data Source in the UI

To add BigQuery as a data source in Nobl9 using the Agent or Direct connection method, follow these steps:

  1. Navigate to Integrations > Sources.
  2. Click the button.
  3. Click the relevant Source icon.
  4. Choose a relevant connection method (Agent or Direct), then configure the source as described below.

BigQuery Direct

Direct Configuration in the UI

Direct connection to BigQuery requires users to enter their credentials, which Nobl9 stores securely. To set up this type of connection:

  1. Upload the Service Account Key File to authenticate with Google Cloud.
    The file must be in JSON format. Refer to the Authentication section for more details.
  2. Select a Project.
    Specifying a Project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, the object is assigned to the default project.
  3. Enter a Display Name.
    You can enter a friendly name with spaces in this field.
  4. Enter a Name.
    The name is mandatory and can contain only lowercase alphanumeric characters and dashes (for example, my-project-name). This field is populated automatically when you enter a display name, but you can edit the result.
  5. Enter a Description.
    Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
  6. Click the Add Data Source button.

BigQuery Agent

Agent Configuration in the UI

Follow the instructions below to create your BigQuery Agent connection. Refer to the section above for the description of the fields.

  1. Enter a Project.
  2. Enter a Display Name.
  3. Enter a Name.
  4. Create a Description.
  5. Click the Add Data Source button.

Agent Using CLI - YAML

The YAML for setting up an Agent connection to BigQuery looks like this:

apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: bigquery
  displayName: BigQuery Agent # optional
  project: default
spec:
  description: BigQuery description # optional
  sourceOf:
    - Metrics
  bigQuery: {}

Important notes:

Before using the BigQuery Agent, you need to know your projectID and location.

  • The projectID is a unique identifier of a Google Cloud project. It must be a unique string of 6-30 lowercase letters, digits, or hyphens.

  • The location is the location of the BigQuery dataset from which the data is read.
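The stated projectID format rule can be expressed as a quick check. This sketch covers only the 6-30 character lowercase/digit/hyphen rule above; GCP enforces additional constraints (such as starting with a letter) that are not modeled here:

```python
import re

# Pattern for the documented rule only: 6-30 characters, each a
# lowercase letter, digit, or hyphen. GCP applies further rules
# (e.g., the ID must start with a letter) not checked here.
PROJECT_ID_RE = re.compile(r"^[a-z0-9-]{6,30}$")

def is_valid_project_id(project_id: str) -> bool:
    """Return True if project_id matches the documented format."""
    return PROJECT_ID_RE.fullmatch(project_id) is not None
```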

warning

You can deploy only one Agent in one YAML file by using the sloctl apply command.

Deploying BigQuery Agent

When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the Agent. Both of these are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials (e.g., replace the <CREDENTIALS> comment with the contents of your credentials.json file encoded with base64).
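Encoding the key file for the Secret can be sketched in Python (equivalent to running the `base64` utility on the file; the function name is illustrative):

```python
import base64

def encode_credentials(path: str) -> str:
    """Return the base64-encoded contents of a credentials file,
    suitable for the Secret's data section."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```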

warning

Nobl9 Agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). As such, the N9_GCP_CREDENTIALS_PATH environment variable has been changed to GOOGLE_APPLICATION_CREDENTIALS. For more information, refer to the Getting started with authentication | Google Cloud Documentation.

If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the Agent. It will look something like this:

# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.

apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-nobl9-dev-bigquery-agent
  namespace: default
type: Opaque
stringData:
  client_id: "unique_client_id"
  client_secret: "unique_client_secret"
data:
  credentials.json: |-
    # <CREDENTIALS>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-nobl9-dogfood-test-mybq
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: mybq
      nobl9-agent-project: test
  template:
    metadata:
      labels:
        nobl9-agent-name: mybq
        nobl9-agent-project: test
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:latest
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-nobl9-dogfood-test-mybq
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-nobl9-dogfood-test-mybq
            # N9_METRICS_PORT specifies the port to which the /metrics and /health endpoints are exposed.
            # 9090 is the default value and can be changed.
            # If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"
            # To use Workload Identity in a Kubernetes cluster in Google Cloud Platform,
            # comment out or delete the GOOGLE_APPLICATION_CREDENTIALS environment variable
            # and follow the instructions described here: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/var/gcp/credentials.json"
            # N9_ALLOWED_URLS is an optional security parameter that limits the URLs that an Agent can query
            # for metrics. URLs defined in the Nobl9 app are prefix-compared against the N9_ALLOWED_URLS list of
            # comma-separated URLs.
            # - name: N9_ALLOWED_URLS
            #   value: "http://172.16.0.2/api/v1/query,http://172.16.0.3"
          volumeMounts:
            - name: gcp-credentials
              mountPath: "/var/gcp"
              readOnly: true
      volumes:
        - name: gcp-credentials
          secret:
            secretName: nobl9-agent-nobl9-dogfood-test-mybq

Creating SLOs with BigQuery

Creating SLOs in the UI

Follow the instructions below to create your SLOs with BigQuery in the UI:

  1. Navigate to Service Level Objectives.

  2. Click the button.
  3. Select a Data Source from the drop-down list.

  4. Enter a Project ID.
    The Project ID is a unique identifier of the Google Cloud Project. For more details, refer to Creating and Managing Projects | BigQuery Documentation. The Project ID must be a unique string of 6-30 lowercase letters, digits, or hyphens. Example: bigquery://project

  5. Enter the Location of the BigQuery dataset from which the data is read.
    See Supported Locations | BigQuery Documentation.

  6. Specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).

    note

    For the Ratio Metric, you can choose the Data Count Method:

    • For the Non-incremental method, each data point is expected to be an individual component of the sum.
    • For the Incremental method, the value of the metric is expected to be the current running sum of the numerator.

    For more information, refer to the SLO Calculations Guide.

  7. Enter an SQL query (for the Threshold Metric), or an SQL query for the good counter and an SQL query for the total counter (for the Ratio Metric).

    • Threshold metric for BigQuery:
      SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response` WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to)

    • Ratio Metric for BigQuery:
      Good query: SELECT response_time AS n9value, created AS n9date FROM `test-123.metrics.http_response`
      Total Query: SELECT response_time AS n9value, created AS n9date FROM `test-123.metrics.http_response`

  8. In step 3, define a Time Window for the SLO.

  9. In step 4, specify the Error Budget Calculation Method and your Objective(s).

  10. In step 5, add a Name, Description, and other details about your SLO. You can also select Alert Policies and Labels on this screen.

  11. When you’re done, click Create SLO.
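The two Data Count Methods from step 6 can be illustrated with a small sketch (the numbers are made up for illustration and are not Nobl9 output):

```python
# Non-incremental: each point is a component of the sum, so the
# total over a window is the plain sum of the points.
non_incremental = [5, 3, 7, 2]
total_from_components = sum(non_incremental)  # 17

# Incremental: each point is already the running sum, so the total
# over a window is the last value minus the value at the window start.
incremental = [5, 8, 15, 17]  # running sum of the same components
total_from_running_sum = incremental[-1] - 0  # window starts from zero
```

Both views describe the same underlying data; the method only tells Nobl9 how to interpret the points it receives.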

Query Samples

Threshold Metric sample:

SELECT
response_time AS n9value,
created AS n9date
FROM `bdwtest-256112.metrics.http_response`
WHERE created
BETWEEN DATETIME(@n9date_from)
AND DATETIME(@n9date_to)

The n9value must be an alias for a numeric field. The n9date represents a date in DATETIME format. Conditions are required. For example, a WHERE or HAVING clause narrows the query to a DATETIME(@n9date_from) and DATETIME(@n9date_to) timeframe. A validation checks if the columns or aliases are present in the queries.
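That validation can be approximated with a naive sketch that checks a query string for the required aliases and parameters. This is a simplified illustration using substring matching, not Nobl9's actual validation logic:

```python
# Tokens a BigQuery SLO query is expected to contain: the value and
# date aliases, plus the timeframe parameters.
REQUIRED_TOKENS = ("n9value", "n9date", "@n9date_from", "@n9date_to")

def missing_tokens(query: str) -> list:
    """Return the required tokens absent from the query (substring check)."""
    return [t for t in REQUIRED_TOKENS if t not in query]
```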

warning

When you narrow the query to the time interval defined by the DATETIME(@n9date_from) and DATETIME(@n9date_to) parameters, the value you compare them against must be of the same type. For example, in this part of a query:

WHERE created
BETWEEN DATETIME(@n9date_from)
AND DATETIME(@n9date_to)

the created value must match the parameters' type, that is, DATETIME.

Sample query results:

n9value | n9date
256     | 2021-06-15T01:00:47.754070
259     | 2021-06-14T16:35:36.754070
250     | 2021-06-14T17:27:15.754070

SLOs using BigQuery - YAML samples

Here's an example of BigQuery using rawMetric (Threshold metric):

apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: bigquery-test
  project: default
spec:
  service: bq-service
  indicator:
    metricSource:
      name: bigquery
  timeWindows:
    - unit: Day
      count: 7
      calendar:
        startTime: 2020-03-09 00:00:00
        timeZone: Europe/Warsaw
  budgetingMethod: Occurrences
  objectives:
    - displayName: Fatal
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 6.50
      target: 0.6
    - displayName: Poor
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 6.00
      target: 0.6
    - displayName: Good
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 5.55
      target: 0.7
    - displayName: Perfect
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 5.00
      target: 0.8
    - displayName: TooPerfect
      op: lte
      rawMetric:
        query:
          bigQuery:
            projectId: "bdwtest-256112"
            location: "EU"
            query: "SELECT response_time AS n9value, created AS n9date FROM `bdwtest-256112.metrics.http_response`"
      value: 4
      target: 0.8

Important notes:

The BigQuery SLO requires the following fields:

  • The location is the location of the BigQuery dataset from which the data is read.

  • The projectID is a unique identifier of a Google Cloud project. It must be a unique string of 6-30 lowercase letters, digits, or hyphens.

  • The query.

BigQuery API Rate Limits

The following rate limits apply to the BigQuery API:

  • The rate limits apply to query jobs. For detailed information, refer to the Quotas and Limits | BigQuery Documentation.

  • The BigQuery rate limit also applies to point density. If more than 1000 points per minute are fetched from the database, an error occurs, and you must rewrite the query to aggregate points.
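Aggregation is done in the query itself (for example, with a GROUP BY over a truncated timestamp); the idea can be sketched in Python as a per-minute average. This is an illustration of the concept, not Nobl9 code:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_per_minute(points):
    """Average (timestamp, value) points into one point per minute,
    reducing point density to at most one point per minute."""
    buckets = defaultdict(list)
    for ts, value in points:
        minute = ts.replace(second=0, microsecond=0)
        buckets[minute].append(value)
    return {minute: sum(vs) / len(vs) for minute, vs in sorted(buckets.items())}
```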


BigQuery cost optimization

BigQuery pricing is based on the bytes read by a query. You can find detailed instructions on how to estimate costs in the Estimate storage and query costs | BigQuery Documentation.

Since the Nobl9 Agent requires BigQuery queries to contain a WHERE clause with a date BETWEEN filter, for example:

WHERE
{date_col} BETWEEN
DATETIME(@n9date_from)
AND DATETIME(@n9date_to)

you can use partitioning on the date_col column to reduce the number of bytes read and, therefore, the costs of running the BigQuery Agent.

For more detailed information on partitioning, refer to the Introduction to partitioned tables | BigQuery Documentation.

Getting Started | BigQuery Documentation

Queries | BigQuery Documentation

Creating and Managing Projects | BigQuery Documentation

Quotas and Limits | BigQuery Documentation

Getting started with authentication | Google Cloud Documentation

Agent Metrics | Nobl9 Documentation

Creating SLOs via Terraform | Nobl9 Terraform Documentation

Creating Agents via Terraform | Nobl9 Terraform Documentation