Google BigQuery

Google BigQuery is a serverless data warehouse that facilitates scalable analysis over petabytes of data. It is a Platform as a Service that supports querying using ANSI SQL. The BigQuery integration with Nobl9 empowers users to turn their big data into valuable business insights.

Google BigQuery parameters and supported features in Nobl9
General support:
Release channel: Stable, Beta
Connection method: Agent, Direct
Replay and SLI Analyzer: Not supported
Event logs: Supported
Query checker: Not supported
Query parameters retrieval: Supported
Timestamp cache persistence: Supported

Query parameters:
Query interval: 1 min
Query delay: 0
Jitter: 15 sec
Timeout: 30 sec

Agent details and minimum required versions for supported features:
Environment variable: BIGQUERY_QUERY_DELAY
Plugin name: n9bigquery
Query parameters retrieval: 0.73.2
Timestamp cache persistence: 0.65.0

Authentication

BigQuery authentication requires the user's credentials to be entered in Nobl9. Users can retrieve their authentication credentials from the Google Cloud Platform as a Service account key file. For all necessary details on how to get the Service account key file, refer to the Getting Started with Authentication | BigQuery documentation.

For the direct connection, pass the contents of the downloaded Service account key file on the Nobl9 Web. This activates direct integration with the BigQuery APIs to retrieve data, leveraging the SaaS-to-SaaS infrastructure in Nobl9.
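
For reference, a Service account key file is a JSON document along these lines (values shortened; these field names are standard for GCP service account keys, but verify against the file you download):

{
  "type": "service_account",
  "project_id": "my-project-id",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "nobl9-agent@my-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}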

The agent connection requires the user to be granted BigQuery permissions. The minimal set of permissions required for the BigQuery agent connection is:

bigquery.datasets.get
bigquery.jobs.create
bigquery.jobs.list
bigquery.models.getData
bigquery.models.getMetadata
bigquery.tables.getData
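
One convenient way to grant exactly this set is a custom IAM role. Below is a minimal sketch of a role definition file in the YAML format accepted by gcloud iam roles create --file (the title and description are placeholders, not part of the Nobl9 documentation):

title: Nobl9 BigQuery Agent
description: Minimal permissions for the Nobl9 BigQuery agent connection
stage: GA
includedPermissions:
  - bigquery.datasets.get
  - bigquery.jobs.create
  - bigquery.jobs.list
  - bigquery.models.getData
  - bigquery.models.getMetadata
  - bigquery.tables.getData
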
note

The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). For more information, refer to the Deploying BigQuery agent section.

Adding Google BigQuery as a data source

To ensure data transmission between Nobl9 and Google BigQuery, it may be necessary to list Nobl9 IP addresses as trusted.

IP allowlist
IP addresses to include in your allowlist for secure access:

If you're using app.nobl9.com instance:
  • 18.159.114.21
  • 18.158.132.186
  • 3.64.154.26
If you're using us1.nobl9.com instance:
  • 34.121.54.120
  • 34.123.193.191
  • 34.134.71.10
  • 35.192.105.150
  • 35.225.248.37
  • 35.226.78.175
  • 104.198.44.161

You can add the BigQuery data source using the direct or agent connection methods.

Direct connection method

Direct connection to BigQuery requires users to enter their credentials, which Nobl9 stores safely.

Nobl9 Web

To set up this type of connection:

  1. Navigate to Integrations > Sources.
  2. Click the button to add a new data source.
  3. Click the required Source button.
  4. Choose Direct.
  5. Select one of the following Release Channels:
    • The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
    • The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
  6. Upload the Service Account Key File to authenticate with Google Cloud.
    The file needs to be in JSON format. Refer to the Authentication section for more details.
  7. Select a Project.
    Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
  8. Enter a Display Name.
    You can enter a user-friendly name with spaces in this field.
  9. Enter a Name.
    The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
  10. Enter a Description.
    Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
  11. Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
    • The default value in the BigQuery integration for Query delay is 0 seconds.
    info
    Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
  12. Click Add Data Source.

sloctl

The YAML for setting up a direct connection to BigQuery looks like this:

apiVersion: n9/v1alpha
kind: Direct
metadata:
  name: my-big-query-data-source
  displayName: My BigQuery data source
  # String, mandatory. The name identifier of your data source parent project
  project: my-project
spec:
  description: My direct-connected BigQuery data source
  releaseChannel: stable
  bigQuery:
    serviceAccountKey: "YOUR_SECRET_KEY"
  queryDelay:
    value: 1
    unit: Second
queryDelay.unit (mandatory, enum)
  Specifies the unit for the query delay. Possible values: Second | Minute.
  • Check the Query delay documentation for the default unit of the query delay for each source.
queryDelay.value (mandatory, numeric)
  Specifies the value for the query delay.
  • Must be a number less than 1440 minutes (24 hours).
  • Check the Query delay documentation for the default unit of the query delay for each source.
logCollectionEnabled (optional, boolean)
  Defaults to false. Set to true if you'd like your direct to collect event logs. Contact us to activate it.
releaseChannel (mandatory, enum)
  Specifies the release channel. Accepted values: beta | stable.

Source-specific fields

bigQuery.serviceAccountKey (mandatory, string)
  You must embed the Service Account Key File content to authenticate with Google Cloud. See the Authentication section for more details.
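
Once saved to a file, the definition above can be applied with the sloctl apply command (the file name here is a placeholder):

sloctl apply -f bigquery-direct.yaml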

Agent connection method

Nobl9 Web

Follow the instructions below to create your BigQuery agent connection.

  1. Navigate to Integrations > Sources.
  2. Click the button to add a new data source.
  3. Click the required Source button.
  4. Choose Agent.
  5. Select one of the following Release Channels:
    • The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
    • The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
  6. Select a Project.
    Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
  7. Enter a Display Name.
    You can enter a user-friendly name with spaces in this field.
  8. Enter a Name.
    The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
  9. Enter a Description.
    Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
  10. Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
    • The default value in the BigQuery integration for Query delay is 0 seconds.
    info
    Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
  11. Click Add Data Source.

sloctl

The YAML for setting up an agent connection to BigQuery looks like this:

apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: my-big-query-data-source
  displayName: My BigQuery data source
  # String, mandatory. The name identifier of your data source parent project
  project: my-project
spec:
  description: My agent-connected Google BigQuery data source
  releaseChannel: stable
  bigQuery: {}
  queryDelay:
    value: 1
    unit: Second
queryDelay.unit (mandatory, enum)
  Specifies the unit for the query delay. Possible values: Second | Minute.
  • Check the Query delay documentation for the default unit of the query delay for each source.
queryDelay.value (mandatory, numeric)
  Specifies the value for the query delay.
  • Must be a number less than 1440 minutes (24 hours).
  • Check the Query delay documentation for the default unit of the query delay for each source.
releaseChannel (mandatory, enum)
  Specifies the release channel. Accepted values: beta | stable.
warning

You can deploy only one agent per YAML file when using the sloctl apply command.

Agent deployment

When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the agent. To find them, once your agent is created, go to the Nobl9 Web > your data source details > Agent Configuration. Swap in your credentials (e.g., replace the <CREDENTIALS> comment with the contents of your credentials.json file encoded with base64).
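
For example, you can produce the base64-encoded file content with the standard base64 tool (a sketch; the flag syntax differs between implementations):

base64 -w 0 credentials.json    # GNU coreutils (Linux)
base64 -i credentials.json      # macOS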

warning

The Nobl9 agent can use Workload Identity in GCP (Google Cloud Platform) in GKE (Google Kubernetes Engine). As such, the N9_GCP_CREDENTIALS_PATH environment variable has been changed to GOOGLE_APPLICATION_CREDENTIALS. For more information, refer to the Getting started with authentication | Google Cloud documentation.
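
With Workload Identity, the agent pod runs under a Kubernetes service account bound to a GCP service account instead of mounting a key file. A minimal sketch of the Kubernetes side (all names are placeholders; the GCP service account must hold the BigQuery permissions listed above and grant roles/iam.workloadIdentityUser to this binding):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nobl9-agent
  namespace: my-namespace
  annotations:
    # Hypothetical GCP service account used by the agent
    iam.gke.io/gcp-service-account: nobl9-agent@my-project-id.iam.gserviceaccount.com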

The below Kubernetes and Docker configurations contain only the fields required for the demonstration. They aren't ready-to-apply commands.

If you use Kubernetes, you can apply the generated YAML config file to a Kubernetes cluster for agent deployment. Below is a sample configuration:

apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-flower-market-my-project-my-bigquery-data-source
  namespace: my-namespace
type: Opaque
stringData:
  client_id: "UNIQUE_CLIENT_ID"
  client_secret: "UNIQUE_CLIENT_SECRET"
data:
  credentials.json: |-
    # <CREDENTIALS>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-flower-market-my-project-my-bigquery-data-source
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: "my-bigquery-data-source"
      nobl9-agent-project: "my-project"
  template:
    metadata:
      labels:
        nobl9-agent-name: "my-bigquery-data-source"
        nobl9-agent-project: "my-project"
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:0.82.2
          resources:
            requests:
              memory: "700Mi"
              cpu: "0.2"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-flower-market-my-project-my-bigquery-data-source
            - name: N9_INTAKE_URL
              value: "https://{my-value}/api/input"
            - name: N9_QUERYENGINE_URL
              value: "https://{my-value}/api/queryengine"
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-flower-market-my-project-my-bigquery-data-source
            # The N9_METRICS_PORT variable defines the port to which the /metrics and /health endpoints are exposed.
            # 9090 is the default value and can be changed.
            # To avoid metrics exposure, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/var/gcp/credentials.json"
            # N9_ALLOWED_URLS is optional.
            # It limits the URLs that the agent can query for metrics, for safety reasons.
            # URLs defined in the Nobl9 app are prefix-compared against the N9_ALLOWED_URLS list of comma-separated URLs.
            # - name: N9_ALLOWED_URLS
            #   value: "YOUR_VALUE"
          volumeMounts:
            - name: gcp-credentials
              mountPath: "/var/gcp"
              readOnly: true
      volumes:
        - name: gcp-credentials
          secret:
            secretName: nobl9-agent-flower-market-my-project-my-bigquery-data-source
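
If you prefer Docker, a command mirroring the Kubernetes configuration above could look like this (a sketch assembled from the same environment variables; the command generated under Agent Configuration is the authoritative version):

docker run -d --restart on-failure \
  -e N9_CLIENT_ID="UNIQUE_CLIENT_ID" \
  -e N9_CLIENT_SECRET="UNIQUE_CLIENT_SECRET" \
  -e N9_INTAKE_URL="https://{my-value}/api/input" \
  -e N9_QUERYENGINE_URL="https://{my-value}/api/queryengine" \
  -e N9_METRICS_PORT="9090" \
  -e GOOGLE_APPLICATION_CREDENTIALS="/var/gcp/credentials.json" \
  -v "$(pwd)/credentials.json:/var/gcp/credentials.json:ro" \
  nobl9/agent:0.82.2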

Creating SLOs with Google BigQuery

Nobl9 Web

Follow the instructions below to create your SLOs with BigQuery in the UI:

  1. Navigate to Service Level Objectives.

  2. Click the button to create a new SLO.

  3. Select a Service.
    It will be the location for your SLO in Nobl9.

  4. Select your BigQuery data source.

  5. Enter the Project ID: a unique identifier of your required Google Cloud project.
    The Project ID can contain 6-30 lowercase letters, digits, or hyphens.
    For example, bigquery://project

  6. Select the Location of the BigQuery dataset that contains the data you need to read.

  7. Select the Metric type:

    • Threshold metric: a single time series is evaluated against a threshold.
    • Ratio metric: two time series are compared, one counting good events and one counting total events.
      For ratio metrics, select the Data count method:
      • Non-incremental counts incoming data points one by one. As a result, the SLO chart is spike-shaped.
      • Incremental counts incoming data points incrementally, adding every next value to the previous values. It results in a constantly increasing SLO chart.
  8. Enter an SQL query (threshold metric), or an SQL query for the good counter and an SQL query for the total counter (ratio metric).

    • Sample threshold metric query:
      SELECT response_time AS n9value, created AS n9date FROM my-project-id WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date

    • Sample ratio metric queries:
      Good: SELECT http_code AS n9value, created AS n9date FROM my-project-id WHERE http_code = 200 AND created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date
      Total: SELECT http_code AS n9value, created AS n9date FROM my-project-id WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date

    SLI values for good and total
    When choosing the query for the ratio SLI (countMetrics), keep in mind that the values resulting from that query for both good and total:
    • Must be positive.
    • While we recommend using integers, fractions are also acceptable.
      • If using fractions, we recommend them to be larger than 1e-4 = 0.0001.
    • Shouldn't be larger than 1e+20.
  9. Define the Time Window for your SLO.
  10. Configure the Error budget calculation method and Objectives:
    • The Occurrences method counts good attempts against the count of total attempts.
    • The Time Slices method measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
    • You can define up to 12 objectives for an SLO.

    • Similar threshold values for objectives
      To use similar threshold values for different objectives in your SLO, we recommend differentiating them by setting varying decimal points for each objective.
      For example, if you want to use the threshold value 1 for two objectives, set it to 1.0000001 for the first objective and to 1.0000002 for the second one.
      Learn more about threshold value uniqueness.
  11. Add the Display name, Name, and other settings for your SLO:
    • Name identifies your SLO in Nobl9. After you save the SLO, its name becomes read-only.
      Use only lowercase letters, numbers, and dashes.
    • Create Composite SLO: with this option selected, you create a composite SLO 1.0. Composite SLOs 1.0 are deprecated. They're fully operable; however, we encourage you to create new composite SLOs 2.0.
      You can create composite SLOs 2.0 with sloctl using the provided template. Alternatively, you can create a composite SLO 2.0 with the Nobl9 Terraform provider.
    • Set Notifications on data. With it, Nobl9 notifies you when your SLO reports no data or reports incomplete data for more than 15 minutes.
    • Add alert policies, labels, and links, if required.
      Up to 20 items of each type per SLO are allowed.
  12. Click CREATE SLO.

  SLO configuration use case
  Check the SLO configuration use case for a real-life SLO example.

sloctl

Here's an example of BigQuery using rawMetric (threshold metric):

# Budgeting method: Occurrences
# Time window type: Calendar

- apiVersion: n9/v1alpha
  kind: SLO
  metadata:
    name: api-server-slo
    # Optional
    displayName: API Server SLO
    project: my-project
    # Optional
    labels:
      key-1:
        - value-1
        - value-2
      key-2:
        - value-1
        - value-2
    # Optional
    annotations:
      key-1: value-1
      key-2: value-1
  spec:
    description: My sample threshold SLO based on BigQuery
    indicator:
      metricSource:
        name: big-query
        project: my-project
        kind: Agent
    # Enum: Occurrences || Timeslices
    budgetingMethod: Occurrences
    objectives:
      - displayName: My objective 1
        value: 200.0
        name: my-objective
        target: 0.95
        rawMetric:
          query:
            bigQuery:
              query: SELECT response_time AS n9value, created AS n9date FROM `my-project-id` WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date
              projectId: my-project-id
              location: my-location
        op: lte
        primary: true
    service: my-service
    timeWindows:
      - unit: Month
        count: 1
        isRolling: false
        calendar:
          startTime: 2022-12-01 00:00:00
          timeZone: UTC
    # Optional. Up to 20 alert policies per SLO
    alertPolicies:
      - my-alert-policy
    # Optional
    attachments:
      - url: https://my-url.com
        displayName: My URL
    # Optional, beta functionality
    anomalyConfig:
      noData:
        alertMethods:
          - name: my-alert-method
            project: my-project
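
The example above uses rawMetric. For a ratio metric, the objective carries a countMetrics block with good and total queries instead; below is a sketch based on the sample ratio queries shown earlier (the objective names are placeholders, and the field layout follows the same schema as the threshold example):

objectives:
  - displayName: My ratio objective
    name: my-ratio-objective
    target: 0.95
    countMetrics:
      # false = non-incremental data count method
      incremental: false
      good:
        bigQuery:
          query: SELECT http_code AS n9value, created AS n9date FROM `my-project-id` WHERE http_code = 200 AND created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date
          projectId: my-project-id
          location: my-location
      total:
        bigQuery:
          query: SELECT http_code AS n9value, created AS n9date FROM `my-project-id` WHERE created BETWEEN DATETIME(@n9date_from) AND DATETIME(@n9date_to) ORDER BY n9date
          projectId: my-project-id
          location: my-location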

Important notes:

The BigQuery SLO requires the following fields:

  • The location is the location of the BigQuery dataset from which the data is read.

  • The projectID is a unique identifier of your Google Cloud project. The projectID must be a unique string of 6-30 lowercase letters, digits, or hyphens.

  • To optimize your BigQuery plugin performance, consider including ORDER BY n9date in the query. With the sorting order defined, your BigQuery plugin can more efficiently retrieve and process data in batches, reducing overall query execution time.

Query samples

Threshold metric sample:

SELECT
  response_time AS n9value,
  created AS n9date
FROM `my-google-cloud-project`
WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
ORDER BY n9date

The n9value must be an alias for a numeric field. The n9date is the DATETIME format representation of a date. Conditions are required.

For example, a WHERE or HAVING clause narrows the query to a DATETIME(@n9date_from) and DATETIME(@n9date_to) timeframe.

The queries are validated against columns or aliases.

Data type consistency

When narrowing the query to the interval by the DATETIME(@n9date_from) and DATETIME(@n9date_to) parameters, the data type of the value you're comparing must be the same. For example,

WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)

Since the parameters are DATETIME, the created value must also be DATETIME.
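
If the column has a different type, convert the column rather than the parameters. For instance, assuming a TIMESTAMP column named created_ts (a hypothetical name), a sketch:

SELECT
  response_time AS n9value,
  DATETIME(created_ts) AS n9date
FROM `my-google-cloud-project`
WHERE DATETIME(created_ts)
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
ORDER BY n9date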

Sample query results:

n9value | n9date
256     | 2021-06-15T01:00:47.754070
259     | 2021-06-14T16:35:36.754070
250     | 2021-06-14T17:27:15.754070

Google BigQuery API rate limits

The following rate limits apply to the BigQuery API:

  • Query jobs. See Quotas and Limits reference.

  • Point density. A point density greater than 1,000 data points per minute leads to errors. To address this, add point aggregation to your query, as sketched below.
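
For example, the threshold query could be aggregated to one point per minute (a sketch assuming a DATETIME column named created, as in the samples above; DATETIME_TRUNC buckets the rows by minute and AVG reduces each bucket to a single value):

SELECT
  AVG(response_time) AS n9value,
  DATETIME_TRUNC(created, MINUTE) AS n9date
FROM `my-google-cloud-project`
WHERE created
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
GROUP BY n9date
ORDER BY n9date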

Google BigQuery cost optimization

BigQuery pricing is based on bytes read by the query.

Since BigQuery queries must contain a WHERE clause with a date BETWEEN filter, as required by the Nobl9 agent, you can use partitioning on the date column ({date_col} below) to reduce the number of bytes read. For example,

WHERE {date_col}
  BETWEEN DATETIME(@n9date_from)
  AND DATETIME(@n9date_to)
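
A sketch of a table partitioned by day on the date column (hypothetical table and column names; with such a table, the BETWEEN filter prunes partitions so BigQuery scans only the days inside the queried window):

CREATE TABLE `my-project-id.my_dataset.response_times`
(
  response_time FLOAT64,
  created DATETIME
)
PARTITION BY DATETIME_TRUNC(created, DAY);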

Learn more about storage and query costs estimation and partitioned tables.
