
Splunk Observability


Splunk Observability allows users to search, monitor, and analyze machine-generated big data. It collects and monitors metrics, logs, and traces from common data sources. Having data collection and monitoring in one place ensures full-stack, end-to-end observability of the entire infrastructure.

Splunk Observability is different from Splunk Core, which powers Splunk Cloud and Splunk Enterprise and is Splunk's traditional log management solution. Nobl9 also integrates with Splunk Core through a different set of APIs.

Authentication

Splunk Observability is a SaaS offering, but you must provide the URL that indicates your realm (region). For more details, refer to Realms in Endpoints | Splunk Observability documentation.

When deploying the Nobl9 agent for Splunk Observability, you must provide

SPLUNK_OBSERVABILITY_ACCESS_TOKEN

as an environment variable for authentication with an organization API access token (see Create an Access Token | Splunk Observability documentation). There is a placeholder for this value in the configuration obtained from the installation instructions in the Nobl9 UI (refer to the Agent configuration in the UI section).
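
For reference, here is a minimal sketch of how the token is typically passed to the agent container in a Kubernetes deployment. The Secret name below is a placeholder; the full, generated example appears in the Deploying Splunk Observability agent section.

env:
  - name: SPLUNK_OBSERVABILITY_ACCESS_TOKEN
    valueFrom:
      secretKeyRef:
        name: <your-nobl9-agent-secret>          # placeholder Secret name
        key: splunk_observability_access_token   # key holding the access token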

Adding Splunk Observability Realm

A Splunk Observability connection also requires entering your organization's Realm. Follow the instructions below to get the API endpoint for your realm in Splunk:

  1. In your Splunk account, go to Settings > Profile.

  2. Go to the Endpoints section.

  3. Choose the URL from the API field.

Image 1: Endpoints section in the Splunk account
note
  • Access tokens are valid for 30 days.

  • Customers can use org tokens, which are valid for 5 years. Org tokens can also be used to generate session tokens.

    • Sample access token for Splunk Observability: t4QJpMY1XLcECzm1c5Jb0A

Adding Splunk Observability as a data source

You can add the Splunk Observability data source using the direct or agent connection methods. For both methods, start with these steps:

  1. Navigate to Integrations > Sources.
  2. Click .
  3. Click the relevant Source icon.
  4. Choose a relevant connection method (Agent or Direct), then configure the source as described below.

Splunk Observability direct

Direct configuration in the UI

A direct connection to Splunk Observability requires users to enter their credentials, which Nobl9 stores safely. To set up this type of connection:

  1. Select one of the following Release Channels:
    • The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
    • The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel may be subject to change.
  2. Enter your organization's Realm to connect your data source.
    Refer to the Authentication section above for more details.

  3. Enter the Access Token environment variable for authentication with the organization API Access Token.
    Refer to the Authentication section above for more details.

  4. Select a Project.
    Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
  5. Enter a Display Name.
    You can enter a user-friendly name with spaces in this field.
  6. Enter a Name.
    The name is mandatory and can contain only lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
  7. Enter a Description.
    Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
  8. Specify the Query delay to set a customized delay for queries when pulling the data from the data source.
    • The default Query delay for the Splunk Observability integration is 5 minutes.
    info
    Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
  9. Click Add Data Source.

Direct using CLI - YAML

The YAML for setting up a direct connection to Splunk Observability looks like this:

apiVersion: n9/v1alpha
kind: Direct
metadata:
  name: splunk-observability-direct
  displayName: Splunk Observability direct
  project: splunk-observability-direct
spec:
  description: Direct integration with Splunk Observability
  sourceOf:
    - Metrics
    - Services
  releaseChannel: beta
  queryDelay:
    unit: Minute
    value: 720
  splunkObservability:
    realm: us1
    accessToken: example-access-token
queryDelay.unit
mandatory, enum
Specifies the unit for the query delay. Possible values: Second | Minute.
• Check the query delay documentation for the default unit of the query delay for each source.

queryDelay.value
mandatory, numeric
Specifies the value for the query delay.
• Must be a number less than 1440 minutes (24 hours).
• Check the query delay documentation for the default query delay for each source.

releaseChannel
mandatory, enum
Specifies the release channel. Accepted values: beta | stable.

Source-specific fields

splunkObservability.realm
mandatory, string
See Realms in Endpoints | Splunk Observability documentation for more details.

splunkObservability.accessToken
mandatory, string (secret)
Environment variable used for authentication with the organization API access token. See the Authentication section above for more details.

Splunk Observability agent

Agent configuration in the UI

Follow the instructions below to configure your Splunk Observability agent. Refer to the section above for the description of the fields.

  1. Select one of the following Release Channels:
    • The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
    • The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel may be subject to change.
  2. Enter your organization's Realm to connect your data source.

  3. Enter a Project.
  4. Enter a Display Name.
  5. Enter a Name.
  6. Create a Description.
  7. Customize the Query Delay.
  8. Click Add Data Source.

Agent using CLI - YAML

The YAML for setting up an agent connection to Splunk Observability looks like this:

apiVersion: n9/v1alpha
kind: Agent
metadata:
  name: splunk-observability
  displayName: Splunk Observability
  project: splunk-observability
spec:
  description: Agent settings for Splunk Observability
  sourceOf:
    - Metrics
    - Services
  releaseChannel: beta
  queryDelay:
    unit: Minute
    value: 720
  splunkObservability:
    realm: us1
queryDelay.unit
mandatory, enum
Specifies the unit for the query delay. Possible values: Second | Minute.
• Check the query delay documentation for the default unit of the query delay for each source.

queryDelay.value
mandatory, numeric
Specifies the value for the query delay.
• Must be a number less than 1440 minutes (24 hours).
• Check the query delay documentation for the default query delay for each source.

logCollectionEnabled
optional, boolean
Defaults to false. Set to true if you'd like your direct to collect event logs. Beta functionality available only through the direct release channel. Reach out to support@nobl9.com to activate it.

releaseChannel
mandatory, enum
Specifies the release channel. Accepted values: beta | stable.

Source-specific fields

splunkObservability.realm
mandatory, string
See Realms in Endpoints | Splunk Observability documentation for more details.
warning

You can deploy only one agent per YAML file when using the sloctl apply command.

Deploying Splunk Observability agent

When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the agent. Both of these are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials (for example, replace <SPLUNK_OBSERVABILITY_ACCESS_TOKEN> with your organization's access token).

If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the agent. It will look something like this:

# DISCLAIMER: This Deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply Kubernetes Deployment, and the client_id and client_secret are example values only.

apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-nobl9-dev-dwq-ble
  namespace: default
type: Opaque
stringData:
  splunk_observability_access_token: "<SPLUNK_OBSERVABILITY_ACCESS_TOKEN>"
  client_id: "unique_client_id"
  client_secret: "unique_client_secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-nobl9-dev-splunkobs-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: "splunkobs"
      nobl9-agent-project: "deployment"
      nobl9-agent-organization: "nobl9-dev"
  template:
    metadata:
      labels:
        nobl9-agent-name: "splunkobs"
        nobl9-agent-project: "deployment"
        nobl9-agent-organization: "nobl9-dev"
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:0.73.2
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-nobl9-dev-dwq-ble
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-nobl9-dev-dwq-ble
            - name: SPLUNK_OBSERVABILITY_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  key: splunk_observability_access_token
                  name: nobl9-agent-nobl9-dev-dwq-ble
            # N9_METRICS_PORT specifies the port on which the /metrics and /health endpoints are exposed.
            # 9090 is the default value and can be changed.
            # If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
            - name: N9_METRICS_PORT
              value: "9090"

Log sampling for the Splunk Observability agent

With agent release 0.50.0, we introduced a separate logging mechanism for the Splunk Observability agent to handle burstable log loads. This mechanism applies only to messages about dropping redundant data points; other logs are logged normally.

You can decide whether you want to use log sampling by setting the SPLUNK_OBSERVABILITY_DATA_POINT_LOG_SAMPLING_CONFIG environment variable. It's a JSON object with the following fields:

{
  "burst": int,   // how many messages
  "period": int,  // how often (in seconds)
  "enabled": bool
}

The YAML examples above do not set this variable, so .enabled defaults to false and agents don't use log sampling by default.

If only .enabled is set to true, .burst defaults to 1 and .period defaults to 900, which is equivalent to logging 1 message every 15 minutes per organization.

Here's an example of a configuration that allows logging 3 messages per 120 seconds per organization:

"{ \"burst\": 3, \"period\": 120, \"enabled\": true}"

Creating SLOs with Splunk Observability

Creating SLOs in the UI

Follow the instructions below to create your SLOs with Splunk Observability in the UI:

  1. Navigate to Service Level Objectives.

  2. Click .
  3. In step 2, select Splunk Observability as the Data Source for your SLO, then specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).

    1. Choose the Data Count Method for your ratio metric:
    • Non-incremental: counts incoming metric values one by one, so the resulting SLO graph is spike-shaped.
    • Incremental: counts the incoming metric values incrementally, adding every next value to previous values. It results in a constantly increasing SLO graph.
  4. Enter a Program (for the Threshold metric), or a Program for good counter and a Program for total counter (for the Ratio metric). The following are program examples:

    1. Threshold metric for Splunk Observability:

      A = data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish(label='A', enable=False);
      B = data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').stddev().publish(label='B', enable=False);
      C = (B/A).publish(label='C');
    2. Ratio metric for Splunk Observability:

      Program for good counter: data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'),rollup='rate').stddev().publish()

      Program for total counter: data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish()

      SLI values for good and total
      When choosing the query for the ratio SLI (countMetrics), keep in mind that the values resulting from that query for both good and total:
      • Must be positive.
      • While we recommend using integers, fractions are also acceptable.
        • If using fractions, we recommend them to be larger than 1e-4 = 0.0001.
      • Shouldn't be larger than 1e+20.
  5. In step 3, define a Time Window for the SLO.

  6. In step 4, specify the Error Budget Calculation Method and your Objective(s).

  7. In step 5, add a Name, Description, and other details about your SLO. You can also select Alert policies and Labels on this screen.

  8. When you’re done, click Create SLO.

SLOs using Splunk Observability - YAML samples

Here’s an example of Splunk Observability using a rawMetric (threshold metric):

- apiVersion: n9/v1alpha
  kind: SLO
  metadata:
    name: tokyo-server-4-latency
    displayName: Server4 Latency [Tokyo]
    project: splunk-observability
  spec:
    description: Latency of Server4 in Tokyo region
    service: splunk-observability-demo-service
    indicator:
      metricSource:
        name: splunk-observability
    timeWindows:
      - unit: Day
        count: 1
        calendar:
          startTime: 2020-01-21 12:30:00
          timeZone: America/New_York
    budgetingMethod: Occurrences
    objectives:
      - displayName: Excellent
        op: lte
        rawMetric:
          query:
            splunkObservability:
              program: "data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish()"
        value: 200
        target: 0.8
      - displayName: Good
        op: lte
        rawMetric:
          query:
            splunkObservability:
              program: "data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish()"
        value: 250
        target: 0.9
      - displayName: Poor
        op: lte
        rawMetric:
          query:
            splunkObservability:
              program: "data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish()"
        value: 300
        target: 0.99
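
For comparison, a ratio (count) metric objective uses two SignalFlow programs, one for the good counter and one for the total counter. Below is a minimal sketch of a countMetrics objective that reuses the programs from the UI example above; the field layout is an assumption based on the threshold sample and the UI description, so verify it against the current Nobl9 SLO schema before applying:

    objectives:
      - displayName: Good
        target: 0.95
        countMetrics:
          incremental: false
          good:
            splunkObservability:
              program: "data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').stddev().publish()"
          total:
            splunkObservability:
              program: "data('demo.trans.count', filter=filter('demo_datacenter', 'Tokyo'), rollup='rate').mean().publish()"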

Important notes:

The metric specification for Splunk Observability has one field:

  • program – a SignalFlow analytics program (string, mandatory). The search criteria must return exactly one time series; that is, the program needs to return only one key in the data map. For more details, see the Query examples section.

Query examples

For details on Splunk Observability query syntax, check SignalFlow | Splunk Observability documentation.

Querying the Splunk Observability server

Nobl9 queries Splunk Observability for 4 data points every minute, resulting in a 15-second resolution.

Splunk Observability API rate limits

You can control your resource usage using org token (access token) limits. For more information, refer to Org token limits | Splunk Observability documentation and System limits for Splunk Infrastructure Monitoring | Splunk Observability documentation.

Splunk Observability Cloud documentation

Create an Access Token | Splunk Observability documentation

Realms in Endpoints | Splunk Observability documentation

SignalFlow | Splunk Observability documentation

Org token limits | Splunk Observability documentation

System limits for Splunk Infrastructure Monitoring | Splunk Observability documentation

Agent metrics

Creating SLOs via Terraform

Creating agents via Terraform