
Elasticsearch

Elasticsearch is a distributed search and storage solution used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases. This integration supports histogram aggregate queries that return either a single value or a single pair stored in the n9-val field. Any filtering or matching can be applied, as long as the output follows this format.

Authentication

The Nobl9 Agent calls the Elasticsearch Get API. To call the Elasticsearch API, you must provide a token, which you can obtain from the Kibana control panel. All of the required steps are documented by Elasticsearch (see ElasticSearch Authentication | Elasticsearch Documentation, linked at the end of this page).
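
As context for how the token is used, here is a minimal sketch of building an authenticated Elasticsearch request in Python. The URL and token values are placeholders, not real credentials; Elasticsearch API keys are sent in the Authorization header with the ApiKey scheme.

```python
import urllib.request

# Hypothetical endpoint and token, for illustration only.
ES_URL = "https://my-deployment.es.example.com:9243"
API_KEY = "base64-encoded-id-and-key"  # obtained from the Kibana control panel

# Elasticsearch API keys are passed in the Authorization header
# using the "ApiKey" scheme.
request = urllib.request.Request(
    f"{ES_URL}/my-index/_doc/1",
    headers={"Authorization": f"ApiKey {API_KEY}"},
)

print(request.get_header("Authorization"))  # ApiKey base64-encoded-id-and-key
```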

Custom Authorization Header

For Agent versions 0.37.0 and later, you can set the ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER environment variable to authenticate.

If you want to use the custom header for authentication instead of ELASTICSEARCH_TOKEN in your Agent config, add the ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER variable with the appropriate value to the Kubernetes YAML or the Docker runtime. For more details, see Deploying Elasticsearch Agent below.
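
The selection logic between the two variables can be pictured roughly as follows. This is an illustrative sketch, not the agent's actual implementation; in particular, the ApiKey scheme used for the default token is an assumption.

```python
import os

def build_auth_header(env=os.environ):
    """Illustrative sketch of the header selection described above
    (not the Nobl9 agent's actual code)."""
    custom = env.get("ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER")
    if custom:
        # The custom value is used verbatim as the Authorization header.
        return {"Authorization": custom}
    token = env.get("ELASTICSEARCH_TOKEN")
    if token:
        # Assumed default: Elasticsearch API key authentication.
        return {"Authorization": f"ApiKey {token}"}
    raise RuntimeError("No Elasticsearch credentials configured")

print(build_auth_header({"ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER": "Basic dXNlcjpwYXNz"}))
# {'Authorization': 'Basic dXNlcjpwYXNz'}
```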

Scope of Support

This integration supports Elasticsearch version 7.9.1.

Adding Elasticsearch as a Data Source in the UI

To add Elasticsearch as a data source in Nobl9 using the Agent connection method, follow these steps:

  1. Navigate to Integrations > Sources.

  2. Click the plus button.

  3. Click the Elasticsearch icon.

  4. Choose Agent, then configure the source as described below.

Elasticsearch Agent

Agent Configuration in the UI

Follow the instructions below to configure your Elasticsearch Agent:

  1. Add the URL to connect to your data source.
    The URL must point to the Elasticsearch app. If you are using Elastic Cloud, you can obtain the URL from the Elastic Cloud console: select your deployment, open the deployment details, and copy the Elasticsearch endpoint.

  2. Select a Project (mandatory).
    Specifying a Project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, a default value appears.

  3. Enter a Display name (optional).
    You can enter a friendly name with spaces in this field.

  4. Enter a Name (mandatory).
    The name is mandatory and can only contain lowercase alphanumeric characters and dashes (for example, my-project-name). This field is populated automatically when you enter a display name, but you can edit the result.

  5. Enter a Description (optional).
    Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.

  6. Click the Add Data Source button.
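
The name rule from step 4 can be expressed as a simple pattern. The regex and the slugification helper below are illustrations of that rule, not Nobl9's exact validator.

```python
import re

# Lowercase alphanumerics and dashes, starting and ending with an
# alphanumeric character (illustrative, not Nobl9's exact validator).
NAME_PATTERN = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def slugify(display_name: str) -> str:
    """Roughly how a display name could be turned into a valid name."""
    return re.sub(r"[^a-z0-9-]+", "-", display_name.lower()).strip("-")

print(slugify("My Project Name"))                   # my-project-name
print(bool(NAME_PATTERN.match("my-project-name")))  # True
print(bool(NAME_PATTERN.match("My_Project")))       # False
```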

Deploying Elasticsearch Agent

When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the Agent. Both are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials (e.g., replace <ELASTICSEARCH_TOKEN> with your organization key).

If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the Agent. It will look something like this:

# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.
apiVersion: v1
kind: Secret
metadata:
  name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
  namespace: default
type: Opaque
stringData:
  elasticsearch_token: <ELASTICSEARCH_TOKEN>
  client_id: "unique_client_id"
  client_secret: "unique_client_secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      nobl9-agent-name: elastic-test
      nobl9-agent-project: elasticsearch
      nobl9-agent-organization: nobl9-dev
  template:
    metadata:
      labels:
        nobl9-agent-name: elastic-test
        nobl9-agent-project: elasticsearch
        nobl9-agent-organization: nobl9-dev
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:latest
          resources:
            requests:
              memory: "350Mi"
              cpu: "0.1"
          env:
            - name: N9_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  key: client_id
                  name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
            - name: N9_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  key: client_secret
                  name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
            - name: ELASTICSEARCH_TOKEN
              valueFrom:
                secretKeyRef:
                  key: elasticsearch_token
                  name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
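
The deployment above must deliver three variables to the agent container via secretKeyRef. A standalone preflight sketch like the following can verify they are present; the check itself is illustrative, not part of the agent.

```python
import os

# Variables the deployment above injects into the agent container.
REQUIRED = ["N9_CLIENT_ID", "N9_CLIENT_SECRET", "ELASTICSEARCH_TOKEN"]

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

print(missing_vars({"N9_CLIENT_ID": "id", "N9_CLIENT_SECRET": "secret"}))
# ['ELASTICSEARCH_TOKEN']
```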

Creating SLOs with Elasticsearch

Creating SLOs in the UI

Follow the instructions below to create your SLOs with Elasticsearch in the UI:

  1. Navigate to Service Level Objectives.

  2. Click the plus button.

  3. In step 1 of the SLO wizard, select the Service the SLO will be associated with.

  4. In step 2, select Elasticsearch as the Data Source for your SLO, enter the Index Name, and then specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).

    For examples of queries, refer to the section below.

  5. In step 3, define a Time Window for the SLO.

  6. In step 4, specify the Error Budget Calculation Method and your Objective(s).

  7. In step 5, add a Name, Description, and other details about your SLO. You can also select Alert Policies and Labels on this screen.

  8. When you’re done, click Create SLO.
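
The difference between the two metric types in step 4 can be sketched in a few lines. This is a simplified illustration; Nobl9's actual error-budget math is richer than this.

```python
# Simplified illustration of the two metric types described above.

def threshold_good(values, threshold):
    """Threshold metric: count data points that satisfy an lte objective."""
    return sum(1 for v in values if v <= threshold)

def ratio(good, total):
    """Ratio metric: compare a good-count series to a total-count series."""
    return sum(good) / sum(total)

latencies_ms = [120, 480, 90, 610]
print(threshold_good(latencies_ms, 500))       # 3
print(ratio(good=[98, 99], total=[100, 100]))  # 0.985
```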

SLOs with Elasticsearch - YAML samples

Here’s an example of Elasticsearch using a rawMetric (Threshold metric):

apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: elasticsearch-rawmetric-calendar
  project: elasticsearch
spec:
  service: elasticsearch-service
  indicator:
    metricSource:
      name: elasticsearch
  timeWindows:
    - unit: Day
      count: 7
      calendar:
        startTime: 2020-07-19 00:00:00
        timeZone: Europe/Warsaw
  budgetingMethod: Occurrences
  objectives:
    - displayName: Good
      target: 0.75
      op: lte
      rawMetric:
        query:
          elasticsearch:
            index: apm-7.13.3-transaction
            query: |-
              {
                "query": {
                  "bool": {
                    "must": [
                      {
                        "match": {
                          "service.name": "weloveourpets_xyz"
                        }
                      },
                      {
                        "match": {
                          "transaction.result": "HTTP 2xx"
                        }
                      }
                    ],
                    "filter": [
                      {
                        "range": {
                          "@timestamp": {
                            "gte": "{{.BeginTime}}",
                            "lte": "{{.EndTime}}"
                          }
                        }
                      }
                    ]
                  }
                },
                "size": 0,
                "aggs": {
                  "resolution": {
                    "date_histogram": {
                      "field": "@timestamp",
                      "fixed_interval": "{{.Resolution}}",
                      "min_doc_count": 0,
                      "extended_bounds": {
                        "min": "{{.BeginTime}}",
                        "max": "{{.EndTime}}"
                      }
                    },
                    "aggs": {
                      "n9-val": {
                        "avg": {
                          "field": "transaction.duration.us"
                        }
                      }
                    }
                  }
                }
              }
      value: 100
    - displayName: Bad
      target: 0.90
      op: lte
      rawMetric:
        query:
          elasticsearch:
            index: apm-7.13.3-transaction
            query: |-
              {
                "query": {
                  "bool": {
                    "must": [
                      {
                        "match": {
                          "service.name": "weloveourpets_xyz"
                        }
                      },
                      {
                        "match": {
                          "transaction.result": "HTTP 2xx"
                        }
                      }
                    ],
                    "filter": [
                      {
                        "range": {
                          "@timestamp": {
                            "gte": "{{.BeginTime}}",
                            "lte": "{{.EndTime}}"
                          }
                        }
                      }
                    ]
                  }
                },
                "size": 0,
                "aggs": {
                  "resolution": {
                    "date_histogram": {
                      "field": "@timestamp",
                      "fixed_interval": "{{.Resolution}}",
                      "min_doc_count": 0,
                      "extended_bounds": {
                        "min": "{{.BeginTime}}",
                        "max": "{{.EndTime}}"
                      }
                    },
                    "aggs": {
                      "n9-val": {
                        "avg": {
                          "field": "transaction.duration.us"
                        }
                      }
                    }
                  }
                }
              }
      value: 250
    - displayName: Terrible
      target: 0.95
      op: lte
      rawMetric:
        query:
          elasticsearch:
            index: apm-7.13.3-transaction
            query: |-
              {
                "query": {
                  "bool": {
                    "must": [
                      {
                        "match": {
                          "service.name": "weloveourpets_xyz"
                        }
                      },
                      {
                        "match": {
                          "transaction.result": "HTTP 2xx"
                        }
                      }
                    ],
                    "filter": [
                      {
                        "range": {
                          "@timestamp": {
                            "gte": "{{.BeginTime}}",
                            "lte": "{{.EndTime}}"
                          }
                        }
                      }
                    ]
                  }
                },
                "size": 0,
                "aggs": {
                  "resolution": {
                    "date_histogram": {
                      "field": "@timestamp",
                      "fixed_interval": "{{.Resolution}}",
                      "min_doc_count": 0,
                      "extended_bounds": {
                        "min": "{{.BeginTime}}",
                        "max": "{{.EndTime}}"
                      }
                    },
                    "aggs": {
                      "n9-val": {
                        "avg": {
                          "field": "transaction.duration.us"
                        }
                      }
                    }
                  }
                }
              }
      value: 500

Scope of Support for Elasticsearch Queries

When data from Elastic APM is used, @timestamp is an example of a field that holds the document's timestamp. A different field can be used, depending on your schema.

note

The {{.BeginTime}} and {{.EndTime}} placeholders are mandatory and are required in both the filter and aggregation parameters. The Nobl9 agent replaces them with the correct time range values.
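
The substitution can be pictured as below. The agent itself performs this with Go-style templates; the timestamp format in this Python simulation is illustrative.

```python
from datetime import datetime, timedelta, timezone

# Simulation of the placeholder substitution the agent performs
# (the exact timestamp format is an assumption for illustration).
QUERY_TEMPLATE = '{"range": {"@timestamp": {"gte": "{{.BeginTime}}", "lte": "{{.EndTime}}"}}}'

def render(template: str, begin: datetime, end: datetime) -> str:
    return (template
            .replace("{{.BeginTime}}", begin.isoformat())
            .replace("{{.EndTime}}", end.isoformat()))

end = datetime(2021, 7, 13, 12, 1, tzinfo=timezone.utc)
begin = end - timedelta(minutes=1)
print(render(QUERY_TEMPLATE, begin, end))
```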

For additional context, see the Elasticsearch documentation links at the end of this page.

The Nobl9 agent requires that the search results are a time series. The agent expects a date_histogram aggregation named resolution, which is the source of the timestamps, with a child aggregation named n9-val, which is the source of the value(s).

{
  "aggs": {
    "resolution": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "{{.Resolution}}",
        "min_doc_count": 0,
        "extended_bounds": {
          "min": "{{.BeginTime}}",
          "max": "{{.EndTime}}"
        }
      },
      "aggs": {
        "n9-val": {
          "avg": {
            "field": "transaction.duration.us"
          }
        }
      }
    }
  }
}
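
A response to a query shaped like the aggregation above can be turned into (timestamp, value) pairs as follows. The response dictionary mirrors the Elasticsearch aggregations response format; the concrete bucket values are made up for illustration.

```python
# Example response shape for a date_histogram aggregation named
# "resolution" with an "n9-val" child aggregation (values are made up).
response = {
    "aggregations": {
        "resolution": {
            "buckets": [
                {"key_as_string": "2021-07-13T12:00:00Z", "n9-val": {"value": 123.4}},
                {"key_as_string": "2021-07-13T12:00:15Z", "n9-val": {"value": None}},
            ]
        }
    }
}

def to_series(resp):
    """Extract (timestamp, value) pairs, skipping empty buckets."""
    buckets = resp["aggregations"]["resolution"]["buckets"]
    return [(b["key_as_string"], b["n9-val"]["value"])
            for b in buckets if b["n9-val"]["value"] is not None]

print(to_series(response))  # [('2021-07-13T12:00:00Z', 123.4)]
```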
  1. Date Histogram Aggregation | Elasticsearch Documentation

    • It is recommended to use fixed_interval with date_histogram and pass the {{.Resolution}} placeholder as its value. This enables the Nobl9 agent to control the data resolution.

    • The query must not use a fixed_interval longer than 1 minute, because queries are run every minute for a 1-minute time range.

  2. Date Histogram Aggregation Fixed Intervals | Elasticsearch Documentation

    • The "field": "@timestamp" must match the field used in the filter query.

    • Using extended_bounds with the same "{{.BeginTime}}" and "{{.EndTime}}" placeholders as the filter query is recommended.

  3. Metrics Aggregations | Elasticsearch Documentation

    • The n9-val must be a metric aggregation.

    • For a single-value metric aggregation, its value is used as the value of the time series.

    • For a multi-value metric aggregation, the first non-null value is used as the value of the time series; null values are skipped.

      "aggs": {
        "n9-val": {
          ...
        }
      }

  4. The elasticsearch.index is the name of the index that the query runs against.
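
The first-non-null rule for multi-value metric aggregations can be sketched like this. The percentiles-style output shape is an illustrative example of a multi-value aggregation result.

```python
# Illustrative multi-value aggregation output (percentiles-style shape).
agg_values = {"50.0": None, "95.0": 320.5, "99.0": 890.1}

def first_non_null(values: dict):
    """Return the first non-null value, skipping nulls (None)."""
    for v in values.values():
        if v is not None:
            return v
    return None

print(first_non_null(agg_values))  # 320.5
```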

Querying the Elasticsearch Server

Nobl9 calls the Elasticsearch Get API every minute and retrieves data points from the previous minute up to the present moment. The number of data points returned depends on how much data you have stored.
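
The per-minute query window described above can be sketched as:

```python
from datetime import datetime, timedelta, timezone

# Each query covers the previous minute up to "now" (illustrative sketch
# of the windowing described above, not the agent's actual code).
def query_window(now=None):
    now = now or datetime.now(timezone.utc)
    return now - timedelta(minutes=1), now

begin, end = query_window(datetime(2021, 7, 13, 12, 1, tzinfo=timezone.utc))
print(begin.isoformat())  # 2021-07-13T12:00:00+00:00
```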

Elasticsearch API Rate Limits

Elasticsearch limits the number of aggregation buckets through the search.max_buckets setting. Depending on the target cluster's configuration, the default limit is 65,536 buckets for aggregate queries. For more information, refer to Search Settings | Elasticsearch Documentation.
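
A quick back-of-the-envelope check shows why the agent's queries stay well under this limit: with a 1-minute window and a fixed_interval no longer than 1 minute, each query produces only a handful of date_histogram buckets.

```python
# Bucket count for a date_histogram over a query window (ceiling division).
def bucket_count(window_seconds: int, interval_seconds: int) -> int:
    return -(-window_seconds // interval_seconds)

print(bucket_count(60, 15))  # 4  (1-minute window, 15s fixed_interval)
print(bucket_count(60, 60))  # 1  (1-minute window, 1m fixed_interval)
```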

ElasticSearch Authentication | Elasticsearch Documentation

Elasticsearch Get API | Elasticsearch Documentation

Elasticsearch APM | Elasticsearch Documentation

Boolean Query | Elasticsearch Documentation

Query and Filter Context | Elasticsearch Documentation

Filter Aggregation | Elasticsearch Documentation

Range Query | Elasticsearch Documentation

Elasticsearch Index | Elasticsearch Documentation

Search Settings | Elasticsearch Documentation