Elasticsearch
Elasticsearch is a distributed search and storage solution used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases. This integration supports histogram aggregate queries that return either a single value or a single pair stored in the n9-val field; any filtering or matching can be applied as long as the output follows this format.
Elasticsearch parameters and supported features in Nobl9
- General support:
- Release channel: Stable, Beta
- Connection method: Agent
- Replay and SLI Analyzer: Not supported
- Event logs: Not supported
- Query checker: Not supported
- Query parameters retrieval: Supported
- Timestamp cache persistence: Supported
- Query parameters:
- Query interval: 1 min
- Query delay: 1 min
- Jitter: 15 sec
- Timeout: 30 sec
- Agent details and minimum required versions for supported features:
- Environment variable: ES_QUERY_DELAY
- Plugin name: n9elasticsearch
- Query parameters retrieval: 0.73.2
- Timestamp cache persistence: 0.65.0
- Additional notes:
- Support for Elasticsearch v7.9.1
Authentication
The Nobl9 agent calls the Elasticsearch Get API (see Get API | Elasticsearch documentation). To call the Elasticsearch API, you must provide a token. The token can be obtained from the Kibana control panel. All of the required steps are documented by Elasticsearch and can be found here.
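How you mint the token depends on your deployment's security configuration. As an illustration only — the host, port, credentials, and the use of the API key endpoint are assumptions, not the only supported path — an API key can be created through the Elasticsearch Security API:
# Hypothetical sketch: create an API key for the Nobl9 agent.
# Replace the host, port, and basic-auth credentials with values from your deployment.
curl -s -u "elastic:<password>" \
    -H "Content-Type: application/json" \
    -X POST "https://observability-deployment-id.eu-central-1.aws.cloud.es.io:9243/_security/api_key" \
    -d '{"name": "nobl9-agent-key"}'
# The response contains the key's id and api_key values; follow the Elasticsearch/Kibana
# steps referenced above to turn them into the ELASTICSEARCH_TOKEN value the agent expects.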
Custom authorization header
For agent version 0.37.0 or later, you can set the ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER environment variable to authenticate.
If you want to use the custom header for authentication instead of the ELASTICSEARCH_TOKEN in your agent config, you must add the ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER variable with the appropriate value in the Kubernetes YAML or the Docker runtime. For more details, see Deploying Elasticsearch agent.
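The example header value used in the Kubernetes manifest further below uses the Basic scheme, that is, a base64-encoded username:password pair. A quick way to produce such a value (admin:admin123 is a placeholder credential pair, not something you should use):
# Build a Basic authorization header value from a username:password pair:
echo -n 'admin:admin123' | base64
# Output: YWRtaW46YWRtaW4xMjM=
# The environment variable then becomes:
#   ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER="Basic YWRtaW46YWRtaW4xMjM="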
Adding Elasticsearch as a data source
To ensure data transmission between Nobl9 and Elasticsearch, it may be necessary to list the following Nobl9 IP addresses as trusted:
- 18.159.114.21
- 18.158.132.186
- 3.64.154.26
You can add the Elasticsearch data source using the agent connection method.
Nobl9 Web
Follow the instructions below to configure your Elasticsearch agent:
- Navigate to Integrations > Sources.
- Click the plus button.
- Click the required Source button.
- Choose Agent.
- Select one of the following Release Channels:
  - The stable channel is fully tested by the Nobl9 team. It represents the final product; however, this channel does not contain all the new features of a beta release. Use it to avoid crashes and other limitations.
  - The beta channel is under active development. Here, you can check out new features and improvements without the risk of affecting any viable SLOs. Remember that features in this channel can change.
- Add the URL to connect to your data source.
  The URL must point to the Elasticsearch app. If you are using Elastic Cloud, the URL can be obtained from the Elastic Cloud console: select your deployment, open the deployment details, and copy the Elasticsearch endpoint.
- Select a Project.
  Specifying a project is helpful when multiple users are spread across multiple teams or projects. When the Project field is left blank, Nobl9 uses the default project.
- Enter a Display Name.
  You can enter a user-friendly name with spaces in this field.
- Enter a Name.
  The name is mandatory and can contain only lowercase alphanumeric characters and dashes (for example, my-project-1). Nobl9 duplicates the display name here, transforming it into the supported format, but you can edit the result.
- Enter a Description.
  Here you can add details such as who is responsible for the integration (team/owner) and the purpose of creating it.
- Specify the Query delay to set a customized delay for queries when pulling data from the data source.
  The default Query delay for the Elasticsearch integration is 1 minute.
  info: Changing the Query delay may affect your SLI data. For more details, check the Query delay documentation.
- Click Add Data Source.
sloctl
The YAML for setting up an agent connection to Elasticsearch looks like this:
apiVersion: n9/v1alpha
kind: Agent
metadata:
name: elasticSearch
displayName: Elasticsearch agent
project: elastic
spec:
sourceOf:
- Metrics
- Services
releaseChannel: stable
queryDelay:
unit: Minute
value: 720
elasticsearch:
url: https://observability-deployment-id.eu-central-1.aws.cloud.es.io:1234
Field | Type | Description |
---|---|---|
queryDelay.unit (mandatory) | enum | Specifies the unit for the query delay. Possible values: Second \| Minute. Check the query delay documentation for the default query delay of each source. |
queryDelay.value (mandatory) | numeric | Specifies the value for the query delay. Must be a number less than 1440 minutes (24 hours). Check the query delay documentation for the default query delay of each source. |
releaseChannel (mandatory) | enum | Specifies the release channel. Accepted values: beta \| stable. |
Source-specific fields | | |
elasticsearch.url (mandatory) | string | Must point to the Elasticsearch application. |
You can deploy only one agent per YAML file using the sloctl apply command.
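For example, assuming the definition above is saved to a file (the filename is arbitrary), you can apply it with:
sloctl apply -f elasticsearch-agent.yaml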
Agent deployment
When you add the data source, Nobl9 automatically generates a Kubernetes configuration and a Docker command line for you to use to deploy the agent. Both of these are available in the web UI, under the Agent Configuration section. Be sure to swap in your credentials (e.g., replace the <ELASTICSEARCH_TOKEN>
with your organization key).
- Kubernetes
- Kubernetes - Auth Header
- Docker
If you use Kubernetes, you can apply the supplied YAML config file to a Kubernetes cluster to deploy the agent. It will look something like this:
# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.
apiVersion: v1
kind: Secret
metadata:
name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
namespace: default
type: Opaque
stringData:
elasticsearch_token: <ELASTICSEARCH_TOKEN>
client_id: "unique_client_id"
client_secret: "unique_client_secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
namespace: default
spec:
replicas: 1
selector:
matchLabels:
nobl9-agent-name: elastic-test
nobl9-agent-project: elasticsearch
nobl9-agent-organization: nobl9-dev
template:
metadata:
labels:
nobl9-agent-name: elastic-test
nobl9-agent-project: elasticsearch
nobl9-agent-organization: nobl9-dev
spec:
containers:
- name: agent-container
image: nobl9/agent:0.82.2
resources:
requests:
memory: "350Mi"
cpu: "0.1"
env:
- name: N9_CLIENT_ID
valueFrom:
secretKeyRef:
key: client_id
name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
- name: N9_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: client_secret
name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
- name: ELASTICSEARCH_TOKEN
valueFrom:
secretKeyRef:
key: elasticsearch_token
name: nobl9-agent-nobl9-dev-elasticsearch-elastic-test
# The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
# The 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
- name: N9_METRICS_PORT
value: "9090"
When deploying your agent in Kubernetes, you can use ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER for authentication (for agent version 0.37.0 or later):
apiVersion: v1
kind: Secret
metadata:
name: nobl9-agent-nobl9-dev-elasticsearch-es-agent2
namespace: default
type: Opaque
stringData:
elasticsearch_custom_authorization_header: "Basic YWRtaW46YWRtaW4xMjM="
client_id: "unique_client_id"
client_secret: "unique_client_secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nobl9-agent-nobl9-dev-elasticsearch-es-agent2
namespace: default
spec:
replicas: 1
selector:
matchLabels:
nobl9-agent-name: es-agent2
nobl9-agent-project: elasticsearch
nobl9-agent-organization: nobl9-dev
template:
metadata:
labels:
nobl9-agent-name: es-agent2
nobl9-agent-project: elasticsearch
nobl9-agent-organization: nobl9-dev
spec:
containers:
- name: agent-container
image: nobl9/agent:0.82.2-elasticsearch-custom-auth
resources:
requests:
memory: "350Mi"
cpu: "0.1"
env:
- name: N9_CLIENT_ID
valueFrom:
secretKeyRef:
key: client_id
name: nobl9-agent-nobl9-dev-elasticsearch-es-agent2
- name: N9_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: client_secret
name: nobl9-agent-nobl9-dev-elasticsearch-es-agent2
- name: ELASTICSEARCH_CUSTOM_AUTHORIZATION_HEADER
valueFrom:
secretKeyRef:
key: elasticsearch_custom_authorization_header
name: nobl9-agent-nobl9-dev-elasticsearch-es-agent2
# The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
# The 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
- name: N9_METRICS_PORT
value: "9090"
If you use Docker, you can run the Docker command to deploy the agent. It will look something like this:
# DISCLAIMER: This Docker command contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply Docker command.
# The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
# The 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, remove the N9_METRICS_PORT variable from the command.
docker run -d --restart on-failure \
    --name nobl9-agent-nobl9-dev-elasticsearch-elastic-test \
    -e N9_CLIENT_ID="unique_client_id" \
    -e N9_CLIENT_SECRET="unique_client_secret" \
    -e N9_METRICS_PORT=9090 \
    -e ELASTICSEARCH_TOKEN="<ELASTICSEARCH_TOKEN>" \
    nobl9/agent:0.82.2
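As a quick sanity check after the container starts — assuming you also publish the metrics port, for example by adding -p 9090:9090 to the docker run command above — you can hit the agent's /health and /metrics endpoints:
# Verify the agent is up via the endpoints exposed on N9_METRICS_PORT (9090 by default):
curl -s http://localhost:9090/health
curl -s http://localhost:9090/metrics | head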
Creating SLOs with Elasticsearch
Nobl9 Web
Follow the instructions below to create your SLOs with Elasticsearch in the UI:
- Navigate to Service Level Objectives.
- Click the plus button.
- In step 1 of the SLO wizard, select the service the SLO will be associated with.
- In step 2, select Elasticsearch as the data source for your SLO.
- Enter the Index Name.
  For information on how to obtain it, refer to the Index Name | Elasticsearch documentation.
- Specify the Metric. You can choose either a Threshold Metric, where a single time series is evaluated against a threshold, or a Ratio Metric, which allows you to enter two time series to compare (for example, a count of good requests and total requests).
- Choose the Data Count Method for your ratio metric:
- Non-incremental: counts incoming metric values one-by-one. So the resulting SLO graph is pike-shaped.
- Incremental: counts the incoming metric values incrementally, adding every next value to previous values.
It results in a constantly increasing SLO graph.
- Enter a Query or Query for good counter and Query for total counter for the metric you selected.
  For examples of queries, refer to the section below. For details on Elasticsearch queries, refer to the Scope of support for Elasticsearch queries section.
  SLI values for good and total: when choosing the query for the ratio SLI (countMetrics), keep in mind that the values resulting from that query for both good and total:
  - Must be positive.
  - While we recommend using integers, fractions are also acceptable.
  - If using fractions, we recommend them to be larger than 1e-4 = 0.0001.
  - Shouldn't be larger than 1e+20.
In step 3, define a Time Window for the SLO.
-
Rolling time windows are better for tracking the recent user experience of a service.
-
Calendar-aligned windows are best suited for SLOs that are intended to map to business metrics measured on a calendar-aligned basis, such as every calendar month or every quarter.
-
In step 4, specify the Error Budget Calculation Method and your Objective(s).
- Occurrences method counts good attempts against the count of total attempts.
- Time Slicesmethod measures how many good minutes were achieved (when a system operates within defined boundaries) during a time window.
- You can define up to 12 objectives for an SLO.
See the use case example and the SLO calculations guide for more information on the error budget calculation methods.
-
In step 5, add the Display name, Name, and other settings for your SLO:
- Create a composite SLO
- Set notification on data, if this option is available for your data source.
When activated, Nobl9 notifies you if your SLO hasn't received data or received incomplete data for more than 15 minutes. - Add alert policies, labels, and links, if required.
You can add up to 20 links per SLO.
-
Click Create SLO.
sloctl
- rawMetric
- countMetric
Here's an example of Elasticsearch using a rawMetric (threshold metric):
apiVersion: n9/v1alpha
kind: SLO
metadata:
name: elasticsearch-rawmetric-calendar
project: elasticsearch
spec:
service: elasticsearch-service
indicator:
metricSource:
name: elasticsearch
timeWindows:
- unit: Day
count: 7
calendar:
startTime: 2020-07-19 00:00:00
timeZone: Europe/Warsaw
budgetingMethod: Occurrences
objectives:
- displayName: Good
target: 0.75
op: lte
rawMetric:
query:
elasticsearch:
index: apm-7.13.3-transaction
query: |-
{
"query": {
"bool": {
"must": [
{
"match": {
"service.name": "weloveourpets_xyz"
}
},
{
"match": {
"transaction.result": "HTTP 2xx"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "{{.BeginTime}}",
"lte": "{{.EndTime}}"
}
}
}
]
}
},
"size": 0,
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}",
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}"
}
},
"aggs": {
"n9-val": {
"avg": {
"field": "transaction.duration.us"
}
}
}
}
}
}
value: 100
- displayName: Bad
target: 0.90
op: lte
rawMetric:
query:
elasticsearch:
index: apm-7.13.3-transaction
query: |-
{
"query": {
"bool": {
"must": [
{
"match": {
"service.name": "weloveourpets_xyz"
}
},
{
"match": {
"transaction.result": "HTTP 2xx"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "{{.BeginTime}}",
"lte": "{{.EndTime}}"
}
}
}
]
}
},
"size": 0,
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}",
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}"
}
},
"aggs": {
"n9-val": {
"avg": {
"field": "transaction.duration.us"
}
}
}
}
}
}
value: 250
- displayName: Terrible
target: 0.95
op: lte
rawMetric:
query:
elasticsearch:
index: apm-7.13.3-transaction
query: |-
{
"query": {
"bool": {
"must": [
{
"match": {
"service.name": "weloveourpets_xyz"
}
},
{
"match": {
"transaction.result": "HTTP 2xx"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "{{.BeginTime}}",
"lte": "{{.EndTime}}"
}
}
}
]
}
},
"size": 0,
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}",
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}"
}
},
"aggs": {
"n9-val": {
"avg": {
"field": "transaction.duration.us"
}
}
}
}
}
}
value: 500
Here's an example of Elasticsearch using a countMetric (ratio metric):
apiVersion: n9/v1alpha
kind: SLO
metadata:
name: elasticsearch-slo-ratio
displayName: Elastic Search Ratio
project: elastic
spec:
budgetingMethod: Occurrences
indicator:
metricSource:
kind: Agent
name: elastic
project: elastic
objectives:
- countMetrics:
good:
elasticsearch:
query: |
{
"query": {
"bool": {
"must": [
{
"match": {
"service.name": "weloveourpets_xyz"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "{{.BeginTime}}",
"lte": "{{.EndTime}}"
}
}
},
{
"match": {
"transaction.result": "HTTP 2xx"
}
}
]
}
},
"size": 0,
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}",
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}""
}
},
"aggs": {
"n9-val": {
"value_count": {
"field": "transaction.result"
}
}
}
}
}
}
index: apm-7.13.3-transaction
incremental: false
total:
elasticsearch:
query: |
{
"query": {
"bool": {
"must": [
{
"match": {
"service.name": "weloveourpets_xyz"
}
}
],
"filter": [
{
"range": {
"@timestamp": {
"gte": "{{.BeginTime}}",
"lte": "{{.EndTime}}"
}
}
}
]
}
},
"size": 0,
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}"
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}"
}
},
"aggs": {
"n9-val": {
"value_count": {
"field": "transaction.result"
}
}
}
}
}
}
index: apm-7.13.3-transaction
displayName: Enough
target: 0.5
value: 1
service: elasticsearch-demo-service
timeWindows:
- count: 1
isRolling: true
unit: Hour
Scope of support for Elasticsearch queries
When data from Elastic APM is used, @timestamp is an example of a field that holds the timestamp of the document. A different field can be used, depending on the schema of your documents.
Use the following links in the Elasticsearch guides for context:
The Nobl9 agent requires the search results to be a time series. The agent expects a date_histogram aggregation named resolution, which is used as the source of the timestamps, with a child aggregation named n9-val, which is the source of the value(s).
{
"aggs": {
"resolution": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "{{.Resolution}}",
"min_doc_count": 0,
"extended_bounds": {
"min": "{{.BeginTime}}",
"max": "{{.EndTime}}"
}
},
"aggs": {
"n9-val": {
"avg": {
"field": "transaction.duration.us"
}
}
}
}
}
}
- Date Histogram Aggregation | Elasticsearch documentation
  - The recommendation is to use fixed_interval with date_histogram and pass the {{.Resolution}} placeholder as the value. This allows the Nobl9 agent to control the data resolution.
  - The query must not use a fixed_interval longer than one minute, because queries are run every minute for a 1-minute time range.
- Date Histogram Aggregation Fixed Intervals | Elasticsearch documentation
  - The "field": "@timestamp" must match the field used in the filter query.
  - Use extended_bounds with the "{{.BeginTime}}" and "{{.EndTime}}" placeholders as a filter query.
- The {{.BeginTime}} and {{.EndTime}} are mandatory placeholders and must be included in the query. If you use filter and aggregations parameters in your query, then the {{.BeginTime}} and {{.EndTime}} placeholders are required in both parameters.
  The placeholders are then replaced by the Nobl9 agent with the correct time range values.
- Metrics Aggregations | Elasticsearch documentation
  - The n9-val must be a metric aggregation.
  - For a single-value metric aggregation, its value is used as the value of the time series.
  - For a multi-value metric aggregation, the first non-null value returned is used as the value of the time series, and null values are skipped:
    "aggs": {
      "n9-val": {
        ...
      }
    }
- The elasticsearch.index is the name of the index the query is run against.
Querying the Elasticsearch server
Nobl9 calls the Elasticsearch Get API every minute and retrieves the data points from the previous minute up to the present point in time. The number of data points depends on how much data you have stored.
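If you want to sanity-check a query by hand before using it in an SLO, you can run it against the Search API directly, substituting literal values for the Nobl9 placeholders. This is a sketch under assumptions: the endpoint, index, and ApiKey authorization scheme shown are illustrative and depend on your deployment and authentication method.
# Manually run an SLI-style query, with {{.BeginTime}}, {{.EndTime}}, and
# {{.Resolution}} replaced by literal values for a one-minute window:
curl -s -H "Authorization: ApiKey <ELASTICSEARCH_TOKEN>" \
    -H "Content-Type: application/json" \
    -X POST "https://observability-deployment-id.eu-central-1.aws.cloud.es.io:9243/apm-7.13.3-transaction/_search" \
    -d '{
      "size": 0,
      "query": {
        "bool": {
          "filter": [
            { "range": { "@timestamp": { "gte": "2023-01-01T12:00:00Z", "lte": "2023-01-01T12:01:00Z" } } }
          ]
        }
      },
      "aggs": {
        "resolution": {
          "date_histogram": {
            "field": "@timestamp",
            "fixed_interval": "1m",
            "min_doc_count": 0,
            "extended_bounds": { "min": "2023-01-01T12:00:00Z", "max": "2023-01-01T12:01:00Z" }
          },
          "aggs": {
            "n9-val": { "avg": { "field": "transaction.duration.us" } }
          }
        }
      }
    }'
# The response should contain aggregations.resolution.buckets, each bucket holding a
# key_as_string timestamp and an n9-val value -- the time series the agent consumes.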
Elasticsearch API rate limits
Elasticsearch rate limits are configurable through the search.max_buckets setting. Depending on the configuration of the target cluster, the default limit is 65,536 buckets for aggregate queries. For more information, refer to the Search Settings | Elasticsearch documentation.
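If your aggregations hit this limit, the setting can be raised dynamically through the cluster settings API. A sketch, assuming you have permission to change cluster settings; the value shown is illustrative, and raising it increases memory pressure on the cluster.
# Raise search.max_buckets on the target cluster:
curl -s -H "Authorization: ApiKey <ELASTICSEARCH_TOKEN>" \
    -H "Content-Type: application/json" \
    -X PUT "https://observability-deployment-id.eu-central-1.aws.cloud.es.io:9243/_cluster/settings" \
    -d '{ "persistent": { "search.max_buckets": 100000 } }'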