
Elasticsearch Exporter

Status: Available in contrib · Maintainers: @JaredTan95, @carsonip, @lahsivjar · Source: opentelemetry-collector-contrib

Supported Telemetry

Logs Metrics Traces

Overview

This exporter supports sending logs, metrics, traces and profiles to Elasticsearch. The Exporter is API-compatible with Elasticsearch 7.17.x, 8.x, and 9.x. Certain features of the exporter, such as the otel mapping mode, may require newer versions of Elasticsearch. Limited effort will be made to support EOL versions of Elasticsearch — see https://www.elastic.co/support/eol.

Configuration options

Exactly one of the following settings is required:
  • endpoint (no default): The target Elasticsearch URL to which data will be sent (e.g. https://elasticsearch:9200)
  • endpoints (no default): A list of Elasticsearch URLs to which data will be sent, attempted in round-robin order
  • cloudid (no default): The Elastic Cloud ID of the Elastic Cloud Cluster to which data will be sent (e.g. foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=)
When the above settings are missing, endpoints will default to the comma-separated ELASTICSEARCH_URL environment variable. Elasticsearch credentials may be configured via Authentication configuration settings. As a shortcut, the following settings are also supported:
  • user (optional): Username used for HTTP Basic Authentication.
  • password (optional): Password used for HTTP Basic Authentication.
  • api_key (optional): Elasticsearch API Key in “encoded” format (e.g. VFR2WU41VUJIbG9SbGJUdVFrMFk6NVVhVDE3SDlSQS0wM1Rxb24xdXFldw==).
Example:
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    auth:
      authenticator: basicauth

extensions:
  basicauth:
    client_auth:
      username: elastic
      password: changeme

# ...

service:
  extensions: [basicauth]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      exporters: [elasticsearch]

Advanced configuration

HTTP settings

The Elasticsearch exporter supports common HTTP Configuration Settings. Gzip compression is enabled by default. To disable compression, set compression to none. Default Compression Level is set to 1 (gzip.BestSpeed). As a consequence of supporting confighttp, the Elasticsearch exporter also supports common TLS Configuration Settings. The Elasticsearch exporter sets timeout (HTTP request timeout) to 90s by default. All other defaults are as defined by confighttp.
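As an illustration, the compression, timeout, and TLS settings can be tuned via the standard confighttp fields. The values below are illustrative, and the CA file path is a placeholder:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    compression: none   # disable the default gzip compression
    timeout: 2m         # override the default 90s HTTP request timeout
    tls:
      ca_file: /etc/ssl/certs/es-ca.pem  # hypothetical CA certificate path
```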

Queuing and batching

The Elasticsearch exporter supports the common sending_queue settings, which provide both queueing and batching. The default sending queue is configured to do async batching with the following configuration:
sending_queue:
  enabled: true
  sizer: requests
  num_consumers: 10
  queue_size: 10
  batch:
    flush_timeout: 10s
    min_size: 1e+6 # 1MB
    max_size: 5e+6 # 5MB
    sizer: bytes
The default configurations are chosen to be closer to the defaults with the exporter’s previous inbuilt batching feature. The exporterhelper documentation provides more details on the sending_queue settings.

Elasticsearch document routing

Documents are statically or dynamically routed to the target index / data stream in the following order. The first routing mode that applies will be used.
  1. “Static mode”: Route to logs_index for log records, metrics_index for data points, and traces_index for spans, if the corresponding config is non-empty.
  2. “Dynamic - Index attribute mode”: Route to the index name specified in the elasticsearch.index attribute (precedence: log record / data point / span attribute > scope attribute > resource attribute) if the attribute exists.
  3. “Dynamic - Data stream routing mode”: Route to a data stream constructed as ${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}, where data_stream.type is static: logs for log records, metrics for data points, and traces for spans. In a special case with mapping::mode: bodymap, the data_stream.type field (valid values: logs, metrics) can be set dynamically from attributes. The resulting documents will contain the corresponding data_stream.* fields; see restrictions applied to Data Stream Fields.
    1. data_stream.dataset or data_stream.namespace in attributes (precedence: log record / data point / span attribute > scope attribute > resource attribute)
    2. Otherwise, if a scope attribute with the name encoding.format exists and contains a string value, data_stream.dataset will be set to this value. Note that while enabled by default, this behaviour is considered experimental. Some encoding extensions set this field (e.g. awslogsencodingextension), but it is not yet part of Semantic Conventions. There is the potential that the name of this routing field evolves as the discussion progresses in SemConv.
    3. Otherwise, if scope name matches regex /receiver/(\w*receiver) or /connector/(\w*connector), data_stream.dataset will be capture group #1
    4. Otherwise, data_stream.dataset falls back to generic and data_stream.namespace falls back to default.
This can be customised through the following settings:
  • logs_index (optional): The index or data stream name to publish logs (and span events in OTel mapping mode) to. logs_index should be empty unless all logs should be sent to the same index.
  • logs_dynamic_index (optional): uses resource, scope, or log record attributes to dynamically construct index name.
    • enabled(DEPRECATED): No-op. Documents are now always routed dynamically unless logs_index is non-empty. Will be removed in a future version.
  • metrics_index (optional): The index or data stream name to publish metrics to. metrics_index should be empty unless all metrics should be sent to the same index. Note that metrics support is currently in development.
  • metrics_dynamic_index (optional): uses resource, scope or data point attributes to dynamically construct index name.
    • enabled(DEPRECATED): No-op. Documents are now always routed dynamically unless metrics_index is non-empty. Will be removed in a future version.
  • traces_index (optional): The index or data stream name to publish traces to. traces_index should be empty unless all traces should be sent to the same index.
  • traces_dynamic_index (optional): uses resource, scope, or span attributes to dynamically construct index name.
    • enabled(DEPRECATED): No-op. Documents are now always routed dynamically unless traces_index is non-empty. Will be removed in a future version.
  • logstash_format (optional): Logstash format compatibility. Logs, metrics and traces can be written into an index in Logstash format.
    • enabled(default=false): Enable/disable Logstash format compatibility. When logstash_format::enabled is true, the index name is composed using the dynamic routing rules above as the prefix and the date as the suffix, e.g. if the computed index name is logs-generic-default, the resulting index will be logs-generic-default-YYYY.MM.DD, where the suffix is the date the data was generated.
    • prefix_separator(default=-): Set a separator between logstash_prefix and date.
    • date_format(default=%Y.%m.%d): Time format (based on strftime) to generate the second part of the Index name.
  • logs_dynamic_id (optional): Dynamically determines the document ID to be used in Elasticsearch based on a log record attribute.
    • enabled(default=false): Enable/Disable dynamic ID for log records. If elasticsearch.document_id exists and is not an empty string in the log record attributes, it will be used as the document ID. Otherwise, the document ID will be generated by Elasticsearch. The attribute elasticsearch.document_id is removed from the final document when the otel mapping mode is used. See Setting a document id dynamically.
  • traces_dynamic_id (optional): Dynamically determines the document ID to be used in Elasticsearch based on a span attribute.
    • enabled(default=false): Enable/Disable dynamic ID for spans. If elasticsearch.document_id exists and is not an empty string in the span attributes, it will be used as the document ID. For span events, this only applies when using otel mapping mode (where span events are stored as separate documents). Otherwise, the document ID will be generated by Elasticsearch. The attribute elasticsearch.document_id is removed from the final document when the otel mapping mode is used. See Setting a document id dynamically.

Document routing exceptions for OTel data mode

In OTel mapping mode (mapping::mode: otel), there is special handling in addition to the above document routing rules in Elasticsearch document routing. The order to determine the routing mode is the same as Elasticsearch document routing.
  1. “Static mode”: Span events are separate documents routed to logs_index if non-empty.
  2. “Dynamic - Index attribute mode”: Span events are separate documents routed using attribute elasticsearch.index (precedence: span event attribute > scope attribute > resource attribute) if the attribute exists.
  3. “Dynamic - Data stream routing mode”:
  • For all documents, data_stream.dataset will always be appended with .otel.
  • As a special case of rule 3.1 in Elasticsearch document routing, span events are separate documents that have data_stream.type: logs and are routed using data stream attributes (precedence: span event attribute > scope attribute > resource attribute)

Elasticsearch document mapping

The Elasticsearch exporter supports several document schemas and preprocessing behaviours, which may be configured through the following settings:
  • mapping:
    • mode (DEPRECATED): The mapping mode if supplied via config file is ignored. Use the X-Elastic-Mapping-Mode client metadata key or the elastic.mapping.mode scope attribute instead. If not specified via these methods, the default mapping mode is otel.
    • allowed_modes (defaults to all mapping modes): A list of allowed mapping modes.
The mapping mode can be controlled via the client metadata key X-Elastic-Mapping-Mode, e.g. via HTTP headers, gRPC metadata. It is possible to restrict which mapping modes may be requested by configuring mapping::allowed_modes, which defaults to all mapping modes. Keep in mind that not all processors or exporter configurations will maintain client metadata. The mapping mode can also be controlled via the scope attribute elastic.mapping.mode. If specified, this takes precedence over the X-Elastic-Mapping-Mode client metadata. If any scope has an invalid mapping mode, the exporter will reject the entire batch. The attribute will be excluded from the final document. Valid mapping modes are:
  • none
  • ecs
  • otel
  • raw
  • bodymap
See below for a description of each mapping mode.

Migration: Setting mapping mode via scope attribute

Since the mapping::mode config option is deprecated, set the mapping mode via a scope attribute using the transform processor. This approach sets the elastic.mapping.mode scope attribute on the telemetry data:
processors:
  transform:
    log_statements:
      - context: scope
        statements:
          - set(attributes["elastic.mapping.mode"], "otel")
    trace_statements:
      - context: scope
        statements:
          - set(attributes["elastic.mapping.mode"], "otel")
    metric_statements:
      - context: scope
        statements:
          - set(attributes["elastic.mapping.mode"], "otel")
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [transform]
      exporters: [elasticsearch]
[!NOTE] The scope attribute elastic.mapping.mode takes precedence over the X-Elastic-Mapping-Mode client metadata. The attribute will be excluded from the final document sent to Elasticsearch.
[!NOTE] otel and ecs mapping modes require Elasticsearch 8.12 or above. otel mode works best with Elasticsearch 8.16 or above.

OTel mapping mode

The default and recommended “OTel-native” mapping mode. In otel mapping mode, the Elasticsearch exporter stores documents in Elastic’s preferred “OTel-native” schema: documents use the original attribute names and closely follow the event structure of the OTLP data.
The attributes data_stream.type, data_stream.dataset, and data_stream.namespace receive special treatment. Instead of being serialized under the attributes.* namespace, they are placed at the root of the document to conform with the conventions of the data stream naming scheme, which maps them as constant_keyword fields. data_stream.dataset will always be appended with .otel if dynamic data stream routing mode is active.
Span events are stored as separate documents. They are routed with data_stream.type set to logs if dynamic data stream routing mode is active. The elasticsearch.index attribute, if present, is removed from the final document.
| Signal | Supported |
| --- | --- |
| Logs | :white_check_mark: |
| Traces | :white_check_mark: |
| Metrics | :white_check_mark: |
| Profiles | :white_check_mark: |

ECS mapping mode

[!WARNING] The ecs mapping mode is currently undergoing changes, and its behaviour is unstable.
In ecs mapping mode, the Elasticsearch Exporter maps fields from OpenTelemetry Semantic Conventions (version 1.22.0) to Elastic Common Schema where possible. This mode may be used for compatibility with existing dashboards that work with ECS.
| Signal | ecs |
| --- | --- |
| Logs | :white_check_mark: |
| Traces | :white_check_mark: |
| Metrics | :white_check_mark: |
| Profiles | :no_entry_sign: |

Bodymap mapping mode

[!WARNING] The bodymap mapping mode is currently undergoing changes, and its behaviour is unstable.
In bodymap mapping mode, the Elasticsearch Exporter supports only logs and will take the “body” of a log record as the exact content of the Elasticsearch document without any transformation. This mapping mode is intended for use cases where the client wishes to have complete control over the Elasticsearch document structure.
| Signal | bodymap |
| --- | --- |
| Logs | :white_check_mark: |
| Traces | :no_entry_sign: |
| Metrics | :no_entry_sign: |
| Profiles | :no_entry_sign: |

Default (none) mapping mode

In the none mapping mode, the Elasticsearch exporter produces documents with the original field names from the OTLP data structures.
| Signal | none |
| --- | --- |
| Logs | :white_check_mark: |
| Traces | :white_check_mark: |
| Metrics | :no_entry_sign: |
| Profiles | :no_entry_sign: |

Raw mapping mode

The raw mapping mode is identical to none, except for two differences:
  • In none mode attributes are mapped with an Attributes. prefix, while in raw mode they are not.
  • In none mode span events are mapped with an Events. prefix, while in raw mode they are not.
| Signal | raw |
| --- | --- |
| Logs | :white_check_mark: |
| Traces | :white_check_mark: |
| Metrics | :no_entry_sign: |
| Profiles | :no_entry_sign: |

Elasticsearch ingest pipeline

Documents may be optionally passed through an Elasticsearch Ingest pipeline prior to indexing. This can be configured through the following settings:
  • pipeline (optional): ID of an Elasticsearch Ingest pipeline used for processing documents published by the exporter.
  • logs_dynamic_pipeline (optional): Dynamically determines the ingest pipeline to be used in Elasticsearch based on attributes in the log signal.
    • enabled(default=false): Enable/Disable dynamic pipeline. If elasticsearch.ingest_pipeline attribute exists in the log record attributes and is not an empty string, it will be used as the Elasticsearch ingest pipeline. This currently only applies to the log signal. The attribute elasticsearch.ingest_pipeline is removed from the final document when the otel mapping mode is used.
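A minimal sketch, assuming an ingest pipeline named my-pipeline already exists in Elasticsearch (the pipeline name is a placeholder):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    pipeline: my-pipeline     # static pipeline applied to all documents
    logs_dynamic_pipeline:
      enabled: true           # per-record override via the elasticsearch.ingest_pipeline attribute
```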

Elasticsearch bulk indexing

The Elasticsearch exporter uses the Elasticsearch Bulk API for indexing documents. The behaviour of this bulk indexing can be configured with the following settings:
  • num_workers (DEPRECATED, use sending_queue::num_consumers instead): This config is deprecated and will be used to configure sending_queue::num_consumers if sending_queue::num_consumers is not explicitly defined. Number of workers publishing bulk requests concurrently.
  • flush (DEPRECATED, use sending_queue instead): This config is deprecated and will be used to configure different options for sending_queue if sending_queue options are not explicitly defined. Event bulk indexer buffer flush settings
    • bytes (DEPRECATED, use sending_queue::batch::max_size instead): This config is deprecated and will be used to configure sending_queue::batch::max_size if sending_queue::batch::max_size is not explicitly defined. See the sending_queue::batch::max_size for more details.
    • interval (DEPRECATED, use sending_queue::batch::flush_timeout instead): This config is deprecated and will be used to configure sending_queue::batch::flush_timeout if sending_queue::batch::flush_timeout is not explicitly defined. See the sending_queue::batch::flush_timeout for more details.
  • retry: Elasticsearch bulk request retry settings
    • enabled (default=true): Enable/Disable request retry on error. Failed requests are retried with exponential backoff.
    • max_requests (DEPRECATED, use retry::max_retries instead): Number of HTTP request retries including the initial attempt. If used, retry::max_retries will be set to max_requests - 1.
    • max_retries (default=2): Number of HTTP request retries. To disable retries, set retry::enabled to false instead of setting max_retries to 0.
    • initial_interval (default=100ms): Initial waiting time if a HTTP request failed.
    • max_interval (default=1m): Max waiting time if a HTTP request failed.
    • retry_on_status (default=[429]): Status codes that trigger request or document level retries. Request level retry and document level retry status codes are shared and cannot be configured separately. To avoid duplicates, it defaults to [429].
  • sending_queue: Configures the queueing and batching behaviour. Below are the defaults (which may vary from standard defaults), for full configuration check the exporterhelper docs.
    • enabled (default=true): Enable queueing and batching behaviour.
    • num_consumers (default=10): Number of consumers that dequeue batches.
    • wait_for_result (default=false): If true, blocks incoming requests until processed.
    • block_on_overflow (default=false): If true, blocks the request until the queue has space.
    • sizer (default=requests): Measure queueing by requests.
    • queue_size (default=10): Maximum size the queue can accept.
    • batch:
      • flush_timeout (default=10s): Time after which batch is exported irrespective of other settings.
      • sizer (default=bytes): Size batches by bytes. Note that bytes here are based on the pdata model, not on the NDJSON docs that will constitute the bulk indexer requests. To address this discrepancy, the bulk indexers may also flush when their NDJSON size exceeds the configured max_size, since the pdata representation is smaller than the corresponding NDJSON encoding.
      • min_size (default=1MB): Min size of the batch.
      • max_size (default=5MB): Max size of the batch. This value should be much lower than Elasticsearch’s http.max_content_length config to avoid HTTP 413 Entity Too Large error. It is recommended to keep this value under 5MB.
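Putting the retry and queue settings together, a tuned configuration might look like the following sketch. The values are illustrative, not recommendations:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    retry:
      enabled: true
      max_retries: 3
      initial_interval: 200ms
      max_interval: 30s
    sending_queue:
      enabled: true
      num_consumers: 20
      queue_size: 20
      batch:
        flush_timeout: 5s
        sizer: bytes
        min_size: 1e+6
        max_size: 5e+6
```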

Bulk indexing error response

With Elasticsearch 8.18+, a new query parameter include_source_on_error allows users to receive the source document in the error response, if there were any parsing errors in the bulk request. In the exporter, the equivalent configuration is also named include_source_on_error.
  • include_source_on_error:
    • true: Enables bulk index responses to include source document on error. Requires Elasticsearch 8.18+. WARNING: the exporter may log error responses containing request payload, causing potential sensitive data to be exposed in logs.
    • false: Disables including source document on bulk index error responses. Requires Elasticsearch 8.18+.
    • null (default): Backward-compatible option for older Elasticsearch versions. By default, the error reason is discarded from bulk index responses entirely, i.e. only error type is returned.
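For example, on an 8.18+ cluster the source document can be included in bulk error responses. Keep in mind the warning above about sensitive data reaching logs:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    include_source_on_error: true  # requires Elasticsearch 8.18+
```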

Elasticsearch node discovery

The Elasticsearch Exporter will regularly check Elasticsearch for available nodes. Newly discovered nodes will automatically be used for load balancing. Settings related to node discovery are:
  • discover:
    • on_start (optional): If enabled the exporter queries Elasticsearch for all known nodes in the cluster on startup.
    • interval (optional): Interval to update the list of Elasticsearch nodes.
Node discovery can be disabled by setting discover::interval to 0.
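For example, to query the node list at startup and refresh it every five minutes (values are illustrative):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    discover:
      on_start: true
      interval: 5m   # set to 0 to disable node discovery
```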

Telemetry settings

The Elasticsearch Exporter’s own telemetry settings for testing and debugging purposes. ⚠️ This is experimental and may change at any time.
  • telemetry:
    • log_request_body (default=false): Logs Elasticsearch client request body as a field in a log line at DEBUG level. It requires service::telemetry::logs::level to be set to debug. WARNING: Enabling this config may expose sensitive data.
    • log_response_body (default=false): Logs Elasticsearch client response body as a field in a log line at DEBUG level. It requires service::telemetry::logs::level to be set to debug. WARNING: Enabling this config may expose sensitive data.
    • log_failed_docs_input (default=false): Include the input (action line and document line) causing indexing error under input field in a log line at DEBUG level. It requires service::telemetry::logs::level to be set to debug. WARNING: Enabling this config may expose sensitive data.
    • log_failed_docs_input_rate_limit (default=“1s”): Rate limiting of logs emitted by log_failed_docs_input config, e.g. “1s” means roughly 1 log line per second. A zero or negative value disables rate limiting.
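A debugging sketch that enables request and response body logging; note it only takes effect when the collector’s own log level is set to debug:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    telemetry:
      log_request_body: true
      log_response_body: true

service:
  telemetry:
    logs:
      level: debug
```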

Metadata keys

Metadata keys are a list of client metadata keys that the exporter uses to partition batches when sending_queue is enabled with batching support and enrich internal telemetry. ⚠️ This is experimental and may change at any time.
  • metadata_keys (optional): List of metadata keys that will be used to partition the data into batches if sending_queue is enabled with batching support. With batching enabled only these metadata keys are guaranteed to be propagated. The keys will also be used to enrich the exporter’s internal telemetry if defined. The keys are extracted from the client metadata available via the context and added to the internal telemetry as attributes.
NOTE: The metadata keys are converted to lower case, as key lookups for client metadata are case-insensitive. This means that metrics produced by internal telemetry will also have the attribute in lower case.
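For instance, to partition batches by tenant, assuming clients send a hypothetical x-tenant-id metadata key:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    metadata_keys:
      - x-tenant-id   # hypothetical client metadata key; lookups are case-insensitive
```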

Exporting metrics

Metrics support is currently in development. The metric types supported are:
  • Gauge
  • Sum
  • Histogram (Delta temporality only)
  • Exponential histogram (Delta temporality only)
  • Summary

Metrics dynamic templates

For metrics, the exporter sends per-document dynamic_templates with each bulk index action so that Elasticsearch can apply the correct mapping to metric fields. It uses the bulk API dynamic_templates parameter:
A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won’t be used.
The index template must define dynamic templates whose names match the values sent by the exporter. Behavior depends on the mapping mode:
| Mapping mode | Field path in document | Template names sent | Notes |
| --- | --- | --- | --- |
| OTel | metrics.&lt;metric name&gt; | histogram, summary, gauge_double, gauge_long, counter_double, counter_long | The OTel data plugin defines more specific templates. |
| ECS | metric.&lt;metric name&gt; | histogram_metrics, summary_metrics, double_metrics | Relies on core templates in metrics@mappings. Intended to match the APM metrics ingest pipeline. |
  • OTel: Each metric is written under the metrics object; the bulk action maps full field names (e.g. metrics.my_metric) to one of the OTel template names above based on metric type (histogram, summary, gauge, or counter) and value type.
  • ECS: Each metric is written as a top-level field metric.<name>; the bulk action maps that field name to one of the ECS/APM template names (histogram_metrics, summary_metrics, or double_metrics for gauges and counters).
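To illustrate, a bulk action for a histogram metric in OTel mode might carry a dynamic_templates map like the following NDJSON sketch. The metric name, index, and timestamp are illustrative, not produced verbatim by the exporter:

```json
{ "create": { "_index": "metrics-generic.otel-default", "dynamic_templates": { "metrics.my_latency": "histogram" } } }
{ "@timestamp": "2025-01-31T00:00:00Z", "data_stream": { "type": "metrics", "dataset": "generic.otel", "namespace": "default" }, "metrics": { "my_latency": { "counts": [3, 1], "values": [0.1, 0.5] } } }
```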

Exporting profiles

Profiles support is currently in development, and should not be used in production. Profiles only support the OTel mapping mode. Example:
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    mapping:
      mode: otel
[!IMPORTANT] For the Elasticsearch Exporter to be able to export Profiles data, Universal Profiling needs to be installed in the database. See the Universal Profiling getting started documentation. You will need to use the Elasticsearch endpoint, with an Elasticsearch API key.

ECS Mapping

elasticsearchexporter follows the ECS mapping defined here: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model-appendix.md#elastic-common-schema
When mode is set to ecs, elasticsearchexporter converts resource-level and record-level (log or trace) attributes from their Semantic Conventions (SemConv) names to the equivalent Elastic Common Schema (ECS) names. If the target ECS field name is specified as an empty string (""), the converter will neither convert the SemConv key to the equivalent ECS name nor pass the SemConv key through as-is to become the ECS name. When “Preserve” is true, the attribute is kept in the payload in addition to being duplicated under its ECS equivalent.
When more than one SemConv attribute maps to the same ECS attribute, the converter maps all of them to the same ECS name; this is meant to support backwards compatibility for SemConv attributes that have been renamed or deprecated. The value of the last-mapped attribute takes precedence.
It is recommended to enrich events using the elasticapmprocessor to ensure indexed documents contain all required Elastic fields to power the Kibana UI.

Resource attribute mapping

| Semantic Convention Name | ECS Name | Preserve | Skip if exists |
| --- | --- | --- | --- |
| client.address | client.ip | false | false |
| cloud.platform | cloud.service.name | false | false |
| container.image.tags | container.image.tag | false | false |
| deployment.environment | service.environment | false | false |
| deployment.environment.name | service.environment | false | false |
| faas.instance | faas.id | false | false |
| faas.trigger | faas.trigger.type | false | false |
| host.arch | host.architecture | false | false |
| host.hostname | host.hostname | true | true |
| k8s.cluster.name | orchestrator.cluster.name | false | false |
| k8s.container.name | kubernetes.container.name | false | false |
| k8s.cronjob.name | kubernetes.cronjob.name | false | false |
| k8s.daemonset.name | kubernetes.daemonset.name | false | false |
| k8s.deployment.name | kubernetes.deployment.name | false | false |
| k8s.job.name | kubernetes.job.name | false | false |
| k8s.namespace.name | kubernetes.namespace | false | false |
| k8s.node.name | kubernetes.node.name | false | false |
| k8s.pod.name | kubernetes.pod.name | false | false |
| k8s.pod.uid | kubernetes.pod.uid | false | false |
| k8s.replicaset.name | kubernetes.replicaset.name | false | false |
| k8s.statefulset.name | kubernetes.statefulset.name | false | false |
| os.description | host.os.full | false | false |
| os.name | host.os.name | false | false |
| os.type | host.os.platform | false | false |
| os.version | host.os.version | false | false |
| process.command_line | process.args | false | false |
| process.executable.name | process.title | false | false |
| process.executable.path | process.executable | false | false |
| process.parent.pid | process.parent.pid | false | false |
| process.runtime.name | service.runtime.name | false | false |
| process.runtime.version | service.runtime.version | false | false |
| service.instance.id | service.node.name | false | false |
| source.address | source.ip | false | false |
| telemetry.distro.name | "" | false | false |
| telemetry.distro.version | "" | false | false |
| telemetry.sdk.language | service.language.name | false | false |
| telemetry.sdk.name | "" | false | false |
| telemetry.sdk.version | service.language.version | false | false |

Log record attribute mapping

| Semantic Convention Name | ECS Name | Preserve |
| --- | --- | --- |
| event.name | event.action | false |
| exception.message | error.message | false |
| exception.stacktrace | error.stacktrace | false |
| exception.type | error.type | false |
| exception.escaped | event.error.exception.handled | false |
| http.response.body.size | http.response.encoded_body_size | false |

Span attribute mapping

| Semantic Convention Name | ECS Name | Preserve |
| --- | --- | --- |
| db.system | span.db.type | false |
| db.namespace | span.db.instance | false |
| db.query.text | span.db.statement | false |
| http.response.body.size | http.response.encoded_body_size | false |

Compound Mapping

Some ECS fields cannot be mapped 1:1 and require more advanced logic.

host.name and host.hostname

The SemConv attribute host.name is kept as the ECS field host.name and additionally copied to the ECS field host.hostname, if host.hostname does not already exist.

@timestamp

If the record contains a timestamp, that value is used. Otherwise, the observed timestamp is used.

Setting a document id dynamically

The logs_dynamic_id and traces_dynamic_id settings allow users to set the document ID dynamically based on log record, span, or span event attributes. Besides the ability to control the document ID, these settings also work as a deduplication mechanism, as Elasticsearch will refuse to index a document with the same ID. For logs, the log record attribute elasticsearch.document_id can be set explicitly by a processor based on the log record. For traces, the span attribute elasticsearch.document_id (or span event attribute for span events) can be set explicitly by a processor based on the span or span event. As an example, the transform processor can create this attribute dynamically for logs:
processors:
  transform/es-doc-id:
    error_mode: ignore
    log_statements:
      - context: log
        condition: attributes["event_name"] != null && attributes["event_creation_time"] != null
        statements:
          - set(attributes["elasticsearch.document_id"], Concat(["log", attributes["event_name"], attributes["event_creation_time"]], "-"))
For traces, you can use the transform processor to set the document ID based on trace and span IDs to ensure uniqueness:
exporters:
  elasticsearch:
    mapping:
      mode: otel  # Required for span events to be separate documents
    traces_dynamic_id:
      enabled: true

processors:
  transform/es-doc-id-traces:
    error_mode: ignore
    trace_statements:
      - context: span
        statements:
          # Set ID for spans
          - set(attributes["elasticsearch.document_id"], Concat([trace_id.string, span_id.string], "-"))
      - context: spanevent
        statements:
          # Set ID for span events (only works in otel mapping mode)
          - set(attributes["elasticsearch.document_id"], Concat([trace_id.string, span_id.string, name], "-"))
Note: Span events are only stored as separate documents in otel mapping mode. In other mapping modes (ecs, bodymap, raw), span events are embedded within the span document and will not have separate document IDs.

Known issues

version_conflict_engine_exception

Symptom: elasticsearchexporter logs an error “failed to index document” with error.type “version_conflict_engine_exception” and error.reason containing “version conflict, document already exists”. This happens when the target data stream is a TSDB metrics data stream (e.g. using the OTel mapping mode sending to Elasticsearch 8.16+, or the ECS mapping mode sending to system integration data streams).
Elasticsearch Time Series Data Streams allow only one document per timestamp with the same dimensions; the purpose is to avoid duplicate data when retrying a batch of metrics that was previously sent but failed to be indexed. The dimensions are mostly made up of resource attributes, scope attributes, scope name, attributes, and the unit. The exporter can only group metrics with the same dimensions into the same document if they arrive in the same batch. To ensure metrics are not dropped even if they arrive in different batches, in the otel mapping mode the exporter adds a fingerprint of the metric names to the document. Note that this functionality requires both
  • minimum Elasticsearch Exporter version 0.121.0
  • minimum Elasticsearch version 8.17.6, 8.18.1, 8.19.0, 9.0.1, or 9.1.0
If you are on an earlier version of Elasticsearch, either update your cluster or install this custom component template:
PUT _component_template/metrics-otel@custom
{
  "template": {
    "mappings": {
      "properties": {
        "_metric_names_hash": {
          "type": "keyword",
          "time_series_dimension": true
        }
      }
    }
  }
}
After installing this component template, if you’ve previously ingested data, you’ll need to wait until the old index of the time series data stream reaches its end_time. This can take up to 30 minutes by default; see time series index look-ahead time for more information.

In most situations this error is just a sign that Elasticsearch’s duplicate detection is working as intended. However, data may be classified as a duplicate when it is not, which means that data is lost. In that case:
  1. If the data is not sent in otel mapping mode to metrics-*.otel-* data streams, the metric names fingerprint is not applied. This can happen for OTel host and k8s metrics that the elasticinframetricsprocessor has translated to the format consumed by the host and k8s dashboards in Kibana. If these metrics arrive in the elasticsearchexporter in different batches, they will not be grouped into the same document, which can cause the version_conflict_engine_exception error. Try removing the batchprocessor from the pipeline (or setting send_batch_max_size: 0) so that metrics are not split into different batches; this gives the exporter the opportunity to group all related metrics into the same document.
  2. Otherwise, check your metrics pipeline setup for misconfiguration that causes an actual violation of the single writer principle. This means that the same metric with the same dimensions is sent from multiple sources, which is not allowed in the OTel metrics data model.
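If you keep the batchprocessor in the pipeline, the setting mentioned above can be sketched as follows (disabling the split limit so related metrics stay in one batch):

```yaml
processors:
  batch:
    # 0 removes the upper size limit, so a batch is never split;
    # the exporter can then group all related metrics into one document.
    send_batch_max_size: 0
```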

flush failed (400) illegal_argument_exception

Symptom: bulk indexer logs an error that indicates “bulk indexer flush error” with bulk request returning HTTP 400 and an error type of illegal_argument_exception, similar to the following.
error   [email protected]/bulkindexer.go:343       bulk indexer flush error
{
  "otelcol.component.id": "elasticsearch",
  "otelcol.component.kind": "Exporter",
  "otelcol.signal": "logs",
  "error": "flush failed (400): {\"error\":{\"type\":\"illegal_argument_exception\",\"caused_by\":{}}}"
}
In this scenario, Elasticsearch may reject the bulk request because the require_data_stream bulk action metadata is not supported. This may happen when you use OTel mapping mode (the default mapping mode from v0.122.0, or explicitly by configuring mapping::mode: otel) or ECS mapping mode, and send data to Elasticsearch version < 8.12. To resolve this, upgrade Elasticsearch to 8.12+; for OTel mapping mode, 8.16+ is recommended. Alternatively, try other mapping modes, but the document structure will be different.
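For example, on Elasticsearch < 8.12 you could explicitly select a mode other than otel or ecs, such as raw, which does not rely on the require_data_stream metadata (note that this changes the document structure):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    mapping:
      mode: raw  # avoids the require_data_stream bulk action metadata used by otel/ecs
```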

“dropping cumulative temporality histogram” and “dropping cumulative temporality exponential histogram”

Symptom: elasticsearchexporter logs a warning “dropping cumulative temporality histogram” similar to:
warn    [email protected]/exporter.go:340  validation errors
{
  "resource": {
    "service.instance.id": "33ffe7e8-e944-4f92-8fce-9094f4b61d1d",
    "service.name": "./elastic-agent",
    "service.version": "9.1.5"
  },
  "otelcol.component.id": "elasticsearch/otel",
  "otelcol.component.kind": "exporter",
  "otelcol.signal": "metrics",
  "error": "dropping cumulative temporality histogram \"http.client.request.duration\""
}
This issue occurs because Elasticsearch does not support cumulative temporality for histograms. As a workaround, you can either:
  • Export histogram metrics using delta temporality, or
  • Apply a cumulativetodelta processor. For more details, see Metrics data ingestion.
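The second workaround can be sketched as follows, converting cumulative metrics to delta temporality before they reach the exporter (assuming an `otlp` receiver):

```yaml
processors:
  cumulativetodelta: {}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [cumulativetodelta]
      exporters: [elasticsearch]
```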

Attributes

| Attribute Name | Description | Type | Values |
|---|---|---|---|
| error.type | The type of error that occurred when processing the documents. | string | |
| failure_store | The status of the failure store. | string | unknown, not_enabled, used, failed |
| http.response.status_code | HTTP status code. | int | |
| outcome | The operation outcome. | string | success, failed_client, failed_server, timeout, too_many, failure_store, internal_server_error |

Configuration

Example Configuration

elasticsearch:
  endpoints: [https://elastic.example.com:9200]
elasticsearch/trace:
  tls:
    insecure: false
  endpoints: [https://elastic.example.com:9200]
  timeout: 2m
  headers:
    myheader: test
  traces_index: trace_index
  traces_dynamic_index:
    enabled: false
  logs_dynamic_index:
    enabled: false
  metrics_dynamic_index:
    enabled: false
  pipeline: mypipeline
  user: elastic
  password: search
  api_key: AvFsEiPs==
  discover:
    on_start: true
  retry:
    max_retries: 5
    retry_on_status:
      - 429
      - 500
elasticsearch/metric:
  tls:
    insecure: false
  endpoints: [http://localhost:9200]
  metrics_index: my_metric_index
  traces_dynamic_index:
    enabled: false
  logs_dynamic_index:
    enabled: false
  metrics_dynamic_index:
    enabled: false
  timeout: 2m
  headers:
    myheader: test
  pipeline: mypipeline
  user: elastic
  password: search
  api_key: AvFsEiPs==
  discover:
    on_start: true
  retry:
    max_retries: 5
    retry_on_status:
      - 429
      - 500
elasticsearch/log:
  tls:
    insecure: false
  endpoints: [http://localhost:9200]
  logs_index: my_log_index
  traces_dynamic_index:
    enabled: false
  logs_dynamic_index:
    enabled: false
  metrics_dynamic_index:
    enabled: false
  timeout: 2m
  headers:
    myheader: test
  pipeline: mypipeline
  user: elastic
  password: search
  api_key: AvFsEiPs==
  discover:
    on_start: true
  retry:
    max_retries: 5
    retry_on_status:
      - 429
      - 500
elasticsearch/logstash_format:
  endpoints: [http://localhost:9200]
  logstash_format:
    enabled: true
elasticsearch/raw:
  endpoints: [http://localhost:9200]
  mapping:
    mode: raw
elasticsearch/cloudid:
  cloudid: foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=
elasticsearch/confighttp_endpoint:
  endpoint: https://elastic.example.com:9200
elasticsearch/compression_none:
  endpoint: https://elastic.example.com:9200
  compression: none
elasticsearch/compression_gzip:
  endpoint: https://elastic.example.com:9200
  compression: gzip
elasticsearch/include_source_on_error:
  endpoint: https://elastic.example.com:9200
  include_source_on_error: true
elasticsearch/metadata_keys:
  endpoint: https://elastic.example.com:9200
  metadata_keys:
    - x-test-1
    - x-test-2
elasticsearch/sendingqueue_disabled:
  endpoint: https://elastic.example.com:9200
  sending_queue:
    enabled: false
elasticsearch/sendingqueue_enabled:
  endpoint: https://elastic.example.com:9200
  sending_queue:
    enabled: true
    sizer: requests
    num_consumers: 100
    batch:
      flush_timeout: 1s
      sizer: items
      min_size: 1000
      max_size: 5000
elasticsearch/backward_compat_for_deprecated_cfgs/new_config_takes_priority:
  endpoint: https://elastic.example.com:9200
  # Should be ignored and left as-is
  num_workers: 11
  flush:
    interval: 11s
    bytes: 1001
  # Should take precedence
  sending_queue:
    enabled: true
    sizer: requests
    num_consumers: 111
    batch:
      flush_timeout: 111s
      max_size: 1_000_001
      sizer: bytes
elasticsearch/backward_compat_for_deprecated_cfgs/fallback_to_old_cfg:
  endpoint: https://elastic.example.com:9200
  # Should be used to set sending_queue config
  num_workers: 11
  flush:
    interval: 11s
    bytes: 1_000_001
  sending_queue:
    enabled: true
    sizer: requests
    batch:
      sizer: bytes
elasticsearch/suppress_conflict_errors:
  endpoint: https://elastic.example.com:9200
  suppress_conflict_errors: true

Last generated: 2026-04-13

Footnotes

  1. See additional handling in Document routing exceptions for OTel data mode 2 3
  2. as OTel and ECS mapping modes rely on the require_data_stream bulk action metadata, available since Elasticsearch 8.12
  3. Elasticsearch 8.16 contains a built-in otel-data plugin