Drain Processor

Maintainers: @MikeGoldsmith, @atoulme, @martinjt
Source: opentelemetry-collector-contrib

Supported Telemetry

Logs

Overview

This processor annotates; it does not filter. Use the filter processor downstream to act on the log.record.template attribute — for example, to drop entire classes of noisy logs by pattern.

How it works

Drain builds a parse tree from the token structure of log lines. Lines with similar structure are grouped into a cluster, and a template is derived by replacing variable tokens with <*> wildcards. As more logs arrive the templates become more accurate and stable. Use the template string for filtering rules; it converges to the same value across instances given the same configuration and log patterns (see Deployment considerations).
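The clustering step above can be sketched in a few lines. This is a minimal illustration of the idea, not the processor's actual implementation: lines are tokenized on whitespace, grouped by token count, and merged into an existing cluster when the fraction of matching tokens meets the merge threshold; positions that differ become `<*>` wildcards.

```python
MERGE_THRESHOLD = 0.4  # corresponds to `merge_threshold` (`st` in the paper)

def similarity(template, tokens):
    # Fraction of positions whose tokens match; <*> counts as a match.
    hits = sum(1 for a, b in zip(template, tokens) if a == b or a == "<*>")
    return hits / len(template)

def train(clusters, line):
    """Train on one line and return the template it maps to."""
    tokens = line.split()
    group = clusters.setdefault(len(tokens), [])  # group by token count
    for cluster in group:
        if similarity(cluster, tokens) >= MERGE_THRESHOLD:
            # Merge: keep matching tokens, wildcard the rest.
            for i, (a, b) in enumerate(zip(cluster, tokens)):
                if a != b:
                    cluster[i] = "<*>"
            return " ".join(cluster)
    group.append(tokens)  # no similar cluster: start a new one
    return " ".join(tokens)

clusters = {}
for line in [
    "user alice logged in from 10.0.0.1",
    "user bob logged in from 192.168.1.1",
    "connected to 10.0.0.1",
]:
    print(train(clusters, line))
```

After the second line, the first cluster's template has converged to `user <*> logged in from <*>`; the third line starts a separate cluster. The real processor additionally routes lines through a fixed-depth parse tree (`tree_depth`, `max_node_children`) so that matching does not scan every cluster.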

Configuration

processors:
  drain:
    # Drain parse tree parameters
    tree_depth: 4              # default: 4 (minimum: 3; `depth` in the Drain paper)
    merge_threshold: 0.4       # default: 0.4, range [0.0, 1.0] (`st` in the Drain paper)
    max_node_children: 100     # default: 100 (`maxChild` in the Drain paper)
    max_clusters: 0            # default: 0 (unlimited, LRU eviction when > 0)
    extra_delimiters: []       # default: [] (extra token delimiters beyond whitespace)

    # Body extraction
    body_field: ""             # default: "" (use full body string)

    # Output attribute name
    template_attribute: "log.record.template"    # default

    # Seeding (optional)
    seed_templates: []
    seed_logs: []

    # Warmup suppression (optional)
    warmup_min_clusters: 0     # default: 0 (disabled; annotates from the first record)

Parameters

| Field | Type | Default | Description |
|---|---|---|---|
| `tree_depth` | int | `4` | Max depth of the Drain parse tree (`depth` in the Drain paper). Higher values produce more specific templates. Minimum: 3. |
| `merge_threshold` | float | `0.4` | Minimum fraction of tokens that must match an existing cluster template for a log line to be merged into it rather than forming a new cluster (`st` in the Drain paper). Range: `[0.0, 1.0]`. |
| `max_node_children` | int | `100` | Maximum children per internal parse tree node (`maxChild` in the Drain paper). Bounds memory on high-cardinality token positions. |
| `max_clusters` | int | `0` | Maximum clusters tracked. When exceeded, the least-recently-used cluster is evicted. `0` means unlimited. |
| `extra_delimiters` | []string | `[]` | Additional token delimiters beyond whitespace (e.g. `[",", ":"]`). |
| `body_field` | string | `""` | If set, and the log body is a structured map, the value of this top-level key is used as the text to template instead of the full body. |
| `template_attribute` | string | `"log.record.template"` | Attribute key written with the derived template string. |
| `seed_templates` | []string | `[]` | Template strings to pre-load at startup (see Seeding). |
| `seed_logs` | []string | `[]` | Raw example log lines to train on at startup (see Seeding). |
| `warmup_min_clusters` | int | `0` | Number of distinct clusters that must be observed before annotation is enabled. `0` disables warmup suppression (see Warmup suppression). |

Seeding

Seeding pre-populates the Drain tree before any live logs arrive. This is the primary mechanism for stable templates across restarts.

seed_templates

Provide known template strings directly. The processor trains on each entry at startup, establishing clusters for those patterns immediately.
processors:
  drain:
    seed_templates:
      - "user <*> logged in from <*>"
      - "connected to <*>"
      - "heartbeat ping <*>"

seed_logs

Provide raw example log lines. The processor trains on them at startup, letting Drain derive the templates itself. Useful when exact template strings are not known in advance.
processors:
  drain:
    seed_logs:
      - "user alice logged in from 10.0.0.1"
      - "user bob logged in from 192.168.1.1"
      - "connected to 10.0.0.1"
Empty and whitespace-only entries in both lists are silently skipped.

Deployment considerations

Multiple collector instances

Each collector instance builds its Drain parse tree independently in memory. Two instances processing the same log patterns will converge on identical templates because the Drain algorithm is deterministic: given the same configuration and a representative sample of log forms, the same token structure produces the same template string.

The main caveat is the early training phase. Before an instance has seen enough lines to abstract a wildcard (e.g. before "user alice logged in" and "user bob logged in" have both been observed), different instances may temporarily produce different templates for the same logical pattern. This is most noticeable at startup with low-volume or highly variable log streams. Mitigations:
  • Use seed_templates or seed_logs to pre-load known patterns at startup. With a comprehensive seed set, instances start in an already-converged state and live training only fills in the gaps.
  • Use warmup_min_clusters to suppress annotation until the tree has stabilised, avoiding unstable templates reaching downstream processors.
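The convergence claim can be made concrete with a toy illustration (a hypothetical helper, not the processor's code): once a representative set of lines has been seen, the derived template depends only on which token positions vary, not on the order in which the lines arrived.

```python
def derive_template(lines):
    # Columns whose tokens all agree keep their token; varying columns
    # become <*>. Order of the input lines cannot affect the result.
    rows = [line.split() for line in lines]
    return " ".join(
        col[0] if len(set(col)) == 1 else "<*>"
        for col in zip(*rows)
    )

a = derive_template(["user alice logged in", "user bob logged in"])
b = derive_template(["user bob logged in", "user alice logged in"])
assert a == b == "user <*> logged in"
```

Divergence between instances is therefore a transient property of incomplete samples, which is exactly what seeding and warmup suppression address.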

Warmup suppression

When warmup_min_clusters is set to a value greater than zero, the processor trains on every record from the start but does not write log.record.template until that many distinct clusters have been observed. Records pass through immediately — there is no buffering or added latency — they simply arrive at the next processor unannotated during the warmup window. The warmup window is observable via otelcol_processor_incoming_items - otelcol_processor_drain_log_records_annotated — the difference represents records that passed through without a template attribute.
processors:
  drain:
    warmup_min_clusters: 20
Once the threshold is reached, all subsequent records are annotated normally. Records that passed through during warmup are not re-annotated.
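The gating behaviour described above can be sketched as follows. This is an assumed model of the logic, with illustrative names: every record trains the tree, but the template attribute is only written once the number of distinct clusters reaches the threshold.

```python
def process(record_templates, warmup_min_clusters):
    """Simulate warmup gating: return the attribute value written per
    record (None = record passed through unannotated)."""
    clusters = set()
    annotated = []
    for template in record_templates:
        clusters.add(template)  # training always happens, even during warmup
        if len(clusters) >= warmup_min_clusters:
            annotated.append(template)  # threshold reached: annotate
        else:
            annotated.append(None)      # warmup window: pass through as-is
    return annotated

out = process(["a <*>", "b <*>", "a <*>", "c <*>"], warmup_min_clusters=2)
# the first record passes through unannotated; later records are annotated
```

Note that the gate is based on cluster count, not record count, so a stream with few distinct patterns may stay in the warmup window longer than a high-cardinality one.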

Metrics

The processor emits the following internal telemetry metrics:
| Metric | Type | Description |
|---|---|---|
| `otelcol_processor_drain_clusters_active` | gauge | Current number of active clusters in the Drain parse tree. Useful for tracking tree growth and stability over time. |
| `otelcol_processor_drain_log_records_annotated` | counter | Number of log records successfully annotated with a template. |

Output attributes

The processor sets the following attribute on each log record:
| Attribute | Type | Example | Description |
|---|---|---|---|
| `log.record.template` | string | `"user <*> logged in from <*>"` | The Drain-derived template string. Stable within an instance once the tree has warmed up. Use this for filtering rules. |
The attribute name is configurable via template_attribute.
Semantic conventions: log.record.template aligns with the proposed OTel attribute in open-telemetry/semantic-conventions#1283 and #2064. These names may be updated if a convention is formally adopted.

Example pipeline

The following pipeline annotates logs with Drain templates and then drops known noisy patterns using the filter processor:
processors:
  drain:
    tree_depth: 4
    merge_threshold: 0.4
    max_clusters: 500
    seed_templates:
      - "user <*> logged in from <*>"
      - "connected to <*>"
      - "heartbeat ping <*>"
    warmup_min_clusters: 20

  filter/drop_noisy:
    error_mode: ignore
    logs:
      log_record:
        - attributes["log.record.template"] == "heartbeat ping <*>"
        - attributes["log.record.template"] == "connected to <*>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [drain, filter/drop_noisy]
      exporters: [otlp]

body_field

body_field is a convenience for pipelines where the log body is a structured map and you do not have full control over how upstream processors shape it. If you do control the pipeline, the preferred approach is a move operator in the filelog receiver (or equivalent) to promote the message field back to a plain string body before the drain processor sees the record:
operators:
  - type: json_parser
  - type: move
    from: body.message
    to: body
If you cannot do that — for example, logs arrive via OTLP already structured — set body_field to the map key whose value should be fed to Drain:
processors:
  drain:
    body_field: "message"
Given a log body {"level": "info", "message": "user alice logged in from 10.0.0.1"}, only the message value is fed to Drain. The full body is used unchanged if the field is absent or the body is not a map.
Note: body_field only supports a single top-level key. Full OTTL path expressions (e.g. body["event"]["message"]) are not supported and are noted as a future extension.
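The resolution rule described above can be sketched as a small function. This is an assumed model mirroring the documented behaviour, not the processor's code: a single top-level key is looked up when the body is a map, and the full body is used otherwise.

```python
def drain_input(body, body_field=""):
    """Return the text that would be fed to Drain for a given log body."""
    if body_field and isinstance(body, dict):
        value = body.get(body_field)
        if isinstance(value, str):
            return value  # use only the configured field's value
    return body  # field unset, absent, or body not a map: use full body

structured = {"level": "info", "message": "user alice logged in from 10.0.0.1"}
drain_input(structured, "message")        # the "message" value only
drain_input("plain text body", "message") # full body, unchanged
```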

Future extensions

  • Snapshot persistence: save and restore the Drain tree state across restarts, eliminating the need for seeding and warmup_min_clusters for most deployments. The internal drain package already exposes JSON snapshot hooks; the remaining work is plumbing them into the collector lifecycle and shared storage.
  • OTTL body extraction: support full OTTL path expressions for body_field instead of a single top-level key name.
  • Multi-instance synchronization: optional shared snapshot file or gossip-based tree merging for consistent templates across horizontally scaled deployments.

Last generated: 2026-04-20