
Signaltometrics Connector

Status: Available in contrib. Maintainers: @ChrsMark, @lahsivjar. Source: opentelemetry-collector-contrib.

Configuration

The component can produce metrics from spans, datapoints (for metrics), logs, and profiles. At least one metric for at least one signal type MUST be configured correctly for the component to work. All signal types share the same configuration structure. For example, the configuration below produces delta-temporality counters that count the number of events for each configured signal type:
signal_to_metrics:
  spans:
    - name: span.count
      description: Count of spans
      sum:
        value: Int(AdjustedCount()) # Count of total spans represented by each span
        monotonic: true
  datapoints:
    - name: datapoint.count
      description: Count of datapoints
      sum:
        value: "1" # increment by 1 for each datapoint
        monotonic: true
  logs:
    - name: logrecord.count
      description: Count of log records
      sum:
        value: "1" # increment by 1 for each log record
        monotonic: true
  profiles:
    - name: profile.count
      description: Count of profiles
      sum:
        value: "1" # increment by 1 for each profile
        monotonic: true

Error Handling

The error_mode configuration option determines how the connector handles errors that occur while processing OTTL expressions:
  • error_mode (optional): Determines how errors returned from OTTL expressions are handled. Valid values are propagate, ignore, and silent.
    • propagate (default): Errors cause the entire batch to fail and be returned up the pipeline. This will result in the payload being dropped from the collector.
    • ignore: Errors are logged and the specific record that caused the error is skipped, but processing continues for the rest of the batch.
    • silent: Errors are not logged and the specific record that caused the error is skipped, but processing continues for the rest of the batch.
Example with error handling:
signal_to_metrics:
  error_mode: ignore  # Log errors but continue processing other records
  spans:
    - name: span.count
      description: Count of spans
      sum:
        value: Int(AdjustedCount())

Metrics types

signal_to_metrics produces a variety of metric types by using OTTL to extract the relevant data from the incoming signals. The component can produce the following metric types for each signal type: Sum, Gauge, Histogram, and Exponential Histogram. The component does NOT perform any stateful or time-based aggregation. Metrics are aggregated over the payload of each Consume* call, and the resulting metric is then sent onwards in the pipeline.

Sum

Sum metrics have the following configurations:
sum:
  value: <ottl_value_expression>
  monotonic: <bool>
  • [Required] value represents an OTTL expression to extract a value from the incoming data. Only OTTL expressions that return a value are accepted. The returned value determines the value type of the sum metric (int or double). OTTL converters can be used to transform the data.
  • [Optional] monotonic determines whether the generated metric is monotonic. Defaults to false.
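As a sketch, a sum that totals span durations as a double value (the metric name here is illustrative, not from the source; the OTTL converters mirror those used elsewhere in this document):

```yaml
signal_to_metrics:
  spans:
    - name: span.duration.sum # hypothetical metric name
      description: Total span duration in seconds
      unit: s
      sum:
        # Double(...) yields a double-typed sum; Int(...) would yield an int sum
        value: Double(Seconds(end_time - start_time))
        monotonic: true
```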

Gauge

Gauge metrics aggregate the last value of a signal and have the following configuration:
gauge:
  value: <ottl_value_expression>
  • [Required] value represents an OTTL expression to extract a numeric value from the signal. Only OTTL expressions that return a value are accepted. The returned value determines the value type of the gauge metric (int or double).
    • For logs: Use e.g. ExtractGrokPatterns with a single key selector (see below).
    • For other signals: Use a field such as value_int, value_double, or a valid OTTL expression.
Examples: Logs (with Grok pattern):
signal_to_metrics:
  logs:
    - name: logs.memory_mb
      description: Extract memory_mb from log records
      gauge:
        value: ExtractGrokPatterns(body, "Memory usage %{NUMBER:memory_mb:int}MB")["memory_mb"]
Traces:
signal_to_metrics:
  spans:
    - name: span.duration.gauge
      description: Span duration as gauge
      gauge:
        value: Int(Seconds(end_time - start_time))

Histogram

Histogram metrics have the following configurations:
histogram:
  buckets: []float64
  count: <ottl_value_expression>
  value: <ottl_value_expression>
  • [Optional] buckets represents the buckets to be used for the histogram. If no buckets are configured then it defaults to:
    []float64{2, 4, 6, 8, 10, 50, 100, 200, 400, 800, 1000, 1400, 2000, 5000, 10_000, 15_000}
    
  • [Optional] count represents an OTTL expression to extract the count to be recorded in the histogram from the incoming data. If no expression is provided, it defaults to the count of the signal. OTTL converters can be used to transform the data. For spans, a special converter, AdjustedCount, is provided to help calculate the span’s adjusted count.
  • [Required] value represents an OTTL expression to extract the value to be recorded in the histogram from the incoming data. OTTL converters can be used to transform the data.
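Putting the options together, a histogram of span durations with custom buckets might look like the following sketch (the metric name and bucket boundaries are illustrative; the converters match those used in the testdata examples below):

```yaml
signal_to_metrics:
  spans:
    - name: span.duration.histogram # hypothetical metric name
      unit: ms
      histogram:
        buckets: [5, 10, 50, 100, 500, 1000] # overrides the default buckets
        count: Int(AdjustedCount())          # weight each span by its adjusted count
        value: Milliseconds(end_time - start_time)
```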

Exponential Histogram

Exponential histogram metrics have the following configurations:
exponential_histogram:
  max_size: <int64>
  count: <ottl_value_expression>
  value: <ottl_value_expression>
  • [Optional] max_size represents the maximum number of buckets per positive or negative number range. Defaults to 160.
  • [Optional] count represents an OTTL expression to extract the count to be recorded in the exponential histogram from the incoming data. If no expression is provided, it defaults to the count of the signal. OTTL converters can be used to transform the data. For spans, a special converter, AdjustedCount, is provided to help calculate the span’s adjusted count.
  • [Required] value represents an OTTL expression to extract the value to be recorded in the exponential histogram from the incoming data. OTTL converters can be used to transform the data.
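For example, an exponential histogram with a reduced bucket budget could be sketched as follows (the metric name and max_size value are illustrative):

```yaml
signal_to_metrics:
  spans:
    - name: span.duration.exphistogram # hypothetical metric name
      unit: ms
      exponential_histogram:
        max_size: 80 # half the default of 160 buckets per number range
        value: Milliseconds(end_time - start_time)
        # count omitted: defaults to the count of the signal
```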

Attributes

The component can produce metrics categorized by the attributes (span attributes for traces, datapoint attributes for datapoints, or log record attributes for logs) from the incoming data by configuring attributes for the configured metrics. If no attributes are configured then the metrics are produced without any attributes.
attributes:
  - key: datapoint.foo
  - key: datapoint.bar
    default_value: bar
  - key: datapoint.baz
    optional: true
  - keys_expression: otelcol.client.metadata["x-dynamic-attributes"]
Each attribute entry must have exactly one of key or keys_expression set. If attributes are specified then a separate metric will be generated for each unique set of attribute values. There are four behaviors that can be configured for an attribute:
  • Without any extra parameters: datapoint.foo in the above YAML is an example. Only signals that have the attribute are processed, with the attribute’s value becoming one of the attributes of the output metric. If the attribute is missing, the signal is not processed for this metric.
  • With default_value: datapoint.bar in the above YAML is an example. All signals are processed whether or not the attribute is present in the input. The output metric is categorized by the incoming value of the attribute, and an extra bucket, with the attribute set to the configured default value, collects all signals that were missing the attribute.
  • With optional set to true: datapoint.baz in the above YAML is an example. If an optional attribute is present in the incoming signal, it is added directly to the output metric; if absent, a metric without the attribute is created. Optional attributes never affect whether a signal is processed: even when they are missing, the signal still produces a metric, provided all non-optional attributes are present or have a default value defined.
  • With keys_expression: The OTTL value expression is evaluated at runtime and must return a list of attribute keys (pcommon.Slice or []string). Each resolved key is looked up in the signal’s attributes and included in the output metric. If the expression returns nil (e.g. missing client metadata), it is treated as an empty list. Expression evaluation errors are governed by the error_mode configuration. The optional and default_value options can be combined with keys_expression and apply to each resolved key.
Note that resource attributes are handled differently, check the resource attributes section for more details on this. Think of attributes as conditional filters for choosing which attributes should be included in the output metric whereas include_resource_attributes is an include list for customizing resource attributes of the output metric.
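The behaviors above can be combined on a single metric; a sketch of a datapoint counter split by all three styles of attribute (the metric name and attribute keys are illustrative, not from the source):

```yaml
signal_to_metrics:
  datapoints:
    - name: datapoint.count.by.host # hypothetical metric name
      sum:
        value: "1"
        monotonic: true
      attributes:
        - key: host.name              # datapoints missing host.name are skipped
        - key: deployment.environment # always kept; missing values become "unknown"
          default_value: unknown
        - key: service.version        # added when present, never filters
          optional: true
```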

Conditions

Conditions are an optional list of OTTL conditions that are evaluated on the incoming data and are ORed together. For example:
signal_to_metrics:
  datapoints:
    - name: datapoint.bar.sum
      description: Count total number of datapoints as per datapoint.bar attribute
      conditions:
        - resource.attributes["foo"] != nil
        - resource.attributes["bar"] != nil
      sum:
        value: "1"
The above configuration will produce sum metrics from datapoints with either foo OR bar resource attribute defined. Conditions can also be ANDed together, for example:
signal_to_metrics:
  datapoints:
    - name: gauge.to.exphistogram
      conditions:
        - metric.type == 1 AND resource.attributes["resource.foo"] != nil
      exponential_histogram:
        count: "1" # 1 count for each datapoint
        value: Double(value_int) + value_double # handle both int and double
The above configuration produces an exponential histogram from gauge metrics that have the resource attribute resource.foo set.

Customizing resource attributes

The component allows customizing the resource attributes of the produced metrics by specifying a list of attributes that should be included in the final metrics. If no attributes are specified for include_resource_attributes then no filtering is performed, i.e. all resource attributes of the incoming data are kept.
include_resource_attributes:
  - key: resource.foo # Include resource.foo attribute if present
  - key: resource.bar # Always include resource.bar attribute, default to bar
    default_value: bar
  - key: resource.baz # Optional resource.baz attribute is added if present
    optional: true
  - keys_expression: otelcol.client.metadata["x-dynamic-resource-attributes"]
Each entry must have exactly one of key or keys_expression set. With the above configuration the produced metrics would have the following resource attributes:
  • resource.foo will be present for the produced metrics if the incoming data also has the attribute defined. If the attribute is missing in the incoming data the output metric will be produced without the said attribute.
  • resource.bar will always be present because of the default_value. If the incoming data does not have a resource attribute with name resource.bar then the configured default_value of bar will be used.
  • resource.baz behaves exactly the same as resource.foo. Since include_resource_attributes is already an include list, the optional option is a no-op, i.e. an attribute with optional set to true behaves identically to one configured without default_value or optional.
  • The keys_expression entry evaluates the OTTL value expression at runtime to resolve a list of attribute keys. The expression must return a list of strings (pcommon.Slice or []string). Each resolved key is looked up in the resource attributes and included in the output metric. If the expression returns nil (e.g. missing client metadata), it is treated as an empty list. Expression evaluation errors are governed by the error_mode configuration. The optional and default_value options can be combined with keys_expression and apply to each resolved key. OTTL expressions for include_resource_attributes should only reference resource-level paths (e.g. resource.attributes) or context-level paths (e.g. otelcol.client.metadata), not signal-specific paths (e.g. attributes, span.*, log.*).

Single writer

Metrics data streams MUST obey the single-writer principle. However, since the signal_to_metrics component produces metrics from all signal types and also allows customizing resource attributes, it can violate this principle. To keep the single-writer principle intact, the component adds collector instance information as a resource attribute. The following resource attribute is added to each produced metric:
signal_to_metrics.service.instance.id: <service_instance_id_of_the_otel_collector>
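For context, the connector is wired between pipelines like any other collector connector: it acts as an exporter for the source signal’s pipeline and as a receiver for a metrics pipeline. A minimal sketch (the otlp receiver and debug exporter are illustrative choices, not requirements):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

connectors:
  signal_to_metrics:
    spans:
      - name: span.count
        sum:
          value: Int(AdjustedCount())
          monotonic: true

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [signal_to_metrics] # connector consumes the traces
    metrics:
      receivers: [signal_to_metrics] # and emits the produced metrics
      exporters: [debug]
```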

Custom OTTL functions

The component implements the following custom OTTL functions:
  1. AdjustedCount: a converter that calculates the adjusted count for a span.

Configuration

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_resource_foo_only
      description: Spans with resource attribute including resource.foo as an int sum metric
      unit: s
      include_resource_attributes:
        - key: resource.foo
      sum:
        value: Int(Seconds(end_time - start_time))
        monotonic: true
    - name: span_adjusted_count
      description: Adjusted count for the span as a sum metric
      unit: s
      sum:
        value: Int(AdjustedCount())
        monotonic: true
    - name: http.trace.span.duration
      description: Span duration for HTTP spans as an int sum metric
      unit: s
      attributes:
        - key: http.response.status_code
      sum:
        value: Int(Seconds(end_time - start_time))
        monotonic: true
    - name: db.trace.span.duration
      description: Span duration for DB spans as an int sum metric
      unit: s
      attributes:
        - key: db.system
      sum:
        value: Int(Seconds(end_time - start_time))
        monotonic: true
    - name: msg.trace.span.duration
      description: Span duration for messaging spans as a double sum metric
      unit: s
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: messaging.system
      sum:
        value: Double(Seconds(end_time - start_time))
        monotonic: true
    - name: ignored.sum
      description: Will be ignored due to conditions evaluating to false
      unit: s
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: messaging.system
      sum:
        value: Double(Seconds(end_time - start_time))
        monotonic: true

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_priority_override
      description: Test that static key defined after keys_expression overrides dynamic resolution
      unit: s
      include_resource_attributes:
        - key: resource.foo
      attributes:
        # Dynamic entry first: resolves to ["db.system"]. For spans that
        # have db.system, this picks up the actual value (e.g. "mysql").
        - keys_expression: otelcol.client.metadata["x-dynamic-attrs"]
          default_value: dynamic_override
        # Static entry second: overrides db.system with a fixed default.
        # Since this appears AFTER the dynamic entry, it should win.
        - key: db.system
          default_value: static_override
      sum:
        value: "1"
        monotonic: true

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_dynamic_resource_attrs
      description: Spans with dynamically resolved resource attributes from client metadata
      unit: s
      include_resource_attributes:
        - keys_expression: otelcol.client.metadata["x-dynamic-resource-attributes"]
      sum:
        value: Int(Seconds(end_time - start_time))
        monotonic: true

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: metric.trace.duration
      description: Spans with resource attribute including resource.foo as a histogram metric
      unit: ms
      histogram:
        value: Int(Milliseconds(end_time - start_time))
    - name: metric.trace.duration
      description: Spans with resource attribute including resource.foo as an exponential histogram metric
      unit: ms
      exponential_histogram:
        value: Int(Milliseconds(end_time - start_time))
    - name: metric.trace.duration
      description: Spans with resource attribute including resource.foo as a sum metric
      unit: ms
      sum:
        value: Int(Milliseconds(end_time - start_time))

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_resource_filter # with resource.foo filter
      description: Spans with resource attribute including resource.foo as a histogram metric
      unit: ms
      include_resource_attributes:
        - key: resource.foo
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: with_resource_filter # with resource.bar filter
      description: Spans with resource attribute including resource.bar as a histogram metric
      unit: ms
      include_resource_attributes:
        - key: resource.bar
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: with_custom_count
      description: Spans with custom count OTTL expression as a histogram metric
      unit: ms
      histogram:
        count: "2" # count each span twice
        value: Milliseconds(end_time - start_time)
    - name: http.trace.span.duration
      description: Span duration for HTTP spans as a histogram metric
      unit: ms
      attributes:
        - key: http.response.status_code
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: db.trace.span.duration
      description: Span duration for DB spans as a histogram metric
      unit: ms
      attributes:
        - key: db.system
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: msg.trace.span.duration
      description: Span duration for messaging spans as a histogram metric
      unit: ms
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: messaging.system
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: ignored.histogram
      description: Will be ignored due to conditions evaluating to false
      unit: ms
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: messaging.system
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: optional.histogram
      description: The configured optional attribute will be added as-is
      unit: ms
      include_resource_attributes:
        - key: resource.foo
      attributes:
        - key: db.name # All spans with db.name set will be bucketed and a separate bucket will be created with no db.name
          optional: true
      histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_resource_foo_only
      description: Spans with resource attribute including resource.foo as an int gauge metric
      unit: s
      include_resource_attributes:
        - key: resource.foo
      gauge:
        value: Double(Seconds(end_time - start_time))
    - name: span_adjusted_count
      description: Adjusted count for the span as an int gauge metric
      unit: s
      gauge:
        value: Int(AdjustedCount())
    - name: http.trace.span.duration
      description: Span duration for HTTP spans as an int gauge metric
      unit: s
      attributes:
        - key: http.response.status_code
      gauge:
        value: Int(Seconds(end_time - start_time))
    - name: db.trace.span.duration
      description: Span duration for DB spans as an int gauge metric
      unit: s
      attributes:
        - key: db.system
      gauge:
        value: Double(Seconds(end_time - start_time))
    - name: msg.trace.span.duration
      description: Span duration for messaging spans as a double gauge metric
      unit: s
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: messaging.system
      gauge:
        value: Int(Seconds(end_time - start_time))
    - name: ignored.gauge
      description: Will be ignored due to conditions evaluating to false
      unit: s
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: messaging.system
      gauge:
        value: Double(Seconds(end_time - start_time))

config.yaml (testdata)

signal_to_metrics:
  spans:
    - name: with_resource_filter # with resource.foo filter
      description: Spans with resource attribute including resource.foo as an exponential histogram metric
      unit: ms
      include_resource_attributes:
        - key: resource.foo
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: with_resource_filter # with resource.bar filter
      description: Spans with resource attribute including resource.bar as an exponential histogram metric
      unit: ms
      include_resource_attributes:
        - key: resource.bar
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: with_custom_count
      description: Spans with custom count OTTL expression as an exponential histogram metric
      unit: ms
      exponential_histogram:
        count: "2" # count each span twice
        value: Milliseconds(end_time - start_time)
    - name: http.trace.span.duration
      description: Span duration for HTTP spans as an exponential histogram metric
      unit: ms
      attributes:
        - key: http.response.status_code
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: db.trace.span.duration
      description: Span duration for DB spans as an exponential histogram metric
      unit: ms
      attributes:
        - key: db.system
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: msg.trace.span.duration
      description: Span duration for messaging spans as an exponential histogram metric
      unit: ms
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: messaging.system
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)
    - name: ignored.exphistogram
      description: Will be ignored due to conditions evaluating to false
      unit: ms
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: messaging.system
      exponential_histogram:
        count: "Int(AdjustedCount())"
        value: Milliseconds(end_time - start_time)

config.yaml (testdata)

signal_to_metrics:
  profiles:
    - name: total.profiles.sum
      description: Count total number of profiles
      sum:
        value: "1"
        monotonic: true
    - name: total.profiles.resource.foo.sum
      description: Count total number of profiles with resource attribute foo
      include_resource_attributes:
        - key: resource.foo
      sum:
        value: "1"
        monotonic: true
    - name: profiles.foo.sum
      description: Count total number of profiles as per profile.foo attribute
      attributes:
        - key: profile.foo
      sum:
        value: "1"
        monotonic: true
    - name: profiles.bar.sum
      description: Count total number of profiles as per profiles.bar attribute
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: profiles.bar
      sum:
        value: "1"
        monotonic: true
    - name: ignored.sum
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: profiles.bar
      sum:
        value: "2"
        monotonic: true

config.yaml (testdata)

signal_to_metrics:
  profiles:
    - name: total.profiles.histogram
      description: Profiles as histogram with duration
      histogram:
        count: "1"
        value: duration_unix_nano
        buckets: [1, 10, 50, 100, 200]
    - name: total.profiles.resource.foo.histogram
      description: Profiles with resource attribute foo as histogram with duration
      include_resource_attributes:
        - key: resource.foo
      histogram:
        count: "1"
        value: duration_unix_nano
        buckets: [1, 10, 50, 100, 200]
    - name: profiles.foo.histogram
      description: Count total number of profiles as per profile.foo attribute as histogram with duration
      attributes:
        - key: profile.foo
      histogram:
        count: "1"
        value: duration_unix_nano
        buckets: [1, 10, 50, 100, 200]
    - name: profiles.bar.histogram
      description: Count total number of profiles as per profiles.bar attribute as histogram with duration
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: profiles.bar
      histogram:
        count: "1"
        value: duration_unix_nano
        buckets: [1, 10, 50, 100, 200]
    - name: ignored.histogram
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: profiles.bar
      histogram:
        count: "2"
        value: duration_unix_nano
        buckets: [1, 50, 200]

config.yaml (testdata)

signal_to_metrics:
  profiles:
    - name: total.profiles.exphistogram
      description: Profiles as exponential histogram with duration
      exponential_histogram:
        count: "1"
        value: duration_unix_nano
    - name: total.profiles.resource.foo.exphistogram
      description: Profiles with resource attribute foo as exponential histogram with duration
      include_resource_attributes:
        - key: resource.foo
      exponential_histogram:
        count: "1"
        value: duration_unix_nano
    - name: profiles.foo.exphistogram
      description: Count total number of profiles as per profiles.foo attribute as exponential histogram with duration
      attributes:
        - key: profile.foo
      exponential_histogram:
        count: "1"
        value: duration_unix_nano
    - name: profiles.bar.exphistogram
      description: Count total number of profiles as per profiles.bar attribute as exponential histogram with duration
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: profiles.bar
      exponential_histogram:
        count: "1"
        value: duration_unix_nano
    - name: ignored.exphistogram
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: profiles.bar
      exponential_histogram:
        count: "2"
        value: duration_unix_nano

config.yaml (testdata)

signal_to_metrics:
  datapoints:
    - name: total.datapoint.sum
      description: Count total number of datapoints
      sum:
        value: "1"
        monotonic: true
    - name: datapoint.foo.sum
      description: Count total number of datapoints as per datapoint.foo attribute
      attributes:
        - key: datapoint.foo
      sum:
        value: "1"
        monotonic: true
    - name: datapoint.bar.sum
      description: Count total number of datapoints as per datapoint.bar attribute
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: datapoint.bar
      sum:
        value: "1"
        monotonic: true
    - name: ignored.sum
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: datapoint.bar
      sum:
        value: "2"
        monotonic: true
    - name: non.monotonic.sum
      description: A non-monotonic sum
      conditions:
        - metric.name == "sum-int"
      sum:
        value: datapoint.value_int
        # monotonic: false by default

config.yaml (testdata)

signal_to_metrics:
  datapoints:
    - name: gauge.to.histogram
      description: A histogram created from gauge values
      include_resource_attributes:
        - key: resource.foo
      attributes:
        - key: datapoint.foo
      conditions:
        - metric.type == 1 # select all gauges
      histogram:
        buckets: [1, 4, 5, 8, 200, 500, 1000]
        count: "1" # 1 count for each datapoint
        value: Double(value_int) + value_double # handle both int and double

config.yaml (testdata)

signal_to_metrics:
  datapoints:
    - name: datapoint.bar.gauge
      description: Last gauge as per datapoint.bar attribute
      attributes:
        - key: datapoint.bar
      conditions:
        - metric.type == 2 # select all sums
      gauge:
        value: Double(value_int) + value_double

config.yaml (testdata)

signal_to_metrics:
  datapoints:
    - name: gauge.to.exphistogram
      description: An exponential histogram created from gauge values
      include_resource_attributes:
        - key: resource.foo
      attributes:
        - key: datapoint.foo
      conditions:
        - metric.type == 1 # select all gauges
      exponential_histogram:
        count: "1" # 1 count for each datapoint
        value: Double(value_int) + value_double # handle both int and double

config.yaml (testdata)

signal_to_metrics:
  logs:
    - name: total.logrecords.sum
      description: Count total number of log records
      sum:
        value: "1"
        monotonic: true
    - name: total.logrecords.resource.foo.sum
      description: Count total number of log records with resource attribute foo
      include_resource_attributes:
        - key: resource.foo
      sum:
        value: "1"
        monotonic: true
    - name: log.foo.sum
      description: Count total number of log records as per log.foo attribute
      attributes:
        - key: log.foo
      sum:
        value: "1"
        monotonic: true
    - name: log.bar.sum
      description: Count total number of log records as per log.bar attribute
      conditions: # Will evaluate to true
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: log.bar
      sum:
        value: "1"
        monotonic: true
    - name: ignored.sum
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: log.bar
      sum:
        value: "2"
        monotonic: true
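The last two metrics above rely on `conditions`. The listed conditions are ORed: the metric is produced for a record as soon as any single condition is true, which is why `log.bar.sum` fires (its second condition matches) while `ignored.sum` does not. A minimal Python sketch of that semantics (illustrative only, not connector code):

```python
def matches(conditions, resource_attrs):
    # Conditions are ORed: one true condition is enough.
    return any(cond(resource_attrs) for cond in conditions)

attrs = {"resource.foo": "foo"}  # hypothetical resource attributes

# Mirrors log.bar.sum above: first condition false, second true -> produced.
produced = matches(
    [
        lambda a: a.get("404.attribute") is not None,
        lambda a: a.get("resource.foo") is not None,
    ],
    attrs,
)

# Mirrors ignored.sum above: the single condition is false -> skipped.
skipped = not matches([lambda a: a.get("404.attribute") is not None], attrs)

print(produced, skipped)  # True True
```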

config.yaml (testdata)

signal_to_metrics:
  logs:
    - name: metric.log.duration
      description: Logrecords as histogram with log.duration from attributes
      histogram:
        count: "1"
        value: attributes["log.duration"]
        buckets: [1, 10, 50, 100, 200]
    - name: metric.log.duration
      description: Logrecords as exponential histogram with log.duration from attributes
      exponential_histogram:
        count: "1"
        value: attributes["log.duration"]
        max_size: 160
    - name: metric.log.duration
      description: Logrecords as sum with log.duration from attributes
      sum:
        value: attributes["log.duration"]

config.yaml (testdata)

signal_to_metrics:
  logs:
    - name: total.logrecords.histogram
      description: Logrecords as histogram with log.duration from attributes
      histogram:
        count: "1"
        value: attributes["log.duration"]
        buckets: [1, 10, 50, 100, 200]
    - name: total.logrecords.resource.foo.histogram
      description: Logrecords with resource attribute foo as histogram with log.duration from attributes
      include_resource_attributes:
        - key: resource.foo
      histogram:
        count: "1"
        value: attributes["log.duration"]
        buckets: [1, 10, 50, 100, 200]
    - name: log.foo.histogram
      description: Count total number of log records as per log.foo attribute as histogram with log.duration from attributes
      attributes:
        - key: log.foo
      histogram:
        count: "1"
        value: attributes["log.duration"]
        buckets: [1, 10, 50, 100, 200]
    - name: log.bar.histogram
      description: Count total number of log records as per log.bar attribute as histogram with log.duration from attributes
      conditions: # ORed; true overall since the second condition matches
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: log.bar
      histogram:
        count: "1"
        value: attributes["log.duration"]
        buckets: [1, 10, 50, 100, 200]
    - name: ignored.histogram
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: log.bar
      histogram:
        count: "2"
        value: attributes["log.duration"]
        buckets: [1, 50, 200]
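The `buckets` list gives the histogram's explicit upper bounds; under the OTLP data model each bound is inclusive, and values above the last bound land in an implicit overflow bucket. A short sketch of that bucketing (plain Python, hypothetical `log.duration` values):

```python
import bisect

buckets = [1, 10, 50, 100, 200]  # upper bounds from the configs above

def bucket_index(value, bounds):
    # OTLP explicit-bounds histograms: bucket i holds values in
    # (bounds[i-1], bounds[i]]; values > bounds[-1] go to the overflow bucket.
    return bisect.bisect_left(bounds, value)

counts = [0] * (len(buckets) + 1)
for duration in [0.5, 10, 75, 500]:  # hypothetical log.duration values
    counts[bucket_index(duration, buckets)] += 1

print(counts)  # [1, 1, 0, 1, 0, 1]
```

Note that 10 lands in the `(1, 10]` bucket, since bounds are inclusive upper bounds.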

config.yaml (testdata)

signal_to_metrics:
  logs:
    - name: logs.memory_mb
      description: Extract memory_mb from log records
      gauge:
        value: ExtractGrokPatterns(body, "Memory usage %{NUMBER:memory_mb:int}MB")["memory_mb"]
    - name: logs.cpu
      description: Extract cpu from log records
      gauge:
        value: ExtractGrokPatterns(body, "CPU usage %{NUMBER:cpu:float}")["cpu"]
    - name: logs.foo.memory_mb
      description: Extract memory_mb from log records with attribute foo
      gauge:
        value: Int(ExtractPatterns(body, "Memory usage (?P<memory_mb>\\d+(?:\\.\\d+)?)MB")["memory_mb"])
      include_resource_attributes:
        - key: resource.foo
      attributes:
        - key: log.foo
    - name: logs.bar.memory_mb
      description: Extract memory_mb from log records with attribute bar and conditions
      conditions: # ORed; true overall since the second condition matches
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.bar"] != nil
      gauge:
        value: ExtractGrokPatterns(body, "Memory usage %{NUMBER:memory_mb:double}MB", true)["memory_mb"]
      attributes:
        - key: log.bar
    - name: log.ignored.gauge
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      include_resource_attributes:
        - key: resource.bar
      attributes:
        - key: log.bar
      gauge:
        value: ExtractGrokPatterns(body, "Memory usage %{NUMBER:memory_mb:int}MB")["memory_mb"]
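The `ExtractPatterns` expression in `logs.foo.memory_mb` is an ordinary named-capture regex, and the `ExtractGrokPatterns` variants compile down to similar patterns. A standalone check of that regex against a hypothetical log body:

```python
import re

# Same pattern as the ExtractPatterns call above.
pattern = re.compile(r"Memory usage (?P<memory_mb>\d+(?:\.\d+)?)MB")

body = "Memory usage 1024MB"  # hypothetical log body
match = pattern.search(body)
memory_mb = int(float(match.group("memory_mb")))  # roughly OTTL's Int() conversion
print(memory_mb)  # 1024
```

A log record whose body does not match yields no capture, which is where the `error_mode` setting described earlier comes into play.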

config.yaml (testdata)

signal_to_metrics:
  logs:
    - name: total.logrecords.exphistogram
      description: Logrecords as exponential histogram with log.duration from attributes
      exponential_histogram:
        count: "1"
        value: attributes["log.duration"]
    - name: total.logrecords.resource.foo.exphistogram
      description: Logrecords with resource attribute foo as exponential histogram with log.duration from attributes
      include_resource_attributes:
        - key: resource.foo
      exponential_histogram:
        count: "1"
        value: attributes["log.duration"]
    - name: log.foo.exphistogram
      description: Count total number of log records as per log.foo attribute as exponential histogram with log.duration from attributes
      attributes:
        - key: log.foo
      exponential_histogram:
        count: "1"
        value: attributes["log.duration"]
    - name: log.bar.exphistogram
      description: Count total number of log records as per log.bar attribute as exponential histogram with log.duration from attributes
      conditions: # ORed; true overall since the second condition matches
        - resource.attributes["404.attribute"] != nil
        - resource.attributes["resource.foo"] != nil
      attributes:
        - key: log.bar
      exponential_histogram:
        count: "1"
        value: attributes["log.duration"]
    - name: ignored.exphistogram
      description: Will be ignored due to conditions evaluating to false
      conditions: # Will evaluate to false
        - resource.attributes["404.attribute"] != nil
      attributes:
        - key: log.bar
      exponential_histogram:
        count: "2"
        value: attributes["log.duration"]
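Note that the exponential histograms above need no bucket list: bucket boundaries follow from a scale factor. Under the OpenTelemetry data model, a positive value at scale `s` maps to bucket index `ceil(log2(value) * 2^s) - 1`. A quick sketch (illustrative, not connector code):

```python
import math

def exp_bucket_index(value, scale):
    # OTLP exponential histogram: bucket i at a given scale covers
    # (base**i, base**(i+1)] where base = 2 ** (2 ** -scale).
    return math.ceil(math.log2(value) * 2**scale) - 1

# At scale 0 the buckets are (1,2], (2,4], (4,8], ...
print([exp_bucket_index(v, 0) for v in [2, 5, 8]])  # [0, 2, 2]
```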

Last generated: 2026-04-13