# Tanzu Observability Exporter

## Overview
This exporter supports sending metrics and traces to Tanzu Observability.

## Prerequisites
- Obtain the Tanzu Observability by Wavefront API token.
- Set up and start a Tanzu Observability by Wavefront proxy and configure it with the API token you obtained.
- To have the proxy generate span RED metrics from trace data, configure the proxy to receive traces by setting `customTracingListenerPorts=30001`. For metrics, the proxy listens on port 2878 by default.
## Configuration

Given a Wavefront proxy at 10.10.10.10 configured with `customTracingListenerPorts=30001`, a basic configuration of the Tanzu Observability exporter follows:
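A minimal sketch of such a configuration, assuming the proxy address and ports described above (adjust the endpoints to your deployment):

```yaml
exporters:
  tanzuobservability:
    traces:
      endpoint: "http://10.10.10.10:30001"
    metrics:
      endpoint: "http://10.10.10.10:2878"
```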
## Advanced Configuration

### Resource Attributes on Metrics

Client programs using an OpenTelemetry SDK can be configured to wrap all emitted telemetry (metrics, spans, logs) with a set of global key-value pairs, called resource attributes. By default, the Tanzu Observability Exporter includes resource attributes on spans but excludes them on metrics. To include resource attributes as tags on metrics, set the flag `resource_attrs_included` to `true` as per the example below.
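A sketch of the flag in context, assuming the metrics endpoint from the basic configuration:

```yaml
exporters:
  tanzuobservability:
    metrics:
      endpoint: "http://10.10.10.10:2878"
      resource_attrs_included: true
```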
Note: Tanzu Observability has a 254-character limit on tag key-value pairs. If a resource attribute exceeds this
limit, the metric will not show up in Tanzu Observability.
### Application Resource Attributes on Metrics

By default, the Tanzu Observability Exporter includes application resource attributes on metrics (`application`, `service.name`, `cluster`, and `shard`). To exclude these resource attributes as tags on metrics, set the flag `app_tags_excluded` to `true` as per the example below.
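A sketch of the flag in context, assuming the metrics endpoint from the basic configuration:

```yaml
exporters:
  tanzuobservability:
    metrics:
      endpoint: "http://10.10.10.10:2878"
      app_tags_excluded: true
```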
Note: A tag `service.name` (if provided) becomes `service` on the transformed Wavefront metric. However, if both tags (`service` and `service.name`) are provided, then the `service` tag will be included.
## Queuing and Retries

This exporter uses OpenTelemetry Collector helpers to queue data and retry on failures.

- `retry_on_failure`: Details and defaults here.
- `sending_queue`: Details and defaults here.
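A sketch of these settings in context; the specific values shown are illustrative assumptions, not recommendations:

```yaml
exporters:
  tanzuobservability:
    metrics:
      endpoint: "http://10.10.10.10:2878"
    retry_on_failure:
      max_elapsed_time: 3m
    sending_queue:
      queue_size: 10000
```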
## Recommended Pipeline Processors

The memory_limiter processor is recommended to prevent out-of-memory situations on the collector. It performs periodic checks of memory usage; if usage exceeds the defined limits, it begins dropping data and forcing garbage collection to reduce memory consumption. Details and defaults here.

Note: The order matters when enabling multiple processors in a pipeline (e.g., the memory limiter and batch processors in the example config below). Please refer to the processors' documentation for more information.

### Example Advanced Configuration
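A sketch of a pipeline combining the processors above with this exporter; the otlp receiver and the memory_limiter thresholds are illustrative assumptions:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 50
    spike_limit_percentage: 30
  batch:

exporters:
  tanzuobservability:
    traces:
      endpoint: "http://10.10.10.10:30001"
    metrics:
      endpoint: "http://10.10.10.10:2878"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [tanzuobservability]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [tanzuobservability]
```

Note that `memory_limiter` comes first in each `processors` list, per the ordering guidance above.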
## Attributes Required by Tanzu Observability

### Source

A `source` field is required in Tanzu Observability spans and metrics. The source is set to the first matching OpenTelemetry Resource Attribute:

1. `source`
2. `host.name`
3. `hostname`
4. `host.id`
### Application Identity Tags on Spans

Application identity tags of `application` and `service` are required for all spans in Tanzu Observability.

- `application` is set to the value of the attribute `application` on the OpenTelemetry Span or Resource. Default is "defaultApp".
- `service` is set to the value of the attribute `service` or `service.name` on the OpenTelemetry Span or Resource. Default is "defaultService".
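The fallback logic above can be sketched as follows. This is an illustrative reimplementation, not the exporter's code, and the assumption that span attributes take precedence over resource attributes is the author's, not stated in this document:

```python
def identity_tags(span_attrs: dict, resource_attrs: dict) -> dict:
    """Resolve application identity tags with the documented defaults."""
    # Assumption: span attributes take precedence over resource attributes.
    merged = {**resource_attrs, **span_attrs}
    application = merged.get("application", "defaultApp")
    # "service" wins over "service.name" if both are present.
    service = merged.get("service", merged.get("service.name", "defaultService"))
    return {"application": application, "service": service}

print(identity_tags({}, {}))  # → {'application': 'defaultApp', 'service': 'defaultService'}
```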
## Data Conversion for Traces
- Trace IDs and Span IDs are converted to UUIDs. For example, span IDs are left-padded with zeros to fit the correct size.
- Events are converted to Span Logs.
- Kind is converted to the `span.kind` tag.
- If a Span's status code is error, a tag of `error=true` is added. If the status also has a description, it's set to `otel.status_description`.
- TraceState is converted to the `w3c.tracestate` tag.
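The ID conversion in the first bullet can be sketched as follows: an 8-byte OpenTelemetry span ID is left-padded with zeros to the 16 bytes a UUID requires. This is an illustrative sketch, not the exporter's actual code:

```python
import uuid

def span_id_to_uuid(span_id: bytes) -> uuid.UUID:
    """Left-pad an 8-byte span ID with zeros and interpret it as a UUID."""
    return uuid.UUID(bytes=span_id.rjust(16, b"\x00"))

print(span_id_to_uuid(bytes.fromhex("00f067aa0ba902b7")))
# → 00000000-0000-0000-00f0-67aa0ba902b7
```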
## Data Conversion for Metrics

This section describes the process used by the Exporter when converting from OpenTelemetry Metrics to Tanzu Observability by Wavefront Metrics.

| OpenTelemetry Metric Type | Wavefront Metric Type | Notes |
|---|---|---|
| Gauge | Gauge | |
| Cumulative Sum | Cumulative Counter | |
| Delta Sum | Delta Counter | |
| Cumulative Histogram (incl. Exponential) | Cumulative Counters | Details below. |
| Delta Histogram (incl. Exponential) | Histogram | |
| Summary | Gauges | Details below. |
## Cumulative Histogram Conversion (incl. Exponential)

A cumulative histogram is converted to multiple counter metrics: one counter per bucket in the histogram. Each counter has a special "le" tag that matches the upper bound of the corresponding bucket. The value of the counter metric is the sum of the histogram's corresponding bucket and all the buckets before it. Wavefront provides WQL functions for working with OpenTelemetry Cumulative Histograms that have been converted to Wavefront Counters.

### Example
Suppose a cumulative histogram named "http.response_times" has the following buckets and values:

| Bucket | Value |
|---|---|
| ≤ 100ms | 5 |
| > 100ms to ≤ 200ms | 20 |
| > 200ms | 100 |
It is converted to the following counters:

| Name | Tags | Value |
|---|---|---|
| http.response_times | le="100" | 5 |
| http.response_times | le="200" | 25 |
| http.response_times | le="+Inf" | 125 |
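The bucket-to-counter arithmetic above is a running sum of the bucket counts, which can be sketched as:

```python
from itertools import accumulate

bounds = ["100", "200", "+Inf"]   # bucket upper bounds ("le" tag values)
bucket_counts = [5, 20, 100]      # per-bucket counts from the histogram

# Each "le" counter holds the cumulative count up to its bound.
counters = dict(zip(bounds, accumulate(bucket_counts)))
print(counters)  # → {'100': 5, '200': 25, '+Inf': 125}
```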
### Example WQL Query on a Cumulative Histogram

Using the cumulative histogram from the section above, a WQL query can produce a graph showing the 95th percentile of http response times in the last 15 minutes. Such a query uses the `le` tags, which hold the http response time bucket bounds, and linear interpolation of the bucket counts to estimate the 95th percentile of http.response_times over the last 15 minutes.
## Summary Conversion

A summary is converted to multiple gauge metrics: one gauge for every quantile in the summary. A special "quantile" tag contains a value between 0 and 1 indicating the quantile to which the value belongs.
Last generated: 2026-04-14