Prometheusremotewrite Receiver
contrib
Maintainers: @dashpole, @ArthurSens, @perebaj
Source: opentelemetry-collector-contrib
Supported Telemetry
Overview
Prometheus Compatibility
The Prometheus Remote Write 2.0 protocol is still evolving. As the protocol specification changes, we update our implementation to match, which can affect compatibility with different Prometheus versions.

| OTel Collector Contrib Version | Compatible Prometheus Versions |
|---|---|
| v0.141.0 and earlier | Prometheus 3.7.x and earlier |
| v0.142.0 and later | Prometheus 3.8.0 and later |
The break between these versions stems from the protocol renaming CreatedTimestamp to StartTimestamp and moving it from the TimeSeries message to individual Sample and Histogram messages. This is a wire-protocol incompatibility, so mismatched versions will not work correctly together.
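In rough protobuf terms, the change looks like the sketch below. Field names follow the description above, but the field numbers and surrounding fields are illustrative only, not the actual io.prometheus.write.v2.Request schema:

```protobuf
// Before: the creation time rode on the series as a whole.
message TimeSeries {
  repeated Sample samples = 1;   // illustrative field number
  int64 created_timestamp = 2;   // removed
}

// After: each sample carries its own start time.
message Sample {
  double value = 1;              // illustrative field number
  int64 timestamp = 2;
  int64 start_timestamp = 3;     // renamed from CreatedTimestamp
}
```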
Configuring PrometheusRemoteWriteReceiver
This component's configuration is based on confighttp. A minimal example can be seen below:
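A minimal sketch of such a configuration, assuming the receiver listens on localhost:9090 (the endpoint value and the debug exporter are illustrative; any confighttp option applies):

```yaml
receivers:
  prometheusremotewrite:
    endpoint: "localhost:9090"   # assumed listen address; any confighttp option applies

exporters:
  debug:

service:
  pipelines:
    metrics:
      receivers: [prometheusremotewrite]
      exporters: [debug]
```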
Configuring your Prometheus
To make Prometheus work with this component, you'll need to add a few extra configuration options, described below.

Metadata WAL Records feature flag
Prometheus' Remote Write implementation relies on flushing WAL records into its Remote Write Queue Manager. By default, metadata information, such as the metric Type, Unit, and Help description, is not appended to the WAL. Since we require this information to translate Remote Write into OTLP, Prometheus' default configuration won't work here. When spinning up your Prometheus, make sure you enable the metadata-wal-records feature flag.

Remote Write Protobuf message
This component focuses exclusively on the Prometheus Remote Write v2 protocol. To enable it, please add the appropriate protobuf_message in your remote write configuration block:
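A hedged sketch of the Prometheus side, combining both requirements above (the URL is a placeholder for wherever this receiver is listening):

```yaml
# prometheus.yml — start Prometheus with the metadata WAL records
# feature flag enabled, e.g.:
#   prometheus --enable-feature=metadata-wal-records --config.file=prometheus.yml
remote_write:
  - url: "http://localhost:9090/api/v1/write"   # placeholder receiver endpoint
    # Select the Remote Write v2 message; the v1 default
    # (prometheus.WriteRequest) is not supported by this receiver.
    protobuf_message: io.prometheus.write.v2.Request
```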
Why not support Prometheus Remote Write v1?
We don't support Prometheus Remote Write v1 for a couple of reasons, explained below.

Histogram Atomicity
Prometheus Remote Write v1 was developed before Prometheus Native Histograms existed. The original histogram format, commonly known as Prometheus Classic Histograms, is composed of several separate time series that together form a whole histogram. Each time series holds a single piece of information: a bucket's cumulative count, the sum of all observations, or the total count of observations.

The Prometheus Remote Write implementation itself also matters here. It controls throughput using an EWMA (Exponentially Weighted Moving Average) algorithm: the more time series Prometheus is ingesting, the more workers it spins up to push metrics via Remote Write, and the worker count decreases again when ingestion drops. Since Classic Histograms are made of multiple time series, there is a high chance that parts of a histogram are sent to the remote storage in separate Remote Write requests. If, for any reason, one of those requests fails, the receiver cannot know whether the time series it received are enough to assemble a complete histogram.
This problem was solved in Prometheus Remote Write v2 with the introduction of Native Histograms.
Decoupled Metadata
While Prometheus Remote Write v1 officially does NOT support sending metadata (e.g., metric Type, Unit, and Help description), versions of the protocol were developed where metadata can be sent separately from the metric. Similarly to the problem mentioned in Histogram Atomicity, sending this kind of information separately can cause issues if data is lost during transport, not to mention the need to cache metrics or metric metadata while waiting for the subsequent request that connects the two. In Prometheus Remote Write v2, this problem is solved since time series are sent together with their metadata.

Lack of Created Timestamp
Created Timestamp is a Prometheus feature that works similarly to, and is translated into, OTel's StartTimeUnixNano. Prometheus Remote Write v1 doesn't send Created Timestamps, so we can never populate the StartTimeUnixNano field from that protocol.
Known Limitations
Summaries and Classic Histograms are unsupported
As mentioned in Histogram Atomicity, Prometheus Classic Histograms are split into several separate time series, so it is impossible to determine whether the buckets received form the complete set. Summaries suffer from the same problem: a working Summary is composed of several time series, just like Classic Histograms. The only difference is that instead of bucket boundaries, these time series represent pre-calculated quantiles. Since the quantiles can be sent in separate Remote Write requests, it's impossible to determine whether the quantiles received are enough to generate a complete Summary.

Resource Metrics Cache
target_info metrics and "normal" metrics match when they have the same job/instance labels (please read the specification for more details). But these metrics do not always arrive in the same Remote Write request. For this reason, the receiver uses an internal, in-memory LRU (Least Recently Used) cache to store resource metrics across requests.
The cache holds at most 1000 resource metrics; when that limit is reached, the least recently used entries are evicted.
This approach has some limitations, for example:
- If the process dies or restarts, the cache will be lost.
- Some inconsistencies can occur depending on the order of the requests and the current cache size.
- The limit of 1000 resource metrics is hardcoded and not configurable for now.
Last generated: 2026-04-13