Datadog Exporter
Distribution: contrib
Maintainers: @mx-psi, @dineshg13, @liustanley, @songy23, @mackjmr, @jade-guiton-dd, @IbraheemA
Source: opentelemetry-collector-contrib
Supported Telemetry
Overview
Please review the Collector’s security documentation, which contains recommendations on securing sensitive information such as the API key required by this exporter.
The Datadog Exporter now skips APM stats computation by default. It is recommended to only use the Datadog Connector to compute APM stats. To temporarily revert to the previous behavior, disable the exporter.datadogexporter.DisableAPMStats feature gate. Example: otelcol --config=config.yaml --feature-gates=-exporter.datadogexporter.DisableAPMStats
Find the full configuration options of the Datadog exporter and their usage in collector.yaml. More example configs can be found in the official documentation.
FAQs
Why am I getting 413 - Request Entity Too Large errors, and how do I fix them?
This error indicates that the payload size sent by the Datadog exporter exceeds the intake's size limit (see previous examples: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16834, https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/17566). This is usually caused by the pipeline batching too much telemetry data before sending it to the Datadog API intake. To fix this, prefer using the Datadog exporter's sending_queue::batch section instead of the batch processor, or lower send_batch_size and send_batch_max_size in your batch processor config. You might want a separate batch processor dedicated to the Datadog exporter if other exporters expect a larger batch size. The exact values of send_batch_size and send_batch_max_size depend on your specific workload. Also note that the Datadog intake has different payload size limits for the 3 signal types:
- Trace intake: 3.2MB
- Log intake: https://docs.datadoghq.com/api/latest/logs/
- Metrics V2 intake: https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
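As an illustrative sketch (the batch sizes and pipeline names below are example values, not recommendations), a batch processor dedicated to the Datadog exporter might look like:

```yaml
processors:
  # Smaller batches only for the Datadog pipeline, so other
  # exporters can keep their own larger batch settings.
  batch/datadog:
    send_batch_size: 100
    send_batch_max_size: 1000
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/datadog]
      exporters: [datadog]
```

Tune the sizes downward until the 413 errors stop for your workload.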
Fall back to the Zorkian metric client with feature gate
Support for the Zorkian metric client is now deprecated; please use the metrics export serializer instead. See https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.122.0 and #37930 for more info about the Metrics Export Serializer.
Remap OTel’s service.name attribute to service for logs
NOTE: this workaround is only needed when the feature gate exporter.datadogexporter.UseLogsAgentExporter is disabled. This feature gate is enabled by default starting with v0.108.0.
For Datadog Exporter versions v0.83.0 to v0.107.0, the service field of OTel logs is populated from the OTel semantic convention attribute service.name. However, service.name is not one of the default service attributes in Datadog’s log preprocessing.
To get the service field correctly populated in your logs, you can specify service.name to be the source of a log’s service by setting a log service remapper processor.
How to add custom log source
In order to add a custom source to your OTLP logs, set the resource attribute datadog.log.source. This feature requires the exporter.datadogexporter.UseLogsAgentExporter feature gate to be enabled (now enabled by default).
Example:
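A minimal sketch of setting this resource attribute with the Collector's resource processor (the source value "nodejs" is illustrative; use whatever source name fits your logs):

```yaml
processors:
  resource:
    attributes:
      # Set the custom Datadog log source on all resources in this pipeline.
      - key: datadog.log.source
        value: "nodejs"
        action: upsert
```

Add the processor to your logs pipeline so the attribute is applied before the Datadog exporter.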
My Collector K8s pod is getting rebooted on startup when I don’t manually set a hostname under exporters::datadog::hostname
This is due to a bug where the underlying hostname detection blocks the health_check extension from responding to liveness/readiness probes on startup. To fix this, either:
- set hostname_detection_timeout to less than the pod/daemonset's livenessProbe failureThreshold * periodSeconds, so that hostname detection on startup times out before the control plane restarts the pod, or
- leave hostname_detection_timeout at the default 25s value and double-check the livenessProbe and readinessProbe settings, ensuring that the control plane will in fact wait long enough for startup to complete before restarting the pod.
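As an illustrative sketch of the second option (the probe values are examples, not prescriptions; 13133 is the health_check extension's default port):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 13133        # health_check extension default port
  periodSeconds: 10
  failureThreshold: 5  # 5 * 10s = 50s, longer than the 25s default
                       # hostname_detection_timeout
```

With these values, the control plane waits 50 seconds of failed probes before restarting the pod, giving hostname detection time to finish or time out.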
Hostname detection is currently required to initialize the Datadog Exporter, unless a hostname is specified manually under hostname.
Configuration
Example Configuration
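A minimal illustrative configuration (the API key is read from an environment variable; the site value is an assumption, adjust it to your Datadog site):

```yaml
exporters:
  datadog:
    api:
      # Avoid hardcoding the key; see the Collector's security documentation.
      key: ${env:DD_API_KEY}
      site: datadoghq.com
```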
Last generated: 2026-04-13