# AWS Kinesis Exporter

- Distributions: contrib
- Maintainers: @Aneurysm9, @MovieStoreGuy
- Source: opentelemetry-collector-contrib
## Overview
The Kinesis exporter exports telemetry in the configured encoding to the configured Kinesis stream. The exporter relies heavily on the `kinesis.PutRecords` API to reduce network I/O, and it breaks data down into its smallest atomic representation to avoid hitting the hard limits placed on records (no greater than 1 MB per record). The producer blocks until the operation is done, which allows retries and queued data to help during high loads.

The following settings are required:

- `aws`
  - `stream_name` (no default): The name of the Kinesis stream to export to.

The following settings can be optionally configured:

- `aws`
  - `kinesis_endpoint` (no default)
  - `region` (default = us-west-2): The region that the Kinesis stream is deployed in.
  - `role` (no default): The role to be used in order to send data to the Kinesis stream.
- `encoding`
  - `name` (default = otlp): Defines the export type used to send to Kinesis (available values are `otlp_proto`, `otlp_json`, `zipkin_proto`, `zipkin_json`, `jaeger_proto`).
    - Note: `otlp_json` is considered experimental and should not be used in production environments.
  - `compression` (default = none): Sets the compression type applied before forwarding to Kinesis (available values are `flate`, `gzip`, `zlib`, or `none`; all default to BestSpeed).
- `max_records_per_batch` (default = 500, the PutRecords limit): The number of records that can be batched together and then sent to Kinesis.
- `max_record_size` (default = 1 MB, the PutRecord(s) limit on record size): The maximum allowed size of a record exported to Kinesis.
- `timeout` (default = 5s): The timeout for every attempt to send data to the backend.
- `retry_on_failure`
  - `enabled` (default = true)
  - `initial_interval` (default = 5s): Time to wait after the first failure before retrying; ignored if `enabled` is `false`.
  - `max_interval` (default = 30s): The upper bound on backoff; ignored if `enabled` is `false`.
  - `max_elapsed_time` (default = 120s): The maximum amount of time spent trying to send a batch; ignored if `enabled` is `false`.
- `sending_queue`
  - `enabled` (default = true)
  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`.
  - `queue_size` (default = 1000): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`. Users should calculate this as `num_seconds * requests_per_second`, where `num_seconds` is the number of seconds to buffer in case of a backend outage and `requests_per_second` is the average number of requests per second.
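As a worked example of the `queue_size` formula above, the following sketch computes a queue size for illustrative (not default) traffic numbers: buffering 60 seconds of data at an average of 50 requests per second.

```go
package main

import "fmt"

func main() {
	// Hypothetical sizing inputs, not defaults of this exporter:
	numSeconds := 60        // seconds of data to buffer during a backend outage
	requestsPerSecond := 50 // average request rate hitting the exporter

	// queue_size = num_seconds * requests_per_second
	queueSize := numSeconds * requestsPerSecond
	fmt.Println(queueSize) // 3000
}
```

With these numbers you would set `sending_queue::queue_size: 3000` instead of the default of 1000.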
## Example Configuration
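A minimal collector configuration using this exporter might look like the following sketch. The stream name, region, and pipeline layout are placeholder assumptions for illustration, not values prescribed by the exporter:

```yaml
exporters:
  awskinesis:
    aws:
      stream_name: test-stream   # placeholder stream name
      region: us-east-1          # placeholder region
    encoding:
      name: otlp_proto
      compression: gzip
    max_records_per_batch: 500
    timeout: 5s
    retry_on_failure:
      enabled: true
      initial_interval: 5s

service:
  pipelines:
    traces:
      exporters: [awskinesis]
```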
Last generated: 2026-04-13