
AWS S3 Exporter

Status: Available in contrib
Maintainers: @atoulme, @pdelewski, @Erog38
Source: opentelemetry-collector-contrib

Supported Telemetry

Logs Metrics Traces

Overview

Schema supported

This exporter aims to support the proto/JSON format.

Exporter Configuration

The following exporter configuration parameters are supported.
| Name | Description | Default |
| --- | --- | --- |
| region | AWS region. | "us-east-1" |
| s3_bucket | S3 bucket | |
| s3_base_prefix | root prefix for the S3 key applied to all files. | |
| s3_prefix | prefix for the S3 key that can be overridden dynamically by the resource_attrs_to_s3 parameter. | |
| s3_partition_format | filepath formatting for the partition; see strftime for the format specification. | "year=%Y/month=%m/day=%d/hour=%H/minute=%M" |
| s3_partition_timezone | timezone used to format the partition | Local |
| role_arn | the Role ARN to be assumed | |
| file_prefix | file prefix defined by the user | |
| marshaler | marshaler used to produce output data | otlp_json |
| encoding | encoding extension to use to marshal data. Overrides the marshaler configuration option if set. | |
| encoding_file_extension | file format extension suffix when using the encoding configuration option. May be left empty for no suffix to be appended. | |
| endpoint | (REST API endpoint) overrides the endpoint used by the exporter instead of constructing it from region and s3_bucket | |
| storage_class | S3 storage class | STANDARD |
| acl | S3 object canned ACL | none (not set by default) |
| s3_force_path_style | set this to true to force the request to use path-style addressing | false |
| disable_ssl | set this to true to disable SSL when sending requests | false |
| compression | whether the file should be compressed | none |
| sending_queue | exporters common queuing | disabled |
| timeout | exporters common timeout | 5s |
| resource_attrs_to_s3 | determines the mapping of S3 configuration values to resource attribute values for uploading operations. | |
| retry_mode | the retryer implementation; supported values are "standard", "adaptive", and "nop". "nop" sets the retryer to aws.NopRetryer, which effectively disables retries. | standard |
| retry_max_attempts | the maximum number of attempts for retrying a request if retry_mode is set. Setting max attempts to 0 allows the SDK to retry all retryable errors until the request succeeds or a non-retryable error is returned. | 3 |
| retry_max_backoff | the maximum backoff delay that can occur before retrying a request if retry_mode is set | 20s |
| unique_key_func_name | name of the function used to generate a unique portion of the key name; defaults to a random integer. The only supported value is uuidv7. | |

Marshaler

Marshaler determines the format of data sent to AWS S3. Currently, the following marshalers are implemented:
  • otlp_json (default): the OpenTelemetry Protocol format, represented as json.
  • otlp_proto: the OpenTelemetry Protocol format, represented as Protocol Buffers. A single protobuf message is written into each object.
  • sumo_ic: the Sumo Logic Installed Collector Archive format.
    • The _sourceCategory, _sourceHost, and _sourceName attributes are required, for example:
      resource/add_source_category:
        attributes:
        - action: insert
          key: _sourceCategory
          value: "value"
        - action: insert
          key: _sourceHost
          value: "value"
        - action: insert
          key: _sourceName
          value: "value"
      
    This format is supported only for logs.
  • body: export the log body as string. This format is supported only for logs.
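For example, to write each batch as a single Protocol Buffers object instead of the default JSON, you could select the otlp_proto marshaler (an illustrative sketch; the bucket and region values are placeholders):

```yaml
exporters:
  awss3:
    s3uploader:
      region: 'us-east-1'
      s3_bucket: 'databucket'
    # Top-level option, a sibling of s3uploader; defaults to otlp_json.
    marshaler: otlp_proto
```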

Encoding

If set, encoding overrides marshaler and uses an encoding extension defined in the collector configuration. See https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/encoding.
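As a minimal sketch, assuming the text_encoding extension from the contrib encoding extensions is included in your collector build:

```yaml
extensions:
  text_encoding:

exporters:
  awss3:
    s3uploader:
      region: 'us-east-1'
      s3_bucket: 'databucket'
    # References the extension's component ID; overrides marshaler if both are set.
    encoding: text_encoding
    encoding_file_extension: 'txt'

service:
  extensions: [text_encoding]
```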

Compression

  • none (default): No compression will be applied
  • gzip: Files will be compressed with gzip.
  • zstd: Files will be compressed with zstd.
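For example, to gzip each object before upload (illustrative region and bucket values):

```yaml
exporters:
  awss3:
    s3uploader:
      region: 'us-east-1'
      s3_bucket: 'databucket'
      # Compresses each file before it is written to S3.
      compression: gzip
```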

resource_attrs_to_s3

  • s3_bucket: Defines which resource attribute’s value should be used as the S3 bucket. When this option is set, it dynamically overrides s3uploader/s3_bucket. If the specified resource attribute exists in the data,
    its value will be used as the bucket; otherwise, s3uploader/s3_bucket will serve as the fallback.
  • s3_prefix: Defines which resource attribute’s value should be used as the S3 prefix. When this option is set, it dynamically overrides s3uploader/s3_prefix. If the specified resource attribute exists in the data,
    its value will be used as the prefix; otherwise, s3uploader/s3_prefix will serve as the fallback.
The following example configuration stores the output in the ‘eu-central-1’ region, in a bucket named ‘databucket’.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_prefix: 'metric'

    # Optional (disabled by default)
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 100

    # Optional (5s by default)
    timeout: 20s      
Logs and traces will be stored inside ‘databucket’ in the following path format.
metric/year=YYYY/month=MM/day=DD/hour=HH/minute=mm

Partition Formatting

By setting the s3_partition_format option, users can specify the file path for their logs. See the strftime reference for more formatting options.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_prefix: 'metric'
      s3_partition_format: '%Y/%m/%d/%H/%M'
In this case, logs and traces would be stored in the following path format.
metric/YYYY/MM/DD/HH/mm
Optionally, along with s3_partition_format, you can provide s3_partition_timezone as a name from the IANA Time Zone database to replace the default local timezone with a custom one, for example UTC or Europe/London.
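For instance, to partition by UTC time rather than the collector host's local time (illustrative region and bucket values):

```yaml
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_partition_format: '%Y/%m/%d/%H/%M'
      # Any IANA Time Zone database name is accepted, e.g. 'Europe/London'.
      s3_partition_timezone: 'UTC'
```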

Base Path Configuration

The s3_base_prefix option allows you to specify a root path inside the bucket that is not overridden by resource_attrs_to_s3. If provided, s3_prefix will be appended to this base path.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_base_prefix: 'environment/prod'
      s3_prefix: 'metric'
      s3_partition_format: '%Y/%m/%d/%H/%M'
In this case, logs and traces would be stored in the following path format.
environment/prod/metric/YYYY/MM/DD/HH/mm

Data routing based on resource attributes

When resource_attrs_to_s3/s3_bucket or resource_attrs_to_s3/s3_prefix is configured, the S3 bucket and/or prefix are dynamically derived from specified resource attributes in your data. If the attribute values are unavailable, the bucket and prefix will fall back to the values defined in s3uploader/s3_bucket and s3uploader/s3_prefix respectively.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_prefix: 'metric'
      s3_partition_format: '%Y/%m/%d/%H/%M'
    resource_attrs_to_s3:
      s3_bucket: "com.awss3.bucket"
      s3_prefix: "com.awss3.prefix"
In this case, metrics, logs and traces would be stored in the following path format examples:
bucket1/prefix1/YYYY/MM/DD/HH/mm
bucket2/foo-prefix/YYYY/MM/DD/HH/mm
bucket3/prefix-bar/YYYY/MM/DD/HH/mm
databucket/metric/YYYY/MM/DD/HH/mm
...

Base Path with Resource Attributes

When using both s3_base_prefix and resource_attrs_to_s3/s3_prefix, the s3_base_prefix is always used while s3_prefix can be dynamically overridden by resource attributes.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_base_prefix: 'environment/prod'
      s3_prefix: 'default-metric'
      s3_partition_format: '%Y/%m/%d/%H/%M'
    resource_attrs_to_s3:
      s3_prefix: "com.awss3.prefix"
In this configuration:
  • Base Prefix: environment/prod (always included)
  • Prefix: Dynamically set from resource attribute com.awss3.prefix if available, otherwise falls back to default-metric
Path format examples:
# When resource attribute com.awss3.prefix = "service-a/metrics"
environment/prod/service-a/metrics/YYYY/MM/DD/HH/mm

# When resource attribute com.awss3.prefix = "service-b/logs"  
environment/prod/service-b/logs/YYYY/MM/DD/HH/mm

# When resource attribute is unavailable (fallback)
environment/prod/default-metric/YYYY/MM/DD/HH/mm
This allows you to maintain consistent organizational structure (via base path) while dynamically routing different data types or services to specific subdirectories.

Retry

Standard is the default retryer implementation used by service clients. See the retry package documentation for details on which errors the standard retryer implementation considers retryable. See also the aws-sdk-go reference for more information.
exporters:
  awss3:
    s3uploader:
      region: 'eu-central-1'
      s3_bucket: 'databucket'
      s3_prefix: 'metric'
      retry_mode: "standard"
      retry_max_attempts: 5
      retry_max_backoff: "30s"

AWS Credential Configuration

This exporter follows the default credential resolution of the aws-sdk-go. Follow the guidelines for credential configuration.

OpenTelemetry Collector Helm Chart for Kubernetes

For example, when using OpenTelemetry Collector Helm Chart you could use extraEnvs in the values.yaml.
extraEnvs:
- name: AWS_ACCESS_KEY_ID
  value: "< YOUR AWS ACCESS KEY >"
- name: AWS_SECRET_ACCESS_KEY
  value: "< YOUR AWS SECRET ACCESS KEY >"

Configuration

Example Configuration

receivers:
  nop:

exporters:
  awss3:
    sending_queue:
      enabled: true
      num_consumers: 23
      queue_size: 42
    timeout: 8s

    s3uploader:
      region: 'us-east-1'
      s3_bucket: 'foo'
      s3_prefix: 'bar'
      s3_partition_format: 'year=%Y/month=%m/day=%d/hour=%H/minute=%M'
      s3_partition_timezone: 'Europe/London'
      endpoint: "http://endpoint.com"

processors:
  nop:

service:
  pipelines:
    traces:
      receivers: [nop]
      processors: [nop]
      exporters: [awss3]

Last generated: 2026-04-13