
Prometheus RemoteWrite Exporter: A Comprehensive Guide

A comprehensive guide to using the PrometheusRemoteWriteExporter to send metrics from OpenTelemetry to Prometheus-compatible backends


1. Introduction

OpenTelemetry (OTel) is an open-source framework that helps collect and manage telemetry data for observability. A crucial part of this framework is the PrometheusRemoteWriteExporter, which connects OpenTelemetry to the Prometheus ecosystem. In this post, we'll look at how the PrometheusRemoteWriteExporter works, how to configure it, and how it integrates with different observability tools.

2. PrometheusRemoteWriteExporter: An Overview

The PrometheusRemoteWriteExporter is a component of the OpenTelemetry Collector that allows you to export metrics data in the Prometheus remote write format. This exporter is particularly useful when sending OpenTelemetry metrics to Prometheus-compatible backends that support the remote write API, such as Last9, Cortex, Thanos, or even Grafana Cloud.

Key features of the PrometheusRemoteWriteExporter include:

  • Support for all OpenTelemetry metric types (gauge, sum, histogram, summary)
  • Configurable endpoint for remote write
  • Optional TLS and authentication support
  • Customizable headers for HTTP requests
  • Ability to add Prometheus-specific metadata to metrics
  • Normalization of metric names from OpenTelemetry naming conventions to Prometheus-compatible naming conventions
  • Dropping delta temporality metrics before sending them to Prometheus-compatible backends (see the sketch just after this list)
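
If your applications emit delta temporality metrics, one way to avoid having them dropped is to convert them to cumulative inside the collector. The sketch below assumes the contrib deltatocumulative processor is included in your collector distribution; treat it as an illustration rather than a drop-in config.

processors:
  deltatocumulative: {} # contrib processor; converts delta data points to cumulative
  batch: {}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [deltatocumulative, batch]
      exporters: [prometheusremotewrite]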

A significant advantage of using the PrometheusRemoteWriteExporter is that it lets you take advantage of OpenTelemetry's flexible instrumentation while still utilizing the monitoring and alerting tools you're used to with Prometheus. This is especially helpful if you're moving from Prometheus to OpenTelemetry or have a setup that uses both.

3. Configuring PrometheusRemoteWriteExporter in OpenTelemetry

To set up the PrometheusRemoteWriteExporter in your OpenTelemetry Collector, you must add it to your collector's configuration file, usually in YAML format.

Here’s a basic example of how to do this:

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com:9090/api/v1/write"
    tls:
      insecure: true
    headers:
      "Authorization": "Basic <base64-encoded-credentials>"
    namespace: "my_app"
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]

Let’s break down the critical configuration options:

  • endpoint: This is the URL of your Prometheus-compatible backend, such as Last9, which supports the Prometheus remote write protocol.
  • tls: This section is for secure connections (optional).
  • headers: Here, you can add any necessary HTTP headers, like authentication details.
  • namespace: This optional prefix is added to all metric names.
  • resource_to_telemetry_conversion: When enabled, this converts all resource attributes into metric labels. The default is false. Enabling it can significantly increase cardinality if resource attributes contain unique values, such as host names or IP addresses.

In the service section, we define a pipeline that receives OTLP metrics, processes them in batches, and exports them using the PrometheusRemoteWriteExporter.
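
For reference, a minimal batch processor configuration might look like the following; the values shown are illustrative and should be tuned to your metric volume:

processors:
  batch:
    timeout: 10s          # flush a batch after this long, even if it is not full
    send_batch_size: 8192 # target number of metric data points per batch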

The OpenTelemetry Collector (often called otel-collector) can also be set up to receive metrics in different formats, including the Prometheus exposition format.

You’ll use the Prometheus receiver, which scrapes metrics from targets like a Prometheus server would. Here’s an example of how to configure the Prometheus receiver:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]

With this setup, the OpenTelemetry Collector can scrape Prometheus metrics and export them using the PrometheusRemoteWriteExporter. This effectively creates a bridge between applications instrumented with Prometheus and Prometheus-compatible backends using OpenTelemetry Collector.

4. Integration with Kubernetes and Docker

In cloud-native environments, applications are often run in containers orchestrated by Kubernetes. The PrometheusRemoteWriteExporter can effectively collect and export metrics from applications in these environments.

Kubernetes Integration

When deploying the OpenTelemetry Collector in a Kubernetes cluster, you can use the Kubernetes API server's service discovery mechanisms to discover and scrape metrics from your pods automatically. Here's an example of how you might configure this:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'kubernetes-pods'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com:9090/api/v1/write"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]

This configuration uses Kubernetes service discovery to find pods annotated with prometheus.io/scrape: "true" and scrape metrics from them. The metrics are then exported using the PrometheusRemoteWriteExporter.
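
For completeness, a pod that opts into scraping under the relabel rules above might be annotated roughly like this; the prometheus.io annotation names follow the common convention assumed by those rules:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"   # matched by the keep rule
    prometheus.io/path: "/metrics" # rewritten into __metrics_path__
spec:
  containers:
    - name: my-app
      image: your-app-image
      ports:
        - containerPort: 8080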

Docker Integration

For Docker environments, you can run the OpenTelemetry Collector as a sidecar container alongside your application. This allows you to collect metrics from your application container and export them using the PrometheusRemoteWriteExporter.

Here's a simple Docker Compose example:

services:
  app:
    image: your-app-image
    ports:
      - "8080:8080"
  otel-collector:
    image: otel/opentelemetry-collector
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "8888:8888" # Prometheus metrics exporter
      - "8889:8889" # Prometheus exporter for the collector

In this setup, you would configure your application to send metrics to the OpenTelemetry Collector, which then uses the PrometheusRemoteWriteExporter to forward them to your Prometheus-compatible backend.
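
The otel-collector-config.yaml mounted above could look roughly like the following sketch; the OTLP receiver ports and the backend endpoint are placeholders to adjust for your environment. Within the Compose network, the application would send OTLP metrics to otel-collector:4317.

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com:9090/api/v1/write"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]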

5. Advanced Configurations and Best Practices

When working with the PrometheusRemoteWriteExporter, consider these advanced configurations and best practices:

Use appropriate filters

The OpenTelemetry Collector supports various processors that filter or transform metrics before exporting. This can reduce the volume of data sent to your Prometheus backend.

processors:
  filter/metrics:
    error_mode: ignore
    metrics:
      metric:
        - 'name == "http.requests.total" and resource.attributes["env"] == "dev"'
        - 'type == METRIC_DATA_TYPE_HISTOGRAM'
      datapoint:
        - 'metric.type == METRIC_DATA_TYPE_SUMMARY'
        - 'resource.attributes["service.name"] == "user-service"'

Implement retries and backoff

Configure the exporter to handle network issues gracefully:

exporters:
  prometheusremotewrite:
    retry_on_failure:
      enabled: true # (default = true)
      initial_interval: 5s # Time to wait after the first failure before retrying; ignored if enabled is false
      max_interval: 30s # Upper bound on the backoff interval; ignored if enabled is false
      max_elapsed_time: 120s # Maximum total time spent trying to send a batch; ignored if enabled is false. Set to 0 to retry indefinitely.
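
Alongside retries, the exporter also supports an in-memory write queue. A sketch, with field names as documented for the contrib exporter and illustrative values:

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com:9090/api/v1/write"
    remote_write_queue:
      enabled: true
      queue_size: 10000 # number of batches that can wait in memory
      num_consumers: 5  # concurrent workers draining the queue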

Monitor the exporter

Use the built-in metrics provided by the OpenTelemetry Collector to monitor the health and performance of your PrometheusRemoteWriteExporter. Set the telemetry metrics level to detailed in the service section of the OTel Collector config:

service:
  telemetry:
    metrics:
      level: "detailed"

Use resource detection processors

These can automatically detect and add relevant metadata about your environment:

processors:
  resourcedetection:
    detectors: [env, ec2]
    timeout: 2s
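
Note that processors only take effect once they are referenced in a pipeline. Combining the filter, resource detection, and batch processors discussed above, the metrics pipeline might look like this:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, filter/metrics, batch]
      exporters: [prometheusremotewrite]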

6. Troubleshooting and Common Issues

  1. Connectivity issues: Ensure that the OpenTelemetry Collector can reach the Prometheus backend. Check network configurations, firewalls, and security groups.
  2. Authentication errors: Verify that the authentication credentials are correctly encoded in the configuration.
  3. Data type mismatches: Some OpenTelemetry metric types or temporalities may not convert cleanly to the remote write format. Check the OpenTelemetry Collector logs for conversion errors.
  4. High cardinality: Be cautious of high-cardinality metrics, which can overwhelm Prometheus. If necessary, use the metrics transform processor to reduce cardinality (a sketch follows below). Alternatively, you can use a backend built for high-cardinality metrics, such as Last9.
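
As a sketch of that cardinality-reduction approach, the contrib metricstransform processor can aggregate away high-cardinality labels; the metric and label names here are purely illustrative:

processors:
  metricstransform:
    transforms:
      - include: http_requests_total
        action: update
        operations:
          - action: aggregate_labels
            label_set: [service, status_code] # keep only these labels
            aggregation_type: sum             # sum data points across the dropped labels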

To aid in troubleshooting, you can enable debug logging in the OpenTelemetry Collector:

service:
  telemetry:
    logs:
      level: debug 

You can also use the debug exporter alongside the PrometheusRemoteWriteExporter to log the metrics being sent:

exporters:
  debug:
    verbosity: detailed
  prometheusremotewrite:
    endpoint: "http://prometheus.example.com:9090/api/v1/write"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, prometheusremotewrite]

This configuration logs metrics as they are exported, allowing you to verify the data being sent to Prometheus. Enable the debug exporter only when needed, since detailed verbosity is noisy.

7. Future Trends

As observability continues to evolve, several trends are shaping the future of tools like the PrometheusRemoteWriteExporter:

  • Support for delta temporality metrics: As Prometheus 3.0 adds native support for OpenTelemetry metrics, the PrometheusRemoteWriteExporter may eventually be able to pass delta temporality metrics through to Prometheus.
  • Support for the Prometheus Remote Write 2.0 protocol: The Prometheus team is developing the Remote Write 2.0 protocol, which was discussed in detail at PromCon 2024.
  • Improved Performance: Ongoing efforts aim to optimize the performance of the PrometheusRemoteWriteExporter, especially for high-volume metric streams.
  • Enhanced Resource Metadata Support: Improvements in handling and transmitting resource metadata and contextual information alongside metrics data are anticipated.

8. Conclusion

The PrometheusRemoteWriteExporter in OpenTelemetry is crucial for connecting the OpenTelemetry framework with Prometheus-compatible backends.

When building your observability strategy, focus on the best practices discussed here. Stay informed about new developments in the OpenTelemetry and Prometheus communities, and consider contributing your insights to these open-source projects.

If you'd like help with specific settings, join the Last9 Discord Server; we have a dedicated channel where you can talk through your use case with other developers.

FAQs

What is the difference between OpenTelemetry and Prometheus?

Both are open-source projects under the Cloud Native Computing Foundation (CNCF) but serve different roles in observability. OpenTelemetry is a framework for instrumenting applications and collecting telemetry data (metrics, logs, and traces) in a standardized way. In contrast, Prometheus is primarily a monitoring system and time-series database focused on metrics collection and analysis, using a pull-based model to scrape data from instrumented applications.

What is the difference between telemetry and OpenTelemetry?

Telemetry refers to the collection and transmission of data for monitoring, typically including metrics, logs, and traces in software systems. OpenTelemetry is a specific framework that standardizes how telemetry data is generated, collected, and exported, making it easier for developers to implement observability in their applications.

What is the difference between OpenTelemetry and Grafana?

OpenTelemetry focuses on collecting and exporting telemetry data, acting as the plumbing that delivers observability data to backend systems. Grafana, on the other hand, is a visualization platform that creates dashboards and analyzes data from various sources, including Prometheus. While OpenTelemetry handles data collection, Grafana is responsible for visualizing that data.  

What is the difference between OpenTelemetry and distributed tracing?

OpenTelemetry is a comprehensive framework that supports metrics, logs, and traces, facilitating the generation and collection of all these telemetry types. Distributed tracing is a technique specifically for monitoring requests as they move through microservices, forming one of the critical components of observability.


Authors

Prathamesh Sonpatki

Prathamesh works as an evangelist at Last9, runs SRE stories - where SRE and DevOps folks share their stories, and maintains o11y.wiki - a glossary of all terms related to observability.
