How Prometheus Exporters Work With OpenTelemetry

Learn how Prometheus exporters expose OTLP metrics in Prometheus format, making it easier to scrape OpenTelemetry data.

Nov 6th, ‘25

Running distributed systems means you need clear visibility into how your services behave. Prometheus has been the standard for metrics for a long time, and OpenTelemetry is now giving teams a more consistent way to collect telemetry across their stack.

In many setups, you'll have both: existing Prometheus instrumentation that's already in place, and new components instrumented with OpenTelemetry. These approaches work well together, and understanding that makes it easier to plan your observability strategy.

The Prometheus exporter and OpenTelemetry integration isn't a migration path—it's simply a pattern that allows both systems to operate side by side. This blog walks through how that pattern works and where it fits.

Why Both Systems Can Coexist

Your infrastructure likely includes a mix of approaches. Some services use the Prometheus client library and expose metrics over HTTP. At the same time, you're introducing OpenTelemetry for new applications, or rolling it out gradually across existing ones. This setup is common, and most teams work this way.

The connection between the two is simple: the OpenTelemetry Collector includes a Prometheus receiver. It scrapes your existing endpoints the same way Prometheus does, then converts those metrics to OTLP format. Your current instrumentation stays intact, and you gain flexibility in where those metrics can be sent.

What this means practically:

  • Existing services keep exposing Prometheus metrics without any changes
  • The Collector scrapes those metrics and converts them to OTLP
  • Once in OTLP, you can route metrics to any OpenTelemetry-compatible backend
  • Over time, you instrument new services with OpenTelemetry directly

This gives you room to expand your observability stack at a pace that fits your environment, without forcing a switch from one system to another.
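
To make that coexistence concrete, here's a minimal Collector sketch with both receivers feeding the same pipeline; the targets and the backend endpoint are placeholders:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'legacy-service'        # existing /metrics endpoint, unchanged
        static_configs:
        - targets: ['legacy:8080']
  otlp:
    protocols:
      grpc:                               # new services send OTLP here

exporters:
  otlp:
    endpoint: backend.example.com:4317    # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [prometheus, otlp]
      exporters: [otlp]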

What Happens When Metrics Cross Over

When the OpenTelemetry Collector receives Prometheus metrics, a defined translation process takes place. It follows the semantic mapping specified by OpenTelemetry, so the conversion is predictable and standards-aligned.

Prometheus labels become OpenTelemetry attributes. Metric types carry over directly—Counters remain Counters, Gauges remain Gauges, and Histograms are mapped to their OTLP equivalents. The HELP text is preserved as the metric description. The goal is to retain meaning while reformatting the data.

In practice:

  • Prometheus labels → OTLP attributes: each label becomes an attribute key–value pair
  • Prometheus types → OTLP types: Counters, Gauges, and Histograms map directly
  • Prometheus units → OTLP units: units like milliseconds carry forward as ms
  • Prometheus HELP text → OTLP description: documentation becomes metadata

The OpenTelemetry specification defines this clearly, so both ecosystems preserve context and structure during translation.
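
As a quick illustration of the mapping (the metric below is made up), a scraped series like this:

# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="GET",status="200"} 1027

comes out the other side as an OTLP Sum data point named http_requests_total, marked monotonic with cumulative temporality, with method="GET" and status="200" as attributes and the HELP text as its description.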

💡
To understand how grouping and reshaping metrics work in Prometheus itself, check out our detailed breakdown here!

The Prometheus Receiver in Action

The OpenTelemetry Collector's Prometheus receiver handles the entire flow. It scrapes targets on a schedule, translates the metrics, and routes them through the Collector pipeline. If you've written Prometheus scrape configs before, the configuration syntax will feel familiar:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'my-service'
        scrape_interval: 30s
        static_configs:
        - targets: ['localhost:8080']

processors:
  batch:

exporters:
  otlp:
    endpoint: collector.example.com:4317

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlp]

This configuration scrapes localhost:8080 every 30 seconds, batches the metrics, and sends them via OTLP. The receiver handles scraping and translation. The full Prometheus scrape_configs syntax works here—service discovery, authentication, TLS, all of it.
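
For example, a slightly more involved scrape job might use Kubernetes service discovery plus TLS and token authentication. This is a sketch; the namespace and file paths are placeholders:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'k8s-pods'
        scrape_interval: 30s
        kubernetes_sd_configs:
        - role: pod
        tls_config:
          ca_file: /etc/otelcol/ca.crt          # placeholder path
        authorization:
          credentials_file: /etc/otelcol/token  # placeholder path
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          regex: 'production'                   # placeholder namespace
          action: keep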

Filter and Rename Metrics

Prometheus users often rely on relabeling to control which metrics get collected and how they're shaped. The OpenTelemetry Collector preserves this capability through the Prometheus receiver, giving you the same flexibility you're used to when refining metric streams.

Relabeling can happen in two stages:

Before scraping (relabel_configs): This stage filters or labels targets based on metadata. It's useful when you want to include only certain services, workloads, or environments in your scrape targets.

After scraping (metric_relabel_configs): This is where you shape the metric data itself—keeping specific series, renaming them, or dropping fields you don't need. It helps control volume and ensures only meaningful signals continue through the pipeline.

Here's an example:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'app-metrics'
        scrape_interval: 30s
        static_configs:
        - targets: ['app:9090']
        metric_relabel_configs:
        - source_labels: [__name__]
          regex: 'app_requests.*'
          action: keep
        - source_labels: [__name__]
          regex: '.*_bucket'
          action: drop

In this setup:

  • Only metrics beginning with app_requests are kept.
  • Histogram bucket metrics (the _bucket series) are removed.
  • Everything that remains is translated to OTLP and forwarded through the Collector pipeline.

Relabeling in the Collector gives you the same control you already have in Prometheus—helping you keep the signals that matter, reduce noise, and maintain a clear metric flow as you transition into an OpenTelemetry-driven architecture.

💡
If you want a quick refresher on how counters and gauges behave during collection, our blog covers this in detail!

Pull-Based vs. Push-Based Metrics

When you're wiring metrics into the OpenTelemetry Collector, you can choose how those metrics move through your system. Both patterns work—you just pick the one that fits how your services behave today.

Pull-based collection (Prometheus receiver)

Here, you expose a /metrics endpoint, and the Collector scrapes it on a schedule. If you're already using Prometheus, this feels familiar since nothing changes in your application code.

An example scrape config:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'cart-service'
        scrape_interval: 15s
        static_configs:
        - targets: ['cart:8080']

The Collector pulls metrics exactly as Prometheus would, translates them, and sends them downstream.

Push-based collection (Prometheus Remote Write exporter)

Instead of being scraped, metrics get pushed: your applications send OTLP to the Collector, and the Collector forwards them to a Prometheus-compatible backend over remote write. This is useful when:

  • Your service runs behind a load balancer with no direct scraping access
  • You want to control when metrics are sent
  • You want immediate emission rather than waiting for a scrape interval

A simple Remote Write config:

exporters:
  prometheusremotewrite:
    endpoint: https://collector.example.com/api/v1/write
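
To put it to work, the exporter sits at the end of a metrics pipeline. A minimal sketch, assuming your applications send OTLP to the Collector; the endpoint is a placeholder:

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  prometheusremotewrite:
    endpoint: https://metrics.example.com/api/v1/write  # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]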

Which one should you use?

If you're already exposing Prometheus metrics, pull-based is the natural fit—you point the Collector at the same endpoints. If you need more control or your service isn't scrape-friendly, push-based works better.

Working With Metrics That Have Multiple Dimensions

Some metrics break down across several labels—user_id, tenant_id, region, or instance. When you're instrumenting real systems, this pattern is common:

http_requests_total{user_id="42"}
latency_ms{tenant="acme", route="/checkout"}
queue_depth{shard="7", node="cache-3"}

These are high-cardinality metrics, and they are valuable. You get a more detailed view of what specific users, tenants, or components are experiencing.

But sometimes you may want to reduce, reshape, or drop certain dimensions before exporting.

Example: dropping series that carry a dimension you don't need

metric_relabel_configs:
- source_labels: [__name__, user_id]
  # drop only the request_duration_ms series that have a user_id label set
  regex: 'request_duration_ms;.+'
  action: drop

You might do this when:

  • You don't need per-user metrics
  • You only care about route-level latency
  • You want to reduce the number of series that reach your backend

Example: keeping only metrics you care about

metric_relabel_configs:
- source_labels: [__name__]
  regex: 'cart_.*'
  action: keep

This keeps all cart_* metrics and drops everything else—useful when you want a very focused dataset during testing.

Example: renaming metrics

metric_relabel_configs:
- source_labels: [__name__]
  # match the old metric name(s) you want to standardize; this pattern is illustrative
  regex: 'cart_checkout_duration_ms|checkout_request_latency_ms'
  target_label: __name__
  replacement: 'checkout_latency_ms'
  action: replace

This helps you maintain consistency across services that export similar metrics with different names.

How to Think About High-Cardinality Metrics

You get to pick the shape of your metrics:

  • Sometimes you want full detail—per-tenant latency or per-user error spikes give you fast answers.
  • Sometimes you want to aggregate and keep the signal tighter.
  • Sometimes you want to drop an entire dimension because it isn't meaningful for your dashboards or alerting.

All of these choices are valid. The key is that you have control, and the Collector's relabeling rules let you apply that control at ingestion time, before metrics move further into your pipeline.

💡
For concrete alerting patterns that work well in production, our guide covers proven Prometheus examples!

How New Services Emit Metrics in a Prometheus + OpenTelemetry Setup

If you're building a new service and you want it to emit metrics alongside your existing Prometheus setup, OpenTelemetry gives you straightforward options.

In Go, you can use the Prometheus bridge from OpenTelemetry. This lets you keep your existing Prometheus instrumentation while still exporting through OpenTelemetry:

package main

import (
    "go.opentelemetry.io/contrib/bridges/prometheus"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
    // Bridge metrics from the Prometheus client library into the OTel SDK,
    // then register the provider so all metrics flow through one pipeline.
    bridge := prometheus.NewMetricProducer()
    reader := metric.NewManualReader(metric.WithProducer(bridge))
    provider := metric.NewMeterProvider(metric.WithReader(reader))
    otel.SetMeterProvider(provider)
}

This works well when parts of your codebase already rely on Prometheus libraries. You don't have to rework instrumentation—the bridge ensures all metrics reach your backend, whether they originate from Prometheus or OpenTelemetry.

When You Need a Prometheus Exporter

A Prometheus exporter exposes metrics in the Prometheus text format so they can be scraped. If you're creating a service that other systems need to scrape, exporting in Prometheus format is the right choice.

OpenTelemetry includes its own Prometheus exporter, which takes OTLP metrics and exposes them in Prometheus format:

exporters:
  prometheus:
    endpoint: 'localhost:8888'

This makes OTLP metrics available at:

localhost:8888/metrics

A Prometheus server can scrape this endpoint just like it would scrape any other Prometheus target.
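
On the Prometheus side, that's an ordinary scrape job. A sketch for prometheus.yml:

scrape_configs:
- job_name: 'otel-collector'
  scrape_interval: 30s
  static_configs:
  - targets: ['localhost:8888']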

If you control both sides—your instrumentation and your backend—you can skip the extra translation step and export metrics directly using OTLP. It keeps your pipeline consistent and preserves the full metric structure.

Practical Scenarios and How They Play Out

When you plug the Prometheus receiver into your OpenTelemetry setup, a few patterns naturally show up. These are signals that help you understand how the Collector is interpreting your metrics. Here's what you'll usually see and how you can troubleshoot each scenario.

Scrape targets aren't reachable

If the Collector can't scrape a target like localhost:8080, you'll see log entries similar to:

level=warn msg="Error scraping target" err="connection refused"

Common checks you can make:

Is the service actually running? Sometimes the Collector starts before the application does.

Is the container or host using a different network namespace? In Docker, localhost inside the Collector isn't the same as localhost on your machine.

Is a firewall or security group blocking the port? Port 8080 needs to be reachable from wherever the Collector runs.
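
For the Docker case above, pointing the target at the service name on the shared network usually fixes it. A sketch, assuming a Compose service named app:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'my-service'
        static_configs:
        - targets: ['app:8080']   # service name on the Docker network, not localhost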

Once the endpoint responds, the receiver resumes scraping automatically.

Metrics look different after relabeling

Relabeling rules can change the shape of the metrics you see later in the pipeline. This typically happens when a regex matches more broadly or narrowly than expected.

What you can check:

Print the raw /metrics output: Verify what labels and names exist before relabeling.

Temporarily remove a relabel rule: This helps identify which rule reshaped the series.

Test your regex against sample data: Small adjustments often produce very different results.

Relabeling is powerful—understanding exactly how your rules behave gives you precise control.

Duplicate metric names across scrape jobs

If job A and job B both expose a metric like requests_total, OpenTelemetry treats them as a single metric with combined datapoints.

What you can do:

  • Add a distinguishing label with relabel_configs so each job's series stay separate (see the sketch below)
  • Rename metrics per job if the data isn't meant to be unified
  • Check queries in your backend to confirm whether the combined metric still answers the questions you care about

This isn't a conflict—just a reminder that name collisions across jobs merge metrics unless you separate them.
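
If you go with a distinguishing label, a relabel rule can stamp a static label onto everything a job scrapes, alongside the job label the receiver already sets. The label name and value here are illustrative:

receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'service-a'
        static_configs:
        - targets: ['service-a:8080']
        relabel_configs:
        - target_label: 'source'      # illustrative label name
          replacement: 'service-a'
          action: replace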

Format conversions during translation

Prometheus and OpenTelemetry share many metric types, but not all. For example:

  • Prometheus untyped metrics → OpenTelemetry gauge
  • Prometheus histogram → OTLP histogram
  • Prometheus summary → OTLP summary (a legacy data type kept for Prometheus compatibility)

To verify conversions:

  • Inspect the OTLP payload using the Collector's debug exporter (see the sketch below)
  • Check Collector logs for translation messages
  • Look at your backend's representation (which might add its own conventions)

The goal of the translation is consistency, not altering meaning.
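
The Collector's debug exporter is the quickest way to do that first check; adding it to the pipeline prints the translated metrics to the Collector's own logs. A minimal sketch:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [debug]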

Metrics with multiple label combinations

You may notice a metric expanding into hundreds or thousands of series when labels vary by user, tenant, region, or instance. This often looks like:

request_duration_ms{tenant="acme", user="42", route="/checkout"}

Whether this is helpful depends on your analysis needs.

To work with these effectively:

  • Use metric_relabel_configs to drop or merge labels — for example, remove user_id while keeping tenant_id.
  • Aggregate or reshape metrics in the Collector using processors such as the following (see the sketch below):
    • transform
    • filter
    • metricstransform
    • cumulativetodelta
  • Sample selectively (timers, traces) if needed.

High-cardinality metrics are common in real systems — being intentional about their shape helps you keep clarity and performance.
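
One concrete way to do that aggregation is the metricstransform processor, which can collapse a label across series. A sketch, reusing the metric and label names from the example above:

processors:
  metricstransform:
    transforms:
      - include: request_duration_ms
        action: update
        operations:
          - action: aggregate_labels
            label_set: [tenant, route]   # keep these labels; user is aggregated away
            aggregation_type: sum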

💡
Understanding how histogram buckets work and how to tune them is key for latency metrics—this guide explains it clearly!

The Practical Advantage

What makes this setup work is that you don't have to switch everything at once. Your existing Prometheus instrumentation continues running as-is, new services emit OTLP through OpenTelemetry, and the Collector bridges the gap. This gives you a migration path that fits how real systems grow: gradually, safely, and with full continuity.

In practice, you get a few clear advantages:

  • Your Prometheus /metrics endpoints remain untouched, so existing services stay stable.
  • New workloads adopt OpenTelemetry directly, without waiting for a full migration.
  • The Collector normalizes both formats, routing them through a single, consistent pipeline.
  • You change one subsystem at a time, rather than coordinating a large-scale overhaul.
  • You keep end-to-end visibility, since both Prometheus-scraped and OTLP-native metrics flow through one control point.

This pattern also fits naturally with Last9, a telemetry data platform built for high-cardinality metrics, large metric volumes, and mixed instrumentation setups. Our platform is Prometheus-compatible and OpenTelemetry-native, so you can route data from both systems without losing structure or context.

With Last9, you can:

  • Preserve high-cardinality dimensions without dropping labels
  • Run fast queries even as your metric volume scales
  • Use PromQL and OTLP-native signals in the same backend
  • Align directly with the Collector's data model, avoiding extra rewrites or transformations

This gives you an observability foundation that grows with your system. Prometheus continues doing what it's good at, OpenTelemetry lays the groundwork for future services, and Last9 receives the unified output without forcing timing or tooling changes.

Getting started takes just minutes, and if you're stuck at any point, book some time with us for a detailed walkthrough!

FAQs

Can Prometheus and OpenTelemetry run together?

Yes. Prometheus keeps exposing /metrics, and the OpenTelemetry Collector scrapes those endpoints and converts them to OTLP. New services can emit OTLP directly.

Do I need to rewrite my Prometheus instrumentation?

No. Existing Prometheus client libraries continue working. The Collector handles translation automatically.

How does the Prometheus receiver in the Collector work?

It scrapes your defined Prometheus targets, applies relabeling if configured, converts metrics to OTLP, and sends them through the Collector pipeline.

What happens to Prometheus' labels during translation?

Each label becomes an OpenTelemetry attribute. The mapping follows the OpenTelemetry spec, so structure and meaning stay intact.

How are Prometheus metric types mapped to OpenTelemetry?

Counters, Gauges, and Histograms map directly. Summaries are carried over as OTLP summaries. Untyped metrics become gauges.

Can I still use relabeling rules?

Yes. Both relabel_configs and metric_relabel_configs work the same way in the Prometheus receiver. You can drop, keep, or rename metrics before they move downstream.

Should I use pull-based or push-based collection?

Pull-based scraping is simpler if you already expose Prometheus metrics. Push-based remote write works well when services can't be scraped or need explicit emission control.

Do I need the Prometheus exporter?

Only if another system needs to scrape metrics. If you control both ends, exporting via OTLP keeps the pipeline consistent and avoids extra formatting.

How do new services emit metrics if older ones still use Prometheus?

New services use OpenTelemetry SDKs, while older services continue with Prometheus. The Collector unifies both streams.

How do I handle metrics with many label combinations?

You can keep them as-is, drop labels using relabeling, or aggregate inside the Collector. High-cardinality metrics are normal—you shape them based on what you need to observe.

Authors
Anjali Udasi

Helping to make the tech a little less intimidating.
