
How to Connect Jaeger with Your APM

Learn how to connect Jaeger with your APM to combine tracing and performance monitoring for deeper system visibility.

Sep 22nd, ‘25

Microservices make it tough to understand how applications behave end-to-end. Most teams already rely on an Application Performance Monitoring (APM) tool to track system health. But as requests move across many services, you also need distributed tracing. Jaeger gives you that visibility.

The real value comes from connecting the two. Instead of running APM and Jaeger in silos, you can combine their strengths: metrics from your APM and traces from Jaeger together give you a clearer view of performance.

In this blog, we talk about how to integrate Jaeger with your APM setup and use both together to understand your systems better.

Why Integrate Jaeger with Your APM

Most teams already depend on an APM tool for system health. It gives you dashboards, alerts, and a broad view of application performance. But when requests span multiple services, APM alone isn’t enough. That’s where distributed tracing with Jaeger adds value.

Jaeger is built to follow requests across services, showing you where latency builds up or errors originate. If a user request touches ten microservices, Jaeger lets you see that path end to end and spot the exact point of slowdown. That level of detail makes debugging and optimization far easier.

An APM setup represents serious time and effort. It already collects metrics, logs, and sometimes limited traces. Replacing it isn’t the goal. The real advantage comes from combining both: using Jaeger to add depth to the data you already rely on.

Integration gives you:

  • Richer insights – trace data alongside metrics and logs.
  • Faster debugging – Jaeger’s trace detail inside the APM views you already use.
  • Less tool-switching – one place to explore system health.
  • Better return on effort – your APM investment and Jaeger deployment working together.

Together, APM and Jaeger form a single, stronger observability stack—combining broad system health with deep request-level detail.

💡
If you’re new to tracing, this overview of APM tracing explains how it fits into application monitoring.

Prerequisites for Integration

Getting Jaeger and your APM tool to work together requires some groundwork. A few checks up front make the integration smoother and help avoid surprises later.

Check Your APM’s Trace Support

The first step is confirming that your APM can ingest external trace data. Many newer platforms—especially those built on OpenTelemetry—support this, but details vary. Look at your APM’s documentation for:

  • Supported formats – does it accept Jaeger’s Thrift/Protobuf, or prefer OTLP?
  • Integration method – APIs, agents, or collector-based approaches?
  • Schema mapping – how trace attributes and spans are handled inside the APM’s model.

Verify Jaeger Deployment

Jaeger also needs to be running cleanly before you connect it elsewhere. Make sure each part is in place:

  • Agents and libraries – applications instrumented with Jaeger clients sending traces.
  • Collector – receiving data from agents and storing it.
  • Query service – able to search and visualize traces in the UI.
  • Storage backend – Elasticsearch or Cassandra configured and healthy (Kafka, if used, sits in front of storage as an ingestion buffer).

A stable Jaeger setup ensures the data you send into your APM is complete and accurate.
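
If you want a quick way to sanity-check this locally before wiring anything into your APM, the all-in-one image runs every Jaeger component in a single container. A minimal Docker Compose sketch; the image tag and port mappings are illustrative defaults:

services:
  jaeger:
    image: jaegertracing/all-in-one:1.57
    environment:
      COLLECTOR_OTLP_ENABLED: "true"   # accept OTLP alongside the native Jaeger ports
    ports:
      - "16686:16686"   # Jaeger UI / query service
      - "14268:14268"   # collector, Jaeger Thrift over HTTP
      - "14250:14250"   # collector, Jaeger gRPC
      - "4317:4317"     # OTLP gRPC

This is for local verification only; production deployments typically run the collector, query service, and storage backend separately.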

Use OpenTelemetry for Integration

OpenTelemetry (OTel) provides a standard way to collect and move telemetry data—traces, metrics, and logs—across systems. It includes APIs, SDKs, and a Collector service that can receive, process, and export telemetry in common formats.

For integration, OpenTelemetry helps in three ways:

  • Vendor neutrality – collect data once, export to multiple backends (Jaeger, your APM, or others).
  • Standardization – a common language for telemetry across tools.
  • Flexibility – the Collector can transform, filter, and route data.

Familiarity with the OpenTelemetry Collector is especially useful, since it becomes the central point for handling traces.
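
As a quick illustration of that vendor neutrality, one Collector pipeline can receive Jaeger traces and fan them out to more than one backend at once. A minimal sketch; the endpoints are placeholders and TLS/auth settings are omitted:

receivers:
  jaeger:
    protocols:
      thrift_http:
      grpc:

exporters:
  otlp/jaeger:
    endpoint: "jaeger-collector:4317"          # recent Jaeger versions accept OTLP directly
  otlp/apm:
    endpoint: "your-apm-otlp-endpoint:4317"    # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [otlp/jaeger, otlp/apm]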

Method 1: Jaeger + APM Through the OpenTelemetry Collector

The most flexible way to connect Jaeger with an APM is to use the OpenTelemetry Collector. It can ingest Jaeger traces, process them, and forward them to your APM in the format it expects.

Step 1: Deploy the Collector

The Collector runs as a proxy for telemetry. In Kubernetes, it’s typically deployed as a DaemonSet or as a sidecar. On VMs, it runs as a standalone service.

Deployment choices:

  • Agent – runs close to applications (sidecar or DaemonSet) to receive traces.
  • Gateway – aggregates traces from agents, applies processing, and exports to backends.
  • Resource planning – ensure enough CPU and memory if you expect high trace volume or heavy processing.

Example DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector-agent
  labels:
    app: otel-collector-agent
spec:
  selector:
    matchLabels:
      app: otel-collector-agent
  template:
    metadata:
      labels:
        app: otel-collector-agent
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          command: ["--config=/conf/collector-config.yaml"]
          ports:
            - containerPort: 14268   # Jaeger Thrift HTTP
            - containerPort: 14250   # Jaeger gRPC
            - containerPort: 4317    # OTLP gRPC
          volumeMounts:
            - name: otel-collector-config
              mountPath: /conf
      volumes:
        - name: otel-collector-config
          configMap:
            name: otel-collector-config

Step 2: Configure Collector to Receive Jaeger Traces

The Collector needs a Jaeger receiver to accept traces from your apps.

collector-config.yaml:

receivers:
  jaeger:
    protocols:
      thrift_http:
      grpc:

processors:
  batch:
    send_batch_size: 100
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: []  # defined in the next step; the Collector requires at least one exporter per pipeline to start

Update your Jaeger client libraries to point to the Collector’s Jaeger receiver ports instead of a standalone Jaeger Agent.
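
How you repoint clients depends on the language and library, but most Jaeger clients read standard environment variables. A sketch for Kubernetes, where my-app is a placeholder application and the Collector pods are assumed to be reachable through a Service named otel-collector-agent:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            # Jaeger clients read this and send Thrift-over-HTTP spans straight to the Collector
            - name: JAEGER_ENDPOINT
              value: "http://otel-collector-agent:14268/api/traces"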

Step 3: Configure Export to APM

The Collector now needs to forward traces to your APM. Most vendors expose an OTLP endpoint, but some also provide dedicated exporters.

Common setup (OTLP):

exporters:
  otlp:
    endpoint: "your-apm-otlp-endpoint:4317"
    headers:
      api-key: "your-apm-api-key"

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: [otlp]

Vendor exporters:
Some APMs (Datadog, New Relic, Splunk, etc.) have their own exporters in the opentelemetry-collector-contrib build. Check your APM’s docs for details.
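
As one example, the contrib build ships a Datadog exporter. The shape below is a sketch; the exact fields can change between versions, so treat your vendor's docs as the source of truth:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
      site: datadoghq.com

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: [datadog]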

Step 4: Validate Trace Flow

Before relying on the integration, confirm that traces are reaching the APM as expected.

  • Collector logs – check for successful reception and export.
  • Jaeger UI – traces should still appear here if Jaeger remains active.
  • APM dashboards – confirm traces show up alongside existing metrics and logs.
  • Trace detail – verify spans, attributes, and timings match what you’d expect from Jaeger.

If data looks off, revisit the Collector config, schema mapping, and your APM’s integration guide.
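
One low-effort way to confirm the Collector side is to temporarily add the debug exporter (named logging in older Collector releases) to the same pipeline and watch the Collector logs for span output:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: [otlp, debug]

Remove it once the flow is confirmed; detailed verbosity is noisy at production trace volumes.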

This method keeps Jaeger running as your tracing source while using OpenTelemetry to feed that data into your APM, giving you both detailed traces and high-level system health in one place.

If you’re looking for an OpenTelemetry-native backend that doesn’t force you to drop attributes or pre-aggregate, Last9 is built for that. Our platform ingests Jaeger traces directly over OTLP, keeps every span attribute queryable, and scales to millions of spans without slowing down dashboards.

💡
For more detail on how the OpenTelemetry Collector works and how to configure it, see our guide: The OpenTelemetry Collector Deep Dive.

Method 2: Direct Integration (When It’s Available)

Not every team needs the flexibility of the OpenTelemetry Collector. Some APM vendors support Jaeger natively, which can make integration faster if your requirements are simple.

Vendor-Specific Integrations

Direct integration usually falls into three categories:

1. Agents with Jaeger hooks
Some APM agents can intercept Jaeger spans directly from your application. For example:

  • Install the APM agent alongside your Jaeger client library.
  • Configure the agent to listen for Jaeger spans locally.
  • The agent enriches the trace (adding environment, host, or deployment metadata) and forwards it to the APM backend.

This keeps your application code unchanged but ties you to the vendor’s agent lifecycle.

2. Direct API ingestion
Certain APMs expose ingestion endpoints that accept Jaeger-formatted traces over HTTP or gRPC. In this model:

  • Your applications still send traces to a Jaeger Agent or Collector.
  • A small forwarding service (or config change) pushes those traces to the APM endpoint.
  • Supported protocols are usually Jaeger Thrift over HTTP or Jaeger gRPC.

Example config (pseudo):

exporters:
  jaeger-thrift:
    endpoint: "https://apm-vendor.example.com/api/v2/jaeger"
    insecure: false
    headers:
      authorization: "Bearer <API_KEY>"

This avoids running a full OpenTelemetry Collector but offers limited transformation and filtering.

3. Built-in connectors
Some APMs provide a toggleable Jaeger connector. Once enabled:

  • The APM backend pulls data directly from a Jaeger Collector or its storage backend (Cassandra, Elasticsearch, or Kafka).
  • No extra services are required, but schema alignment is handled entirely by the vendor.

This is the easiest setup but gives you the least control.

Custom Integrations

When neither vendor support nor the Collector fits, teams sometimes build custom bridges. While flexible, this requires careful engineering.

Key pieces you’ll need:

  • Data format conversion – e.g., transforming Jaeger’s Thrift/Protobuf payloads into your APM’s ingestion format (often JSON or OTLP).
  • Trace context mapping – making sure trace_id, span_id, and parent-child relationships stay intact. A mismatch here makes traces unusable.
  • Error handling – retries, backoff, and dead-letter queues to avoid losing spans during outages.
  • Performance considerations – buffering and batching to handle thousands of spans per second without backpressure.

A custom solution might be justified if:

  • Your APM doesn’t support OTLP or Jaeger natively.
  • You need strict schema transformations that vendor exporters don’t handle.
  • You want complete control over retention, enrichment, or redaction before export.

Example sketch of a forwarder, built on the (now-archived) jaeger-client Python library; the endpoint and payload shape are illustrative, not any vendor's real API:

from jaeger_client.reporter import NullReporter
import json
import requests

class CustomForwarder(NullReporter):
    """Forwards each finished span to an APM ingestion endpoint."""

    def report_span(self, span):
        ctx = span.context
        payload = {
            # Jaeger stores IDs as integers; most HTTP APIs expect hex strings
            "traceId": format(ctx.trace_id, "x"),
            "spanId": format(ctx.span_id, "x"),
            "operation": span.operation_name,
            "start": span.start_time,
            "duration": span.end_time - span.start_time,
            # span.tags holds Thrift Tag objects; keep only string-valued tags here
            "tags": [{"key": t.key, "value": t.vStr} for t in span.tags if t.vStr is not None],
        }
        requests.post(
            "https://apm-vendor.example.com/traces",   # placeholder endpoint
            headers={"Authorization": "Bearer <API_KEY>"},
            data=json.dumps(payload),
            timeout=5,
        )

This works, but quickly becomes a maintenance burden.

When to Use Direct Integration

  • Small to medium workloads – where the APM already supports Jaeger and you don’t need complex processing.
  • Rapid POCs – for quick trials before committing to a more robust setup.
  • Teams with vendor lock-in – if you’re committed to one APM ecosystem and its agents.

Direct integration trades flexibility for simplicity. It’s less overhead than running a Collector, but it limits how much you can transform, enrich, or route your trace data.

Common Challenges and Troubleshooting

Even after you’ve set up Jaeger, the OpenTelemetry Collector, and your APM, you may still run into issues. Knowing what to look for will save you time. Here are the most common problems you’ll face and how you can work through them.

Data Mismatch and Inconsistent Trace Context

Problem: Traces show up in your APM, but they look broken — attributes don’t map correctly, service names don’t align, or spans don’t link together the way you expect.

What you should check:

  • Schema review – compare Jaeger’s trace schema with the schema your APM expects. Focus on service names, attributes (tags), and span kinds. A mismatch here is the root cause of most broken traces.
  • OpenTelemetry processors – use processors like attributes, resource, or transform in the Collector to rename, add, or adjust attributes before export; a sketch follows this list. This gives you a clean way to normalize data without changing your application code.
  • Sampling alignment – confirm that both Jaeger and your APM use compatible sampling strategies. If Jaeger samples traces at one rate and your APM expects another, you’ll end up with incomplete or inconsistent data.
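
For example, the attributes and resource processors can copy a legacy Jaeger tag into the key your APM expects and stamp every span with an environment label. The attribute names below are illustrative:

processors:
  attributes:
    actions:
      # copy the old tag into the key the APM expects, then drop the original
      - key: service.version
        from_attribute: version
        action: upsert
      - key: version
        action: delete
  resource:
    attributes:
      - key: deployment.environment
        value: production
        action: upsert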

Network Connectivity and Firewall Issues

Problem: Your applications or Collector send traces, but nothing shows up in the APM.

What you should check:

  • Firewall rules – make sure all required ports are open. This includes the Collector’s receiver ports (e.g., Jaeger Thrift, gRPC, OTLP) and the APM’s ingestion endpoint.
  • Network reachability – run simple checks like ping, telnet, or curl from the Collector host to your APM endpoint. If those fail, your traces won’t get through either.
  • Proxy settings – if you’re in an environment that routes outbound traffic through a proxy, configure the Collector to use it. Otherwise, spans may never leave your network.
💡
With Last9 MCP, you can bring Jaeger traces and APM metrics into your IDE, giving you real-time production context to debug faster.

Performance Overhead of Collectors and Agents

Problem: Running the OpenTelemetry Collector or vendor agents increases CPU or memory usage, or you notice extra latency in your system.

What you should check:

  • Resource sizing – start with conservative CPU and memory allocations for the Collector. Monitor usage and scale up if you see pressure. Oversizing upfront can waste resources; undersizing can drop spans.
  • Batching – enable the batch processor in the Collector. This groups spans together before sending, which reduces network overhead and improves throughput; a sketch follows this list.
  • Sampling strategies – apply head-based sampling for predictable reduction or tail-based sampling in a gateway Collector for better control over which traces you keep. This keeps the load manageable while still giving you representative data.
  • Collector profiling – if overhead remains high, profile the Collector itself to identify slow processors or exporters. Sometimes a single misconfigured processor can become the bottleneck.
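
Here is what batching plus head-based sampling can look like in the Collector config; the batch size and sampling percentage are starting points to tune, not recommendations:

processors:
  batch:
    send_batch_size: 512
    timeout: 5s
  probabilistic_sampler:
    sampling_percentage: 25   # keep roughly 1 in 4 traces

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]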

Verifying Trace Export and Ingestion

Problem: You’re not sure whether traces are actually making it from your applications into the APM backend.

What you should check:

  • Collector logs – look for messages that confirm spans were successfully received and exported. Most exporters log both successes and errors.
  • APM ingestion metrics – many APMs expose metrics that show how many traces or spans were ingested. Compare these against what you expect from your workload.
  • Synthetic transactions – generate a known trace by running a simple synthetic request in your application. Then track it end to end: confirm it appears in Jaeger, check the Collector logs, and finally validate that it surfaces in your APM dashboards. This is the most reliable way to confirm that every stage of the pipeline is working.
💡
When you need to tie together traces, metrics, and logs for faster debugging, our APM Logs guide shows how those pieces work together: APM Logs: How to Get Started for Faster Debugging

Best Practices for Maintaining an Integrated Observability Stack

Getting Jaeger, the OpenTelemetry Collector, and your APM to work together is only the start. You need to maintain that integration over time to keep it reliable and useful.

Centralize Your Configurations

You’ll be managing configs for multiple components — Jaeger, the Collector, and APM agents. Keep all of them in version control (Git) so you can track changes and roll back if needed.

Use tools like Kubernetes ConfigMaps, Helm charts, or configuration management systems such as Ansible or Puppet to distribute and update configs consistently. This approach makes your setup reproducible and reduces mistakes when changes are rolled out across environments.
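
In Kubernetes, the ConfigMap referenced by the DaemonSet earlier is a natural place to keep the Collector config, so changes go through the same review process as the rest of your manifests. A sketch that mirrors the configuration built up in Method 1:

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  collector-config.yaml: |
    receivers:
      jaeger:
        protocols:
          thrift_http:
          grpc:
    processors:
      batch:
    exporters:
      otlp:
        endpoint: "your-apm-otlp-endpoint:4317"
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [batch]
          exporters: [otlp]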

Monitor the Integration Itself

Don’t just monitor your applications — monitor the observability pipeline too.

  • Collector metrics – the OpenTelemetry Collector exposes its own Prometheus metrics (see the snippet after this list). Track dropped spans, exporter failures, and resource usage.
  • APM ingestion health – most APMs expose ingestion pipeline metrics or health checks. Use these to confirm data flow.
  • Alerts – create alerts for anomalies like a sudden drop in trace volume, persistent export errors, or rising Collector latency.
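
To expose those Collector metrics, enable the internal telemetry endpoint and point Prometheus at it. The address and port below are common defaults, but the exact telemetry settings vary by Collector version:

service:
  telemetry:
    metrics:
      level: detailed
      address: 0.0.0.0:8888   # scrape /metrics on this port with Prometheus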

Stay Current with Releases

Both Jaeger and OpenTelemetry evolve quickly, and APM vendors ship frequent updates. Regular upgrades matter because they bring:

  • Performance improvements – better throughput, lower CPU/memory usage.
  • New features – more processors, new exporters, and richer trace handling.
  • Security patches – critical fixes that keep your telemetry infrastructure safe.

Always test upgrades in staging before rolling them out to production, especially when schemas or exporters change.

Document and Share Knowledge

Your integrated observability stack is complex — documenting it reduces friction for you and your team.

  • Architecture diagrams – show how traces flow from applications → Collector → APM.
  • Configuration details – note key parameters for Jaeger, the Collector, and APM exporters, and explain why they’re set.
  • Troubleshooting playbooks – capture recurring issues and fixes so others don’t have to rediscover them.
  • Training – run short sessions to help developers and operators understand how to use the tools for debugging and performance tuning.

Tracing and APM with Last9

Connecting Jaeger with your APM gives you visibility into performance, but keeping that integration running smoothly over time can be heavy on both operations and cost.

Last9 is built as an OpenTelemetry-native data platform that simplifies this process and adds capabilities beyond what Jaeger or traditional vendors provide.

Here’s how you benefit:

  • Simplify ingestion – send traces from Jaeger, metrics and application data from Last9 APM, and logs from your services into one place using OpenTelemetry. No extra adapters or one-off integrations.
  • Discover services automatically – Last9 builds a real-time service catalog from your telemetry, mapping dependencies across applications, databases, and infrastructure without manual tagging.
  • Go beyond Jaeger analysis – correlate traces with metrics and logs, run advanced analytics, and create alerts in a single platform, so you don’t need to keep switching tools.
  • Control observability costs – manage scale without losing context. Last9 applies streaming aggregation, tiered storage, and retention controls so you can keep detailed data without unpredictable bills.
  • Resolve issues faster – anomaly detection and correlation guide you toward the root cause quickly, cutting down mean time to resolution (MTTR).

Using Last9 as both your APM and your OpenTelemetry backend gives you a unified observability platform, one that maintains full context, continuously discovers services, and scales with your workloads.

Already sending us traces? Discover Services is ready for you now. New to tracing? It’s as simple as configuring OpenTelemetry.

FAQs

Is Jaeger an APM?
No. Jaeger is not an APM (Application Performance Monitoring) tool. It’s an open-source distributed tracing system that focuses on request flows across services. An APM usually covers metrics, logs, dashboards, and alerts in addition to tracing.

What is APM in Elasticsearch?
Elastic APM is part of the Elastic Observability stack. It collects performance metrics, errors, and traces from applications, storing them in Elasticsearch and visualizing them in Kibana.

What does an APM stand for?
APM stands for Application Performance Monitoring (or sometimes Application Performance Management). It refers to tools that track application health, performance, and availability.

How does Jaeger work?
Applications are instrumented with Jaeger client libraries that create spans for operations. These spans are sent to Jaeger Agents, collected by the Jaeger Collector, stored in a backend (like Elasticsearch or Cassandra), and queried through the Jaeger UI for visualization.

What is Jaeger Distributed Tracing?
It’s Jaeger’s ability to trace a single request as it moves through multiple services. Each service contributes spans, and Jaeger stitches them together into a full trace, showing where time was spent and where failures occurred.

What is AWS App Mesh?
AWS App Mesh is a service mesh for microservices running on AWS. It standardizes service-to-service communication, adding observability, traffic routing, and security at the mesh layer.

What is Elastic Observability?
Elastic Observability is Elastic’s solution for monitoring applications, infrastructure, and logs. It integrates Elastic APM, metrics, and logging into a single stack powered by Elasticsearch and Kibana.

How do I set up Jaeger APM for my application?
Strictly speaking, Jaeger is not an APM, but you can instrument your app with Jaeger client libraries or OpenTelemetry SDKs. Configure the app to send traces to a Jaeger Agent or directly to the OpenTelemetry Collector, then query them in the Jaeger UI.

How do I integrate Jaeger with my application?
Add Jaeger client libraries or OpenTelemetry instrumentation to your application code. Point the exporter to a Jaeger Agent or Collector endpoint. Once spans are generated, you can view full traces in the Jaeger UI.

How do you integrate Jaeger with Kubernetes?
Deploy Jaeger to your Kubernetes cluster (via Helm chart, Operator, or manifests). Instrument your applications with Jaeger or OpenTelemetry SDKs, then configure them to send spans to a Jaeger Agent sidecar or DaemonSet running in the cluster.

How do I integrate Jaeger with my application for performance monitoring?
Use OpenTelemetry SDKs to instrument your application for spans, then export those spans to Jaeger. Combine Jaeger traces with an APM tool for metrics and dashboards to get a full picture of application performance.

Authors
Anjali Udasi

Helping to make the tech a little less intimidating.
