
Grafana Tempo: Setup, Configuration, and Best Practices

A practical guide to setting up Grafana Tempo, configuring key components, and understanding how to use tracing across your services.

Nov 4th, ‘25

As systems grow, understanding how a request moves across multiple services becomes harder. Traces help bring this picture together by showing the exact path a request takes, along with the timings that matter.

Grafana Tempo is built for this kind of workload. It stores traces efficiently, works well with OpenTelemetry, and keeps the operational overhead low. In this guide, we’ll walk through how Tempo fits into a modern tracing stack, how to set it up, and the configuration patterns that teams use to keep things stable and predictable.

Why Distributed Tracing is Important

What Is Distributed Tracing, and When Do You Need It?

When you’re working with a growing set of services, it becomes harder to understand how a single request moves across your system. A request might pass through APIs, workers, queues, databases, or external services — all owned by different teams and built in different languages.

At some point, you’ll naturally start asking:

  • Where exactly did this request slow down?
  • Which service introduced the error?
  • How are these components connected during a live request?

Logs and metrics help, but they usually show you only one part of the story. They tell you what happened in one service, not how the entire request behaved across the system.

Distributed tracing fills this gap.

With tracing, each operation creates a span, and related spans form a trace. This gives you a clear, end-to-end picture of:

  • Timing for each step
  • Service dependencies
  • Errors in any part of the request
  • How long the full transaction takes

When you have this level of visibility, debugging becomes more straightforward and less dependent on guesswork. It also helps reduce MTTR and makes it easier for you to understand how your microservices interact.

💡
If you’re comparing tracing backends and want to understand where Tempo and Jaeger take different paths, we’ve broken that down here!

Grafana Tempo: A Cost-Effective Distributed Tracing Backend

If you’ve evaluated tracing backends before, you’ve probably noticed how quickly indexing and storage can increase your operational costs. Systems like Jaeger and Zipkin work well, but indexing every span attribute at scale becomes expensive.

This leads to questions you've probably asked yourself:

  • Do I really need to index every trace field?
  • Can I keep trace data longer without storage pressure?
  • How do I store high-volume trace data affordably?

Grafana Tempo approaches this differently. Tempo is a high-volume distributed tracing backend that focuses on cost efficiency. Instead of indexing everything, it uses a trace ID lookup model and writes spans to object storage—S3, GCS, MinIO, etc.

Because there’s no heavy indexing layer, you get:

  • Lower operational overhead
  • Cheaper long-term trace retention
  • Faster ingestion
  • A simpler scaling model

Tempo works easily with OpenTelemetry, Jaeger, and Zipkin, so you can keep your current instrumentation and point it to Tempo without major changes. This makes Tempo a strong fit when you want scalable tracing without running a large indexing cluster.

Tempo’s Architecture Before You Deploy

Before you install Tempo, it helps to understand how the main components fit together. Tempo processes traces in a sequence, and each part plays a focused role.

You start with the distributor, which receives spans from your applications or your OpenTelemetry Collector. It doesn’t store data; it simply validates spans, groups them, and forwards them to an ingester. This keeps ingestion balanced across the system.

The ingester is where spans start turning into persisted trace data. It keeps spans in memory for a short time and writes them out as blocks. These blocks roll over based on settings you define, such as:

  • How long a block should remain open
  • How large the block is allowed to grow
  • How many traces are batched together

These values influence how quickly you can query traces and how efficiently Tempo uses storage.

After blocks are created, the compactor organizes them in object storage. It merges smaller blocks, applies retention rules, and keeps the storage layout consistent. This step helps the querier return results quickly, even when your trace data grows over time.

When you run a query, the querier retrieves the relevant blocks. Since Tempo uses a trace ID lookup model, the querier reads directly from object storage and assembles the trace before returning it to Grafana.

Some setups also include a query frontend. This layer sits between Grafana and the querier and can help when you expect several users or high query volume. The query frontend breaks large queries into smaller tasks, runs them in parallel, and caches results.

How the Components Work Together

Here’s a compact view of how trace data flows through Tempo, from ingestion to visualization:

Your App / OTel SDK / OTel Collector
                |
                v
          [ Distributor ]
                |
                v
            [ Ingester ]
         (writes blocks to object storage)
                |
                v
            [ Compactor ]
       (merges and optimizes block layout)
                |
                v
     [ Query Frontend ] (optional)
                |
                v
            [ Querier ]
                |
                v
              Grafana

When you search for a trace in Grafana, the request flows back to the querier, which pulls the relevant blocks from object storage and returns the assembled trace.

💡
If you’re also working with logs and want to connect your ELK setup to Grafana, we’ve covered that here.

Set up Your Grafana Tempo Instance in Minutes

Once you understand how Tempo processes traces, you can prepare your environment for installation. Setting up these pieces first makes deployment smoother and helps avoid changes later.

What You Need Before You Begin

Different parts of Tempo rely on specific external systems. Making sure you have these ready gives you a predictable foundation.

1. Kubernetes Cluster (Recommended)

Tempo can run on a single machine, but if you expect a steady or high volume of spans, Kubernetes gives you a cleaner scaling path.
You’ll be ready to deploy if you’re comfortable with:

  • pods
  • services
  • deployments
  • ConfigMaps
  • Helm charts

Kubernetes also maps well to Tempo’s internal components (ingester, distributor, querier, compactor).

2. Object Storage

Tempo stores all trace data in object storage (a local path backend is also available for testing).
You can use:

  • AWS S3
  • Google Cloud Storage
  • Azure Blob Storage
  • MinIO

You’ll need:

  • access key
  • secret key
  • bucket name
  • region
  • endpoint (for MinIO or custom S3 setups)

You’ll reference these values in your Tempo configuration or Helm chart.

3. Grafana

Grafana is where you’ll query your traces.
If you already run Grafana somewhere, you can add Tempo as a data source.
If not, you can deploy it using Docker or Helm.

4. Prometheus (Optional)

Prometheus helps you observe Tempo’s own metrics, including:

  • ingester queue usage
  • compaction intervals
  • trace ingestion numbers
  • WAL activity

It works well if you want to connect traces to metrics.

5. Loki (Optional)

If you want to move from a trace to a log line, Loki provides that connection.
Using Tempo, Loki, and Grafana together gives you a complete trace-to-log experience.

6. Docker (For Local Work)

If you’re experimenting locally, Docker is the most straightforward way to test Tempo without provisioning infrastructure.

7. Helm (For Kubernetes Deployments)

For production environments, Helm keeps your setup consistent and easier to upgrade.

💡
For a deeper look at integrating OpenTelemetry with Grafana’s platform, check out our detailed blog!

Choose Your Installation Method

Tempo supports different installation paths. Pick the one that aligns with your environment.

Option 1: Docker Compose (Local Development or Quick Testing)

Docker Compose is practical when you want to explore Tempo, test your OpenTelemetry pipeline, or try Grafana queries without configuring Kubernetes.

Step 1: Create docker-compose.yaml

This defines your Tempo instance and Grafana.

version: '3.8'

services:
  tempo:
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./tempo-local.yaml:/etc/tempo.yaml
    ports:
      - "14268:14268"
      - "4317:4317"
      - "4318:4318"
      - "9411:9411"
      - "3200:3200"

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    volumes:
      - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
      - ./grafana-dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml
    ports:
      - "3000:3000"

Step 2: Create tempo-local.yaml

This configuration enables OTLP, Jaeger, and Zipkin receivers and uses lightweight local storage.

server:
  http_listen_port: 3200

distributor:
  receivers:
    jaeger:
      protocols:
        thrift_compact:      # defaults to 6831/udp
        thrift_http:         # defaults to 14268
    zipkin:                  # defaults to 9411
    otlp:
      protocols:
        grpc:                # defaults to 4317
        http:                # defaults to 4318

ingester:
  max_block_duration: 5m
  max_block_bytes: 104857600   # ~100 MB
  max_block_traces: 50000

compactor:
  compaction_interval: 10s
  block_retention: 1h

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces

Step 3: Start the Stack

docker-compose up -d

You now have Tempo and Grafana running locally with all endpoints exposed.

Option 2: Kubernetes with Helm (Production or Scaled Deployments)

If you’re setting up Tempo for long-term use, Kubernetes and Helm provide a clean deployment model.

Step 1: Add the Grafana Helm Repository

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Step 2: Install Tempo Using Helm

helm install tempo grafana/tempo -f values.yaml

Minimal values.yaml for AWS S3

tempo:
  traces:
    otlp:
      grpc:
        enabled: true
      http:
        enabled: true

  storage:
    trace:
      backend: s3
      s3:
        bucket: "your-s3-bucket-name"
        endpoint: "s3.your-region.amazonaws.com"
        access_key_id: "YOUR_ACCESS_KEY"
        secret_access_key: "YOUR_SECRET_KEY"
        region: "your-region"

  wal:
    path: /var/tempo/wal

For production, you would expose these values through Kubernetes Secrets.
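
One sketch of that approach: keep the keys in a Kubernetes Secret and inject them as environment variables, then let Tempo expand them in its config (recent Tempo releases can expand environment variables via -config.expand-env=true; how the env vars reach the pods depends on the chart version, so check its values for an extraEnv or envFrom hook):

apiVersion: v1
kind: Secret
metadata:
  name: tempo-s3-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "YOUR_ACCESS_KEY"        # referenced as ${AWS_ACCESS_KEY_ID} in the Tempo config
  AWS_SECRET_ACCESS_KEY: "YOUR_SECRET_KEY"    # referenced as ${AWS_SECRET_ACCESS_KEY}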

💡
If you’d like to make your Grafana dashboards more dynamic and reusable via variables, this guide walks you through it!

Configure Grafana Tempo for Optimal Performance

Once you have Tempo running, the next step is configuring it so it stays responsive under load. Tempo gives you a lot of control through its configuration file, and understanding these sections helps you tune it with confidence.

The Tempo Configuration Structure

Tempo uses a YAML file (often tempo.yaml) to define how each component behaves. The file is organized into clear sections, and each one maps directly to a part of Tempo’s architecture. When you read through it, you’ll see that most of the settings are grouped by function.

Here’s how the main sections contribute to how Tempo runs:

server

This sets the HTTP and gRPC ports. It’s a small section, but it defines how the rest of your telemetry pipeline will connect to Tempo.

distributor

This is where you configure the protocols you want Tempo to receive.
Depending on your setup, you might enable:

  • OTLP (gRPC or HTTP)
  • Jaeger
  • Zipkin

The distributor uses this section to understand how spans should enter the system.

ingester

The ingester holds spans in memory and writes them as blocks to object storage.
This section is especially important because it determines:

  • How long spans stay in memory
  • How large a block can be
  • How many traces belong in a block

These choices affect durability, memory usage, and storage efficiency.

querier

This controls how Tempo retrieves and assembles trace data. You’ll tune this when you want to adjust query parallelism or match the querier’s performance to your expected workload.

compactor

The compactor organizes the blocks produced by ingesters.
It handles:

  • merging small blocks
  • trimming data based on retention rules
  • keeping the storage layout efficient

storage

This is where you define which object storage backend you want to use.
You provide:

  • the backend type (S3, GCS, Azure, MinIO, local)
  • credentials or connection settings
  • retention rules (if the backend supports it)

memberlist

If you’re running Tempo as a distributed system, this section configures the gossip layer that helps components find each other. It keeps the cluster aware of node membership without external service discovery.

blocks

These settings control how blocks are created and indexed.
You’ll rarely need to modify these early on, but they become important if you want to fine-tune ingestion and query behavior for large-scale traffic.

Key Configuration Parameters to Tune

Certain parameters have a direct effect on performance, reliability, and storage usage. Understanding what they do helps you choose values that match your workloads.

1. ingester.max_block_duration

This controls how long the ingester keeps spans in memory before writing them to storage.

Shorter durations mean:

  • more but smaller blocks
  • more storage operations
  • lower risk of losing recent spans during a restart

Longer durations mean:

  • fewer, larger blocks
  • better query performance
  • more memory usage
  • a larger in-memory window of spans

The right value depends on how much memory your ingester can use and how frequently you want blocks flushed.

2. ingester.max_block_bytes and ingester.max_block_traces

These limit the size of each block.

  • Larger limits reduce the number of blocks in storage
  • Fewer blocks lead to faster queries
  • Larger blocks consume more memory during creation

If your services generate high-cardinality spans, setting these values carefully helps avoid unnecessary memory pressure.

3. compactor.compaction_interval

This defines how often the compactor checks object storage for blocks it can reorganize.

Shorter intervals allow storage to stay tidy and efficient.
Longer intervals reduce compaction activity but may slow down queries on older traces.

You can tune this based on how active your system is and how much compaction overhead fits into your environment.

4. storage.trace.backend and backend settings

This is where you choose your storage engine:

  • s3
  • gcs
  • azure
  • local
  • minio

The backend determines how blocks are stored and retrieved.

Every backend also supports its own specific fields. For example, S3 has:

  • bucket
  • region
  • endpoint

Choosing the correct backend is one of the core decisions in any Tempo deployment.

5. block_retention (compactor)

Retention directly controls storage usage. It’s set on the compactor (the block_retention field in the earlier local config).
Shorter retention decreases storage load.
Longer retention gives you more trace history for debugging or analysis.

You can pick a value that matches your debugging window or compliance requirements.

6. overrides.max_query_parallelism

This setting determines how many queries the querier can run at the same time.
If you expect several engineers to query traces together, or if you rely on large trace searches, you can increase this value.
It should match the CPU and memory available in your querier.

7. limits.max_trace_bytes

This protects your deployment from unusually large traces.
A trace that exceeds this size will be dropped before it affects memory usage.

This is useful if:

  • A client generates very large spans
  • Instrumentation accidentally produces verbose payloads
  • A single malformed trace could consume too much memory

Setting a reasonable limit helps keep your system healthy.
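
To see how these knobs sit next to each other, here’s an illustrative fragment of tempo.yaml. Treat it as a sketch rather than a recommendation: the values are placeholders, and key names and nesting vary between Tempo releases, so confirm them against the configuration reference for the version you run.

ingester:
  max_block_duration: 10m          # how long spans stay in memory before a block is cut
  max_block_bytes: 524288000       # ~500 MB cap per block

compactor:
  compaction:                      # in recent Tempo releases, retention lives under compaction
    block_retention: 168h          # keep trace blocks for 7 days

storage:
  trace:
    backend: s3                    # s3 | gcs | azure | local
    s3:
      bucket: your-s3-bucket-name
      region: your-region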

💡
You can also set up Grafana in a fully containerized workflow, and we’ve outlined the approach here!

Integrate Grafana Tempo with Your Applications

Once Tempo is running, the next step is getting your services to send traces and setting up Grafana to read them. This is where your observability pipeline comes together — your applications produce spans, Tempo stores them, and Grafana helps you explore them.

Instrument Your Applications with OpenTelemetry

OpenTelemetry is the easiest way for you to add tracing to your applications. You start by adding the SDK for your language and then enabling span creation and context propagation. This gives you structured trace data that Tempo can understand.

You’ll begin by adding the OTel SDK to your service and initializing a tracer provider. This is where you define how spans are created and what resources describe your service. Most SDKs also provide automatic instrumentation, so you don’t have to wrap every operation manually.

Along with the SDK, you need a way to export spans. Many teams introduce the OpenTelemetry Collector as an intermediary. It receives spans from your application and forwards them to Tempo, which gives you space to add batching, filtering, or sampling later.

Here’s a minimal Collector example that sends traces to Tempo:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: "tempo:4317"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]

A Collector isn’t mandatory — you can export directly from your SDK to tempo:4317 — but using one makes the pipeline easier to manage as your system grows.
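
If you do export directly from the SDK, most OpenTelemetry SDKs read the standard OTLP environment variables, so pointing a service at Tempo can be a few lines in your compose file (my-app is a hypothetical service here):

services:
  my-app:                                        # hypothetical application service
    environment:
      - OTEL_SERVICE_NAME=my-app
      - OTEL_TRACES_EXPORTER=otlp
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://tempo:4317
      - OTEL_EXPORTER_OTLP_PROTOCOL=grpc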

After this, you can start creating spans. Automatic instrumentation covers common frameworks, while manual spans highlight the operations that matter most to you. The key step is context propagation, where your service forwards trace context through headers or metadata so Tempo can link spans across services.

Connect Grafana to Tempo

Once traces reach Tempo, Grafana becomes your window into them. Grafana queries Tempo using its HTTP endpoint, so setup is mostly about pointing Grafana to the right URL.

You add Tempo as a data source from Connections → Data sources, then choose Tempo from the list. Grafana asks for a name and a query URL. Depending on your environment, this might look like:

  • Kubernetes:
    http://tempo.tempo.svc.cluster.local:3200
  • Docker Compose:
    http://tempo:3200

After adding the URL, you can decide whether to enable optional features:

  • Service Graphs if Prometheus is available
  • Trace-to-Logs when using Loki
  • Trace-to-Metrics when you want a direct bridge to Prometheus metrics

These integrations help you move across telemetry types without leaving the Grafana Explore view.

Click Save & test to confirm that Grafana can reach Tempo.
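
If you manage Grafana through provisioning files instead of the UI, the same data source and integrations can be declared in YAML. A sketch, assuming a Kubernetes setup and that your Loki and Prometheus data sources use the UIDs loki and prometheus:

apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo.tempo.svc.cluster.local:3200
    jsonData:
      tracesToLogsV2:
        datasourceUid: loki          # enables Trace-to-Logs
      serviceMap:
        datasourceUid: prometheus    # enables Service Graphs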

Once the data source is active, go to Explore, pick Tempo as the source, and start searching for traces. You can search by:

  • trace ID
  • service name
  • span name
  • duration
  • attributes

If instrumentation is working, you’ll see traces appear as soon as requests pass through your application.

Query and Explore Traces in Grafana Tempo

Once your applications begin sending traces to Tempo and Grafana is connected, you can start examining how requests travel through your system. This is where you see complete request paths instead of isolated events, which helps you understand delays, errors, and service interactions.

Explore View in Grafana

The Explore panel is where you search for traces and inspect them.
When you select your Tempo data source, Grafana switches to a tracing-focused layout that gives you multiple ways to find the data you need.

You can search by:

  • Trace ID, if you already have one
  • Service name, to view recent requests handled by that service
  • Span name, for specific operations such as handlers or database calls
  • Duration, to locate slow or unusually long requests
  • Attributes, such as user ID, route, or status code

These options give you a flexible way to narrow down the traces that matter.

Trace Details View

Opening a trace shows a structured timeline of all spans involved in the request.
Each part of the request is arranged visually so you can see:

  • the sequence of calls across services
  • time spent in each part of the request
  • upstream and downstream operations
  • attributes and metadata attached to each span

Expanding a span reveals tags, events, and resource information, which helps you understand where delays or unexpected behavior appear.

Service Relationships

If you enable service graph support, Grafana can display the connections between your services.
This gives you a broader, system-level view rather than focusing on a single trace.

Graphs help you spot:

  • common request paths
  • cross-service dependencies
  • services that handle high volumes of calls
  • unexpected communication between components

It’s a helpful way to confirm how your architecture behaves in production.
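
Service graphs depend on Tempo’s metrics generator writing graph metrics to Prometheus. A sketch of the Tempo-side configuration, assuming Prometheus is reachable at http://prometheus:9090 with remote write enabled:

metrics_generator:
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]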

Logs and Metrics from Traces

When Tempo is integrated with Loki or Prometheus, Grafana allows you to jump from a span to related logs or metrics.
This creates a smooth workflow because you can:

  • move from a span to its logs
  • check latency or error metrics for the same service
  • open dashboards that show trends related to the trace

Everything stays in one place, so you don’t switch tools while debugging.

Identify Patterns and Unusual Behavior

As you get comfortable with Tempo and Grafana, you can use searches to identify broader patterns. You might look for:

  • traces that consistently exceed a duration threshold
  • differences in span timing between versions of a service
  • attributes linked to slow requests
  • changes in request flow during deploys or rollouts

This turns tracing into both a debugging tool and a way to understand your system over time.

💡
With OpenTelemetry, Tempo, and Last9 MCP, you can pull real trace data (latency, spans, and service context) directly into your IDE to shorten debugging time.

Production Practices for Grafana Tempo

Once you have a working Tempo setup, the next step is shaping it for production use. Tempo stays lightweight, but the way you configure storage, components, and query paths can influence performance as traffic grows. The goal is to give you predictable behavior, stable ingestion, and clear trace visibility during peak load.

Retention and Storage Strategy

Tempo stores all trace data in object storage, so your retention policy has a direct effect on cost and query volume.
A good approach is to start with a clear idea of how long you actually need traces.

You can tune retention by adjusting:

  • block retention settings in the storage section
  • compaction rules that merge or remove older blocks
  • object storage lifecycle policies, if your backend supports them

Shorter retention reduces storage volume.
Longer retention helps when you want to examine older incidents or long-running performance trends.

What matters is choosing a timeframe that matches how your team debugs and reviews issues.

Block Size and Block Timing

The size and timing of blocks written by the ingester have a noticeable impact on cost and query speed.
Larger blocks usually mean faster queries because Tempo needs fewer reads to reconstruct a trace. Smaller blocks give you lower memory usage but add read overhead.

A practical pattern is:

  • Keep block duration moderate (for example, a few minutes)
  • Set block bytes high enough to avoid too many small blocks
  • Adjust max block traces only if your services create exceptionally large spans

Start with defaults, then watch memory usage and query speed as traffic increases. Small adjustments go a long way here.

WAL Usage and Reliability

The Write-Ahead Log (WAL) helps Tempo avoid losing in-flight spans if an ingester restarts.
In production, it’s useful to keep WAL enabled and point it to durable storage inside your cluster.

When WAL is configured well, you gain:

  • consistent ingestion across restarts
  • predictable recovery behavior
  • lower risk of span gaps during deployments

The WAL directory does not need to be large, but it should sit on a reliable disk.
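
In the config file, the WAL location sits under the trace storage block. A minimal sketch, assuming a persistent volume mounted at /var/tempo:

storage:
  trace:
    wal:
      path: /var/tempo/wal   # keep this on durable storage, not an emptyDir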

Scaling Tempo Components

Tempo’s components scale independently.
This means you can match capacity with the parts of your workload that need it most.

A few patterns work well:

  • More distributors if you expect high ingestion fan-out
  • More ingesters if your trace volume grows and you want smoother block creation
  • More queriers when several people search traces together or dashboards rely on frequent queries
  • A query frontend when you want parallel query execution or caching

Start with a single instance for each role, then add replicas where you see pressure.
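
With the tempo-distributed Helm chart (an assumption here; the monolithic chart runs all roles in one process), each component exposes its own replica count, so scaling a single role is a small values change:

distributor:
  replicas: 2
ingester:
  replicas: 3
querier:
  replicas: 2
queryFrontend:
  replicas: 1
compactor:
  replicas: 1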

Load Expectations and Query Behavior

Most of your traffic will be write-heavy (spans coming from applications).
Query load fluctuates — sometimes only one engineer searches traces, other times several people look at slow requests during an incident.

You can manage this by:

  • setting query parallelism in the querier
  • giving the querier enough CPU to handle wide traces
  • enabling caching in the query frontend for popular spans or repeated searches

Even small changes here improve how quickly traces load during busy periods.

Sampling Strategy

If your system emits a high number of spans, sampling can help you manage storage and cost.
Tempo works well with upstream sampling in the OpenTelemetry Collector.

A simple pattern is:

  • head sampling for general workloads
  • tail sampling for slow traces, errors, or outliers
  • rules-based sampling for requests with specific attributes (for example, user type or route)

The key is sampling where you gain the most insight, not uniformly sampling everything.
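
Tail sampling like this usually runs in the OpenTelemetry Collector (the tail_sampling processor ships in the contrib distribution). A sketch with placeholder thresholds:

processors:
  tail_sampling:
    decision_wait: 10s                 # wait for late spans before deciding
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow-traces
        type: latency
        latency:
          threshold_ms: 2000
      - name: baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 10

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]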

Component Health and Metrics to Watch

Prometheus helps you understand how Tempo behaves in production.
A few metrics are useful to track:

  • ingester queue depth
  • number of completed compaction cycles
  • storage interaction latency
  • querier request rates
  • WAL usage
  • rejected or oversized traces

These numbers help you adjust resource limits or scale components based on real workload patterns.
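
Tempo exposes these as Prometheus metrics on its HTTP port, so a basic scrape job is enough to get started (the tempo:3200 target assumes the setup shown earlier in this guide):

scrape_configs:
  - job_name: tempo
    static_configs:
      - targets: ['tempo:3200']   # Tempo's HTTP listen port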

Your Observability with Grafana Tempo + Last9

Adopting Grafana Tempo adds a crucial tracing layer to your observability stack. Tempo helps you follow how a request moves across services, how long each step takes, and where delays appear.

When you use it alongside Prometheus for metrics and Loki for logs, you get a complete view of your system—each signal supporting a different part of your debugging workflow.

With Last9, this combination fits naturally. We’ve designed our ingestion and storage layers to handle detailed telemetry without asking you to reduce attributes or shorten retention. Tempo’s trace model aligns well with how we organize metrics and logs, so you can move across all three signals without switching tools or managing separate backends.

With our platform, you get:

  • Stable ingestion for high-cardinality spans and attributes
  • Predictable performance during trace queries
  • Long-term storage without extra tuning
  • Clear connections between traces, logs, and metrics

Tempo shows how a request moved through your services. Prometheus explains how often something happens. Loki provides the event-level detail behind each span. Our platform brings these signals together, helping you move from a symptom to its cause with fewer steps.

The result is an observability setup that stays steady as your traffic grows and remains easy to work with over time.

Start for free today or book some time with us for a detailed walkthrough!


FAQs

What is Grafana Tempo?
Grafana Tempo is a distributed tracing backend that stores trace data in object storage without requiring heavy indexing. It accepts spans from OpenTelemetry, Jaeger, and Zipkin, and works with Grafana to visualize request flows across services.

Is Grafana Tempo free?
Yes. Tempo is free to use. You can deploy it locally, in Kubernetes, or on any infrastructure without licensing costs.

Is Grafana Tempo open source?
Yes. Tempo is fully open source and maintained by Grafana Labs. You can find the source code, documentation, and community discussions on GitHub.

How to set up Grafana Tempo?
You can deploy Tempo using Docker Compose for local testing or Helm charts for Kubernetes. At minimum, you configure receivers (OTLP, Jaeger, Zipkin), set storage to an S3-compatible backend or local storage, and connect Grafana to the Tempo query endpoint. Once configured, your instrumented applications or OpenTelemetry Collector can start sending traces.

How does Grafana Tempo work?
Tempo receives spans through its distributor, writes them into blocks via ingesters, stores those blocks in object storage, and retrieves them through the querier. It uses a trace-ID lookup model rather than indexing every attribute, which keeps ingestion fast and storage costs low. Grafana queries Tempo to display traces.

What is Google Cloud Trace?
Google Cloud Trace is a distributed tracing service in Google Cloud. It records latency data and request paths for applications running on Google Cloud or instrumented with OpenTelemetry. It is tightly integrated with the Google Cloud operations suite.

What is Zipkin Tracing?
Zipkin is an open-source tracing system that collects, stores, and visualizes trace data. It was one of the early tracing tools and supports multiple tracers. Tempo can ingest spans from Zipkin because it understands the Zipkin format.

What is Prometheus?
Prometheus is an open-source monitoring system used for metrics collection and alerting. It scrapes time-series data from your services, stores it locally or remotely, and lets you query it using PromQL. In a full telemetry stack, Prometheus handles metrics, Loki handles logs, and Tempo handles traces.

How do you report an issue on GitHub, and what should you include so it can be reproduced?
You can open an issue in the Grafana Tempo GitHub repository. When reporting, include:

  • Your Tempo version
  • Deployment method (Docker, Kubernetes, Helm)
  • Your config file or relevant snippets
  • Steps to reproduce the issue
  • Expected vs. observed behavior
  • Logs from distributors, ingesters, or queriers
  • Details about your object storage backend

This information helps maintainers reproduce the issue and identify the cause.

What are the benefits of using Grafana Tempo for distributed tracing?
Tempo provides an efficient way to store traces without indexing, which keeps storage simple and cost-effective. It scales well, works with standard tracing protocols, and integrates cleanly with Grafana, Prometheus, and Loki. This makes it straightforward to analyze request flows and correlate traces with logs and metrics.

How does Grafana Tempo compare to Jaeger?
Jaeger includes both tracing and storage components and relies on indexing to support trace queries. Tempo removes most indexing and stores trace blocks directly in object storage. This reduces operational overhead and makes it easier to scale trace retention. Both support OpenTelemetry, but Tempo is often a better fit when you want simple operations, large retention windows, and integration with the Grafana ecosystem.

Authors
Anjali Udasi

Helping to make the tech a little less intimidating.
