When you run services on Quarkus, you need a steady stream of signals to understand how the application behaves—CPU trends, request timings, memory patterns, and how each endpoint responds under load.
Metrics give you that visibility. They help answer questions like:
- Is this service slowing down?
- Why did latency spike after the last deploy?
OpenTelemetry fits well here because it gives Quarkus a common way to generate and export metrics without locking you into a specific monitoring tool. You can send data to Prometheus, an OpenTelemetry Collector, Last9, or any backend that accepts OTLP.
This blog breaks down how OpenTelemetry metrics work in Quarkus and how you can expose them cleanly in your application.
Why OpenTelemetry Works Well for Metrics
OpenTelemetry provides a single specification for generating, transforming, and exporting metrics across distributed systems. Instead of relying on framework-specific registries or vendor plugins, OTel defines a stable data model (Gauge, Counter, Histogram, Exponential Histogram) and a standard export protocol (OTLP) that works across any backend.
In Quarkus, this removes the need to maintain separate paths for Micrometer, Prometheus, or custom exporters. The OTel SDK handles metric creation, aggregation, and collection, and the Quarkus OTel extension exposes them through a consistent pipeline. This means JVM metrics, REST endpoint metrics, and custom application metrics all follow the same semantic conventions and aggregation rules.
Because the data flows through a defined OTel pipeline, you can add processors, apply views to adjust aggregation, or route metrics through an OpenTelemetry Collector without changing application code. This keeps instrumentation predictable and makes it easier to correlate Quarkus metrics with traces and logs across the stack.
What is OpenTelemetry?
OpenTelemetry gives Quarkus a consistent way to produce metrics from runtime components and application code. Instead of relying on multiple metric libraries or exporter-specific formats, OTel defines how instruments behave, how data is sampled, and how aggregation happens before export. This makes the metric pipeline predictable when you're running several Quarkus services in a distributed setup.
OTel also defines semantic conventions for HTTP servers, JVM internals, and common application patterns. These conventions ensure you know exactly what a metric means—its unit, expected behavior, and labels—without working through separate documentation for each library.
What Do Metrics Represent in OpenTelemetry?
Metrics in OTel are numeric samples collected at intervals. They show how your service behaves over time and help track patterns like throughput trends, timing shifts, resource pressure, or load variations.
The main OTel metric instruments include:
- Counter — monotonic values such as request totals
- Gauge — sampled values like queue size or active threads
- Histogram — latency or size distributions
- Exponential Histogram — high-resolution distributions with compact encoding
In a Quarkus application, these instruments power runtime insights across HTTP endpoints, JVM memory, garbage collection, CPU load, and any custom business metrics.
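To make these concrete, here's a minimal, hedged sketch of creating each instrument kind through the OTel Meter API. The metric names and the queue are placeholders, and in a Quarkus service you would normally inject the Meter rather than reach for the global provider, as the examples later in this post show.

import java.util.concurrent.ConcurrentLinkedQueue;

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

public class InstrumentKinds {

    static final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeterProvider().get("example");

        // Counter: a value that only moves forward, such as total requests
        LongCounter requests = meter.counterBuilder("app.requests.total").build();
        requests.add(1);

        // Gauge: asynchronous, the callback is sampled at each collection interval
        meter.gaugeBuilder("app.queue.size")
            .ofLongs()
            .buildWithCallback(measurement -> measurement.record(queue.size()));

        // Histogram: records a distribution, such as request latency
        DoubleHistogram latency = meter.histogramBuilder("app.request.duration")
            .setUnit("ms")
            .build();
        latency.record(42.0);

        // Exponential histograms are selected through aggregation configuration
        // (for example, a view), not through a separate builder.
    }
}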
Typical questions metrics answer:
- How much CPU does this service use during peak traffic?
- Is memory stabilizing after a deployment?
- Is this API slowing down?
Why Consistent Metric Behavior Matters
OpenTelemetry defines how instruments behave and how data flows through the pipeline—creation, aggregation, temporality, and export. This predictable behavior helps you maintain stable dashboards and alerts across environments. Whether metrics go to Prometheus, an OTel Collector, or a backend like Last9, the structure remains the same.
Because Quarkus uses the OTel SDK directly, you can add processors, modify aggregation with metric views, or change exporters without rewriting instrumentation code.
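To make the views part concrete, here is a hedged sketch using the plain OpenTelemetry Java SDK rather than Quarkus wiring (in Quarkus the extension assembles the SDK for you). It shows what a view does: match an instrument and replace its default aggregation, here swapping in explicit histogram buckets for a hypothetical app.checkout.duration metric.

import java.util.List;

import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

public class ViewSketch {

    public static void main(String[] args) {
        SdkMeterProvider provider = SdkMeterProvider.builder()
            // Select the instrument by name...
            .registerView(
                InstrumentSelector.builder().setName("app.checkout.duration").build(),
                // ...and override its aggregation with explicit buckets.
                View.builder()
                    .setAggregation(Aggregation.explicitBucketHistogram(
                        List.of(5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0)))
                    .build())
            .registerMetricReader(
                PeriodicMetricReader.builder(OtlpGrpcMetricExporter.getDefault()).build())
            .build();

        provider.close();
    }
}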
┌────────────────────┐
│ Quarkus Service │
│ (Your Application) │
└─────────┬──────────┘
│
│ Emits measurements using
│ OTel instruments (Counter,
│ Gauge, Histogram, etc.)
▼
┌────────────────────┐
│ OTel Metrics SDK │
│ - Instruments │
│ - Metric Readers │
│ - Aggregations │
└─────────┬──────────┘
│
│ Handles:
│ - temporality (delta/cumulative)
│ - aggregation (sum, histogram)
│ - resource + attributes
▼
┌────────────────────┐
│ Metric Exporter │
│ (OTLP / Prometheus)│
└─────────┬──────────┘
│
│ Exports metrics in OTel
│ format at scrape/interval
▼
┌─────────────────────────────┐
│ OpenTelemetry Collector │
│ (optional but recommended) │
│ - processors │
│ - filters │
│ - routing │
└──────────────┬──────────────┘
│
│ Sends optimized,
│ transformed metrics
▼
┌───────────────────────────┐
│ Backend / Monitoring │
│ Prometheus / Last9 / │
│ Grafana / Any OTLP sink │
└───────────────────────────┘

Integrate OpenTelemetry Metrics with Quarkus
When you add OpenTelemetry to a Quarkus service, you're wiring in a clear metric pipeline that works the same across local development, containers, and Kubernetes. You define the instruments you need—counters, gauges, histograms—and Quarkus exposes them automatically through the OTel SDK.
This gives you a structured way to track how your service behaves at runtime without adding manual exporters or custom metric adapters.
Why Quarkus Works Well for OpenTelemetry Metrics
Quarkus gives you extensions that make setup straightforward without requiring extra libraries. You can enable the OTel SDK, metric readers, and exporters through configuration alone.
Note that by default, only tracing is enabled in the Quarkus OpenTelemetry extension—metrics are optional and can be enabled with a single property. Once you enable metrics, Quarkus publishes:
- HTTP server timings (http.server.duration)
- Request counts
- Basic JVM metrics
For more comprehensive automated metrics (including detailed JVM memory, GC activity, thread state, and CPU metrics), you can use the quarkus-micrometer-opentelemetry extension, which provides all Micrometer metrics unified with OpenTelemetry output.
Because these pieces plug directly into Quarkus's runtime, you don't need to patch handlers or wrap endpoints to get basic telemetry. The extensions handle discovery and registration for you, so your application starts emitting structured data as soon as it runs.
How Quarkus Architecture Helps You Build Observable Services
Quarkus performs most of its analysis during the build step, which helps you keep instrumentation overhead predictable. The runtime uses the already-prepared metadata to register OTel instruments, route measurements, and attach resource attributes. When your service starts, all metric instruments are ready from the first request, even in short-lived or high-churn deployments.
Native mode adds another benefit: the executable initializes quickly, and metric readers start collecting samples immediately. This is useful when you're running autoscaling workloads or jobs that start and stop frequently.
Metric Path Inside a Quarkus + OpenTelemetry Setup
┌──────────────────────────────┐
│ Your Quarkus App │
│ HTTP, JVM, custom logic │
└──────────────┬───────────────┘
│
│ Metric instruments used in code
▼
┌─────────────────────────┐
│ OpenTelemetry SDK │
│ - Instruments │
│ - Readers │
│ - Aggregations │
└──────────────┬──────────┘
│
│ Collects and prepares metric data
▼
┌─────────────────────────┐
│ OTel Export Config │
│ (interval + destination) │
└──────────────┬──────────┘
│
▼
┌─────────────────────────┐
│ Collector / Sink │
│ (receives OTel data) │
└──────────────────────────┘

Configure OpenTelemetry Metrics in Your Quarkus Application
You don't need much to start emitting OpenTelemetry metrics from a Quarkus service. Quarkus ships with extensions that hook into the OTel SDK, so once you add the right modules and configuration, your application begins producing HTTP, JVM, and custom metrics automatically.
Step 1: Add the Required Dependencies
Start by making sure your project includes the OpenTelemetry extension. If you don't have a Quarkus app yet, you can generate one:
mvn io.quarkus.platform:quarkus-maven-plugin:create

Then add the OTel extension to your pom.xml:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>

This enables the OpenTelemetry API and SDK inside your application. By default, Quarkus will automatically instrument tracing for:
- HTTP server endpoints
- REST clients
- JDBC operations
For Comprehensive Metrics: If you want full automated metrics, including detailed JVM metrics that comply with OpenTelemetry semantic conventions, you can add:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-micrometer-opentelemetry</artifactId>
</dependency>

This extension bridges Micrometer's comprehensive metric instrumentation with OpenTelemetry's export capabilities, providing unified telemetry output.
Step 2: Enable Metrics and Define Where They Should Go
You configure everything through application.properties. To turn on OpenTelemetry metrics and choose an export destination, set:
# Enable OpenTelemetry metrics
quarkus.otel.metrics.enabled=true
# Enable OTLP metrics export
quarkus.otel.exporter.otlp.metrics.endpoint=http://localhost:4317

Port 4317 is the standard OTLP gRPC endpoint.
If you're using an OpenTelemetry Collector, point this to the Collector service instead.
Step 3: Set the Service Name
You want your metrics to show up under a clear, identifiable service name. Quarkus exposes this through:
quarkus.application.name=my-quarkus-service

The OTel SDK uses this value as the service.name resource attribute for all emitted telemetry.
Step 4: Control the Export Frequency (Optional)
The default export interval works well for most cases, but you can adjust it if needed:
quarkus.otel.metric.export.interval=5000ms

For most setups, 5–10 seconds is a good starting point.
Complete Configuration Example
Here's a complete working configuration:
# Application identification
quarkus.application.name=my-quarkus-service
# Enable OpenTelemetry metrics
quarkus.otel.metrics.enabled=true
# Configure OTLP exporter
quarkus.otel.exporter.otlp.metrics.endpoint=http://localhost:4317
# Optional: control export frequency
quarkus.otel.metric.export.interval=5000ms
# Optional: disable specific automatic instrumentation if needed
# quarkus.otel.instrument.jvm-metrics=false
# quarkus.otel.instrument.http-server-metrics=false

Advanced Metric Collection
Once you enable the default OpenTelemetry setup in Quarkus, you get HTTP and basic JVM metrics out of the box (or comprehensive metrics with the Micrometer bridge). These are useful, but they rarely tell you the full story of how your application behaves.
To analyze performance and business flow effectively, you need metrics tied to your own logic—metrics for processed orders, the duration of critical paths, or the usage rate of specific features. Custom metric instruments let you capture these values precisely.
Types of Metrics You Can Use
OpenTelemetry gives you three main instruments, each meant for a specific kind of measurement.
A Counter tracks events that accumulate over time. You use it for values that only move forward, such as processed requests or errors. It's ideal when you want to understand throughput or frequency.
A Gauge, on the other hand, represents the current value of something. It's useful when numbers rise or fall throughout the lifecycle of your service—queue lengths, active sessions, or in-flight requests.
Then there's the Histogram, which records the distribution of values like latency or payload size. Instead of a single average, you get meaningful ranges and percentiles that show how your system behaves under different loads.
You'll typically reach for a Counter when you need a running total, a Gauge when you care about the current state, and a Histogram when you want a clear picture of how long something takes.
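The checkout example in the next section covers the Counter and Histogram. For completeness, here's a hedged sketch of a Gauge in Quarkus, built as an asynchronous instrument whose callback is sampled at each collection; the QueueMetrics bean and metric name are made up for illustration.

import java.util.concurrent.atomic.AtomicInteger;

import io.opentelemetry.api.metrics.Meter;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class QueueMetrics {

    private final AtomicInteger queueDepth = new AtomicInteger();

    @Inject
    public QueueMetrics(Meter meter) {
        // Asynchronous gauge: reports whatever the current depth is at collection time.
        meter.gaugeBuilder("app.jobs.queue.depth")
             .ofLongs()
             .setDescription("Jobs currently waiting to be processed")
             .buildWithCallback(measurement -> measurement.record(queueDepth.get()));
    }

    public void jobEnqueued() { queueDepth.incrementAndGet(); }

    public void jobCompleted() { queueDepth.decrementAndGet(); }
}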
Custom Metrics
This is where OpenTelemetry becomes much more than a source of system-level metrics. You can define instruments that directly track your business workflows:
- How many orders you process
- How long a checkout step takes
- How often a user triggers an expensive operation
Quarkus exposes the OTel Meter so you can register these instruments in your own code. Here's a Quarkus-native example showing a Counter and Histogram:
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.LongHistogram;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/checkout")
public class CheckoutResource {

    private final LongCounter ordersCounter;
    private final LongHistogram checkoutLatency;

    @Inject
    public CheckoutResource(Meter meter) {
        this.ordersCounter = meter
            .counterBuilder("app.orders.total")
            .setDescription("Total number of orders processed")
            .build();

        this.checkoutLatency = meter
            .histogramBuilder("app.checkout.duration")
            .setDescription("Checkout latency in milliseconds")
            .ofLongs()
            .build();
    }

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String process() {
        long start = System.currentTimeMillis();

        // business logic here
        try { Thread.sleep((long) (Math.random() * 200)); } catch (Exception ignored) {}

        ordersCounter.add(1);
        checkoutLatency.record(System.currentTimeMillis() - start);
        return "Order processed";
    }
}

With this pattern, you can measure any part of your application—API calls, batch jobs, background workers, or feature usage—in a way that directly reflects what matters to you.
Context Propagation
One of the practical strengths of OpenTelemetry is that it keeps context flowing across all signals. If a request has an active span, any metric you record inside that span automatically shares the same trace context. You can move from a metric spike to a trace, and from a trace to logs, without jumping through multiple tools or guessing which event belongs where.
For example, if your Histogram shows a sudden jump in checkout latency, you can open a trace from that timeframe and see:
- Which endpoint was slow
- What external dependency contributed to the delay
- The logs generated during that call
The signals become connected rather than isolated, giving you a complete path from symptom to root cause.
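As a hedged sketch of what this looks like in code, the method below records a histogram while a span created by the @WithSpan annotation is active; when exemplars are enabled in the SDK and your backend supports them, that recording can carry a reference back to the trace. The PaymentStep bean and metric name are illustrative.

import io.opentelemetry.api.metrics.LongHistogram;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.instrumentation.annotations.WithSpan;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class PaymentStep {

    private final LongHistogram paymentLatency;

    @Inject
    public PaymentStep(Meter meter) {
        this.paymentLatency = meter
            .histogramBuilder("app.payment.duration")
            .setDescription("Payment authorization latency in milliseconds")
            .ofLongs()
            .build();
    }

    // @WithSpan wraps this method in a span; measurements recorded while the
    // span is active share its trace context.
    @WithSpan("payment.authorize")
    public void authorize() {
        long start = System.currentTimeMillis();
        // ... call the payment provider ...
        paymentLatency.record(System.currentTimeMillis() - start);
    }
}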
Export and Visualize Your Quarkus Metrics
Once your Quarkus service starts generating metrics, your next step is simple: get them out of the JVM and into a backend you can query and visualize. The goal is to make your metrics usable, not just available.
Choose Your Exporter
You have two options when working with Quarkus: OTLP or Prometheus.
Option 1: Export via OTLP (Most Practical for Modern Setups)
If you want a clean, future-proof pipeline, you can export metrics using OTLP and route them through an OpenTelemetry Collector.
Add this to application.properties:
# Enable metrics
quarkus.otel.metrics.enabled=true
# OTLP Metrics Export
quarkus.otel.exporter.otlp.metrics.endpoint=http://otel-collector:4317
# Optional: tweaking export frequency
quarkus.otel.metric.export.interval=5000ms

Why this setup works well:
- You don't tie your service to a single backend.
- You can route metrics to Prometheus, Loki, Last9, Elasticsearch, or multiple destinations at once.
- You can add processors later (batching, filtering, dropping unwanted labels).
- You can change backends without modifying Quarkus configuration.
To complete this pipeline, your Collector needs a metrics receiver + exporter:
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  prometheus:
    endpoint: "0.0.0.0:9464"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]

Now your Quarkus app → Collector → Prometheus → Grafana.
This is the most flexible setup.
Option 2: Expose Prometheus Format Directly (Simple & Fast)
If you already have Prometheus running in your cluster and you just want it to scrape your Quarkus pod:
Add the Micrometer Prometheus registry (the quarkus-micrometer-registry-prometheus extension) and enable it:

quarkus.micrometer.export.prometheus.enabled=true

Prometheus will scrape:
/q/metrics

Note that:
- This uses the standard Prometheus format.
- You work directly with Prometheus without the Collector layer.
Use this if you want "plug-and-play" with minimal overhead.
Visualize Your Metrics
Once the data hits Prometheus or an OTLP-compatible backend, visualization becomes straightforward.
If you use Prometheus → Grafana
Add Prometheus as a Grafana datasource, then create panels like:
p95 latency:
histogram_quantile(0.95, sum(rate(app_checkout_duration_bucket[5m])) by (le))

Request throughput:

sum(rate(app_orders_total[1m]))

JVM memory usage:

jvm_memory_used_bytes{area="heap"}

If you use an OTLP-native backend (Last9, etc.)
You don't need PromQL.
Data arrives with OTel's semantic conventions, so you can directly plot:
- http.server.duration
- http.server.request.count
- process.runtime.jvm.memory.usage
- app.checkout.duration (your custom Histogram)
- app.orders.total (your custom Counter)
This is often simpler when you're working with custom business metrics.
Best Practices for OpenTelemetry Metrics in Quarkus
Granularity vs Performance
You want metrics that help you debug and optimize, not a firehose that slows the service.
Keep it practical:
- Track metrics that influence reliability or business flow—API latency, error counts, processing rates, queue depth.
- For high-volume operations, record aggregated values instead of emitting a metric on every event.
- Keep labels tight. A few well-chosen dimensions are useful; unbounded values (user IDs, UUIDs, timestamps) work better when normalized.
A good rule: if a label can produce unbounded combinations, consider normalizing it or using an alternative approach.
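As a small, hedged example of that rule, instead of attaching a raw customer ID as a label, you might attach a bounded value derived from it. The plan.tier attribute and the planTierFor lookup below are hypothetical.

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;

public class OrderLabels {

    static final AttributeKey<String> PLAN_TIER = AttributeKey.stringKey("plan.tier");

    private final LongCounter ordersCounter;

    public OrderLabels(LongCounter ordersCounter) {
        this.ordersCounter = ordersCounter;
    }

    void recordOrder(String customerId) {
        // Map the unbounded customer ID onto a small, stable set of label values.
        String tier = planTierFor(customerId);
        ordersCounter.add(1, Attributes.of(PLAN_TIER, tier));
    }

    private String planTierFor(String customerId) {
        // Placeholder: a real implementation would look this up from account data.
        return "free";
    }
}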
Naming Conventions
Clear naming saves you time when querying dashboards.
Use consistent patterns:
- Prefix app-specific metrics: app.checkout.duration, app.orders.total
- Counters use event-focused names: *.total, *.count
- Gauges reflect current state: *.active, *.size
- Include units when helpful: query_time_ms, payload_bytes
This keeps your metrics predictable across services as your system grows.
Continuous Refinement
Your metrics can evolve with your code.
A lightweight workflow helps:
- Review metrics periodically to see what's being queried.
- Add metrics when a new feature or dependency introduces new monitoring needs.
- After an incident, check whether additional metrics would have helped with detection or diagnosis.
Consider your metrics as part of your application's observability surface—they benefit from periodic updates to stay useful.
Overcome Common Challenges in OpenTelemetry Metrics Implementation
Quarkus and OpenTelemetry work well together, but you may still want a few adjustments to keep the metric pipeline efficient and accurate. The points below give you practical guidance without adding overhead to your service.
Practical Ways to Optimize Performance
OpenTelemetry metric collection remains lightweight when you structure it well. A few focused decisions help you maintain consistent performance:
Batch exports: Set an appropriate export interval with quarkus.otel.metric.export.interval. Larger intervals reduce the number of outbound metric batches.
Constrain label sets: Labels with broad value ranges expand metric dimensions quickly. Use stable, finite label values to keep the series count predictable.
Lightweight custom instrumentation: A metric update works best when it executes in constant time. Keep the surrounding code simple and minimize extra I/O or complex operations.
Local aggregation for very busy paths: For paths with extremely high event rates, you can accumulate values in memory and record them periodically instead of recording each event individually.
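As a sketch of that last point, assuming the quarkus-scheduler extension is available, you could accumulate counts in a LongAdder on the hot path and flush them into an OTel counter on a schedule; the HotPathMetrics bean and metric name are illustrative.

import java.util.concurrent.atomic.LongAdder;

import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class HotPathMetrics {

    private final LongAdder processed = new LongAdder();   // cheap, contention-friendly accumulator
    private final LongCounter processedCounter;

    @Inject
    public HotPathMetrics(Meter meter) {
        this.processedCounter = meter
            .counterBuilder("app.events.processed")
            .setDescription("Events processed on the hot path")
            .build();
    }

    // Called on the hot path: constant-time, no OTel work here.
    public void onEvent() {
        processed.increment();
    }

    // Flush the accumulated count into the OTel counter once per second.
    @Scheduled(every = "1s")
    void flush() {
        long delta = processed.sumThenReset();
        if (delta > 0) {
            processedCounter.add(delta);
        }
    }
}

An asynchronous counter built with buildWithCallback could achieve a similar effect without the scheduler.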
These approaches help you maintain a clear and efficient metric signal as your service grows.
Troubleshooting Data Discrepancies: A Practical Checklist
When a dashboard shows unexpected values or missing series, you can isolate the cause by examining each stage of the pipeline.
1. Inspect exported output: Enable debug-level output in the OpenTelemetry Collector to view the raw metric data. This confirms the values emitted by your Quarkus service.
2. Review Collector configuration: A mismatch in receiver or exporter definitions can alter the metric flow. Check that:
- The OTLP receiver accepts metrics.
- The metrics pipeline references the correct receiver and exporter
- Processors preserve attributes essential to your queries
3. Verify backend ingestion: Prometheus and other backends may skip samples with invalid timestamps or unexpected label sets. Check backend logs or status indicators for ingestion details.
4. Ensure consistent time sources: Different clock offsets between your service, the Collector, and the backend can create gaps or misaligned samples. Synchronized time across components helps maintain data consistency.
5. Validate instrument behavior: Double-check that each instrument reports what you expect:
- Counters move upward
- Gauges reflect current values
- Histograms record realistic ranges
These checks help you confirm that your metric data matches actual service behavior.
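One way to sanity-check instrument behavior in isolation is a small unit test against the standalone SDK with an in-memory reader; this is a sketch that assumes the opentelemetry-sdk-testing artifact and bypasses Quarkus entirely.

import static org.junit.jupiter.api.Assertions.assertEquals;

import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.data.MetricData;
import io.opentelemetry.sdk.testing.exporter.InMemoryMetricReader;
import org.junit.jupiter.api.Test;

class CounterBehaviorTest {

    @Test
    void counterOnlyMovesUp() {
        InMemoryMetricReader reader = InMemoryMetricReader.create();
        SdkMeterProvider provider = SdkMeterProvider.builder()
            .registerMetricReader(reader)
            .build();
        Meter meter = provider.get("test");

        LongCounter orders = meter.counterBuilder("app.orders.total").build();
        orders.add(2);
        orders.add(3);

        // The collected sum should reflect both increments.
        MetricData data = reader.collectAllMetrics().iterator().next();
        long total = data.getLongSumData().getPoints().iterator().next().getValue();
        assertEquals(5, total);

        provider.close();
    }
}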
Where Last9 Fits in Your OTel Pipeline
Once your Quarkus service exports metrics over OTLP, Last9 receives them in their native OpenTelemetry format. You don't need custom exporters or extra instrumentation—the same Counter, Gauge, and Histogram definitions you add in code arrive exactly as expected.
Last9 handles detailed metric series without timeouts or drops, which is useful when your Quarkus service emits high-volume latency histograms, busy HTTP metrics, or business-specific counters. You also get a clear view of label patterns and series growth, making it easier to adjust your metric design when needed.
To extend this setup, you can configure ingestion keys, OTLP endpoints, and Collector routes the same way you would for traces or logs. The platform supports:
- Direct OTLP/gRPC and OTLP/HTTP ingestion
- Multi-sink Collector pipelines
- Per-service retention tiers
- Detailed queries aligned with OTel semantic conventions
This lets you keep your Quarkus instrumentation simple while relying on Last9 to store, query, and visualize the data with consistent behavior across metrics, traces, and logs.
Start for free today or book some time with us for a detailed walkthrough!
FAQs
How does Quarkus integrate with OpenTelemetry?
Quarkus integrates with OpenTelemetry through the quarkus-opentelemetry extension. The extension adds the OTel SDK to your application and wires it into Quarkus's HTTP layer, REST client, and runtime internals. Once enabled, Quarkus provides automatic span creation for incoming/outgoing requests and exposes the OTel Meter and Tracer APIs for custom instrumentation. You configure it through application.properties, typically with OTLP exporters for metrics and traces.
Is there any way to also utilize OpenTelemetry metrics and push them to the same OTLP collector as the traces?
Yes. Quarkus can export traces and metrics to the same OTLP collector endpoint. You enable both signals in application.properties and point them to the same OTLP URL:
quarkus.otel.traces.enabled=true
quarkus.otel.metrics.enabled=true
quarkus.otel.exporter.otlp.endpoint=http://otel-collector:4317

The Collector then routes them through its metrics and traces pipelines.
Is there a GitHub issue to keep track of adding OTel Metric support to the Quarkus OpenTelemetry plugin?
OpenTelemetry metrics support has been added to Quarkus as of version 3.x, and the original tracking issue (#39033) has been resolved.
Quarkus now supports OTel metrics through two key extensions:
- quarkus-opentelemetry — provides basic OpenTelemetry metrics.
- quarkus-micrometer-opentelemetry — provides full Micrometer metrics with OpenTelemetry export.
For ongoing updates and additional improvements, you can follow the Quarkus OpenTelemetry label on GitHub (quarkusio/quarkus), where metric-related changes continue to land.
How can I set up OpenTelemetry metrics in a Quarkus application?
You add the OTel extension, enable the metrics signal, and set your OTLP endpoint. A minimal working setup looks like:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>

# Enable metrics
quarkus.otel.metrics.enabled=true
# Metrics export
quarkus.otel.exporter.otlp.metrics.endpoint=http://otel-collector:4317
# Optional: export interval
quarkus.otel.metric.export.interval=5000ms
# Service name
quarkus.application.name=my-quarkus-service

With this configuration, Quarkus emits default runtime metrics and allows you to define custom Counters, Gauges, and Histograms through the OTel Meter API. For comprehensive automated metrics, you can add the quarkus-micrometer-opentelemetry extension.