
Instrument Jenkins With OpenTelemetry

Instrument Jenkins with OpenTelemetry to understand pipeline behavior, stage latency, and deploy steps using a single telemetry flow.

Nov 27th, ‘25

You can instrument Jenkins with OpenTelemetry using the official plugin and an OpenTelemetry Collector, then send the data to a backend like Last9 to understand where pipeline latency and failures actually originate.

Jenkins provides job status and console logs, but it doesn't show how time is distributed across stages, agents, plugins, and external systems. OpenTelemetry fills that gap by emitting traces, metrics, and logs in a standard format that any OTLP-compatible backend can process.

With tracing enabled, you can answer questions like: Which stage dominates runtime? Where do failures cluster? Which pipelines consume the most compute resources? All of this becomes visible without manually digging through logs.

The OpenTelemetry community has also introduced CI/CD semantic conventions, making pipeline spans and attributes consistent across different tools and platforms.

If you already collect application telemetry with OpenTelemetry, adding Jenkins to the same pipeline gives you end-to-end visibility from git push to deployment and production traffic. A backend that supports CI signals, such as Last9, can treat Jenkins as part of your reliability, performance, and cost analysis rather than an isolated system.

Why You Should Care About Jenkins + OpenTelemetry

You already know when a pipeline is red. The missing part is why it is slow or failing.

With Jenkins sending OpenTelemetry data, you can typically resolve the following pain points:

Slow pipelines: Identify which stages, agents, or external calls dominate total duration.

Flaky runs: Group failures by stage, environment, or test suite instead of by individual job.

Agent waste: See which pipelines consume the most agent minutes and where parallelization helps.

Change impact: Correlate a spike in failed builds with a specific deploy, infra change, or dependency issue.

The value comes from joining Jenkins data with application and infrastructure telemetry. For example, you can see that a particular pipeline slowdown matches higher latency from a package registry or a noisy Kubernetes node. Last9 can overlay this Jenkins signal with service metrics and SLOs, so you see whether CI instability is starting to affect production reliability.

A Simple Jenkins → OpenTelemetry Collector → Backend Architecture

You do not have to tie Jenkins to any specific vendor. The easiest approach is to send everything to an OpenTelemetry Collector and let the Collector forward the data to the backends you use.

Here's how the flow typically works:

  1. The Jenkins OpenTelemetry plugin exports traces (and optionally metrics and logs) over OTLP.
  2. An OpenTelemetry Collector receives this data.
  3. The Collector forwards it to one or more backends (Last9, Jaeger, Tempo, Elastic, etc.).

A basic Collector configuration for traces looks like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch: {}

exporters:
  otlphttp/last9:
    # Full per-signal URL, so the exporter does not append another /v1/traces
    traces_endpoint: https://ingest.last9.io/v1/traces
    headers:
      authorization: "Bearer <LAST9_API_TOKEN>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/last9]

This setup keeps the Jenkins side simple. You point it to one OTLP endpoint, and the Collector handles routing, retries, and any backend-specific details.
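
If you don't already run a Collector, a quick way to try this config locally is the contrib Docker image. The image name, ports, and default config path below are the usual defaults at the time of writing, but check them against your distribution:

# Run the contrib Collector with the config above mounted in
docker run --rm \
  -p 4317:4317 -p 4318:4318 \
  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml" \
  otel/opentelemetry-collector-contrib:latest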

Configure the Jenkins OpenTelemetry Plugin

The Jenkins OpenTelemetry plugin is the main integration point, and once you configure it at the controller level, every job and pipeline automatically follows the same setup—no per-job changes needed.

1. Install the plugin

Go to Manage Jenkins → Manage Plugins → Available, search for OpenTelemetry, install it, and restart if Jenkins asks.

If it doesn't show up in the Available tab, check the Updates tab or verify your update center configuration.
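
If you bake plugins into a controller image or manage Jenkins as code, the plugin can also be installed non-interactively. A sketch using the jenkins-plugin-cli tool bundled with the official Jenkins Docker image (the plugin ID is assumed to be opentelemetry; pin a version in practice):

# In a Dockerfile layer or bootstrap script for the controller image
jenkins-plugin-cli --plugins opentelemetry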

2. Configure connection and identity

Open Manage Jenkins → Configure System → OpenTelemetry.

Endpoint

Set the OTLP receiver address of your OpenTelemetry Collector, for example:

  • http://otel-collector:4317 (gRPC)
  • http://otel-collector:4318 (HTTP)

Pick the protocol that matches how your Collector is configured. gRPC is efficient and common, while HTTP is helpful when working behind proxies or when gRPC is blocked.

Service name

Use a stable name such as:

  • jenkins-ci
  • jenkins-controller
  • jenkins-prod

This becomes the service.name for all emitted spans, so choose something you can rely on for dashboards and filters.

Resource attributes

Add resource attributes that help you sort and query Jenkins traces later. Common examples include:

  • jenkins.instance=prod-main
  • env=prod
  • region=ap-south-1
  • team=platform

These resource values appear on every span, which makes it easier to separate data from multiple controllers, regions, or environments.
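
If you manage Jenkins with Configuration as Code, these settings can live in a JCasC YAML file instead of the UI. The field names below are an assumption based on common plugin JCasC exports and vary by plugin version, so verify them against the schema your controller exports:

unclassified:
  openTelemetry:
    endpoint: "http://otel-collector:4317"
    serviceName: "jenkins-ci"
    # Covers the next step as well: expose OTEL_* variables to builds
    exportOtelConfigurationAsEnvironmentVariables: true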

3. Environment variable export

Enable "Expose OpenTelemetry configuration as environment variables".

This is useful because OpenTelemetry SDKs and OTel-aware build and test tools read these standard environment variables without extra configuration. As a result:

  • Trace context flows through pipeline steps, test runners, integration scripts, and language SDKs.
  • Tools like Maven or Gradle reuse the same endpoint and headers without extra setup.
  • End-to-end traces span the whole CI run instead of stopping at the controller boundary.

Once this is on, Jenkins injects variables such as:

  • OTEL_EXPORTER_OTLP_ENDPOINT
  • OTEL_SERVICE_NAME
  • OTEL_RESOURCE_ATTRIBUTES

Any OpenTelemetry SDK or OTel-instrumented build or test tool running inside the job picks these up automatically.
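
A quick way to confirm the injection is a throwaway stage that prints the OTel variables on an agent; nothing here is backend-specific, and the grep pattern just filters the variables listed above:

stage('Check OTel env') {
  steps {
    // Should list OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_SERVICE_NAME, and friends
    sh label: 'print-otel-env', script: 'env | grep ^OTEL_ || true'
  }
}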

Making sure it works in practice

With this configuration, you shouldn't need to change individual jobs unless they override environment variables or manually configure their own tracing. Both scripted and declarative pipelines emit spans automatically, including stage boundaries, agent transitions, queue wait time, and shell step execution. If you want extra detail, you can still add custom spans inside pipeline steps.

To confirm everything is wired correctly, you can check the Collector logs for incoming OTLP traffic and look for your configured service.name in your backend (Last9, Jaeger, Tempo, Elastic, etc.).
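
If nothing shows up, a temporary debug exporter in the Collector makes it easy to confirm whether spans are arriving at all. The debug exporter is available in recent Collector releases (older versions call it logging):

exporters:
  debug:
    verbosity: normal

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/last9, debug]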

Build Clean, High-Signal Pipeline Traces

The plugin can emit spans for jobs, stages, and steps, but how you structure and name your pipelines decides how readable those traces are. Clear labels and consistent attributes make analysis much easier in any backend, including Last9.

1. Use explicit labels for steps

In both scripted and declarative pipelines, set the label field so span names clearly describe the action being taken.

stage('Build') {
  steps {
    sh label: 'maven-package', script: './mvnw -B -Dmaven.test.skip package'
  }
}

stage('Test') {
  steps {
    sh label: 'unit-tests', script: './mvnw -B test'
  }
}

Use labels that describe the step's purpose rather than step-specific values.

Values that change on every run—branch names, commit hashes, or ticket identifiers—belong in attributes, where they can be queried and filtered cleanly. Keeping the label stable simply makes timelines, summaries, and comparisons easier to read.

2. Attach useful attributes

Use environment variables and pipeline variables to attach attributes such as:

  • service
  • repo
  • branch
  • team
  • deployment_env

Scripts and tools can read these values and add them to spans or logs so that every run carries a consistent metadata set. A practical pattern is to wrap your build or test commands in small helper scripts that:

  1. Read a standard set of environment variables
  2. Attach a consistent attribute schema
  3. Execute the underlying tool

This gives any backend, including Last9, structured metadata that makes grouping pipelines by service, branch, or environment straightforward, without affecting how steps are labeled.
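
A minimal sketch of such a wrapper, assuming the plugin already exposes the standard OTel environment variables; the script name and the SERVICE, REPO, BRANCH, TEAM, and DEPLOYMENT_ENV variables it reads are illustrative:

#!/usr/bin/env sh
# run-traced.sh: attach a consistent attribute set, then run the real command.
set -eu

EXTRA_ATTRS="service=${SERVICE},repo=${REPO},branch=${BRANCH},team=${TEAM},deployment_env=${DEPLOYMENT_ENV}"

# Merge with whatever Jenkins already injected
export OTEL_RESOURCE_ATTRIBUTES="${OTEL_RESOURCE_ATTRIBUTES:+${OTEL_RESOURCE_ATTRIBUTES},}${EXTRA_ATTRS}"

# Run the underlying build or test tool, which reads the OTel variables itself
exec "$@"

A pipeline step then calls it as, for example, sh label: 'unit-tests', script: './run-traced.sh ./mvnw -B test', and every instrumented tool in that step inherits the same attribute set.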

Extend Beyond Jenkins: Build, Test, and Deploy Spans

Jenkins spans give you the top-level view of a pipeline, but long build or test phases often need deeper visibility. Adding spans from the tools inside those stages gives you a complete picture of where time is spent and where failures cluster.

1. Build tools

For Java projects, the Maven OpenTelemetry extension or Gradle plugins can emit spans for compilation, packaging, and test phases. These spans appear nested under the Jenkins stage span, making it easy to understand which part of the build contributes most to the overall duration.
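
One low-friction way to try the Maven extension from a pipeline is to load it on the command line instead of editing the project; the jar path here is a placeholder, and registering the extension in .mvn/extensions.xml works just as well:

stage('Build') {
  steps {
    // Load the OpenTelemetry Maven extension for this run only; it reuses
    // the OTEL_* variables the Jenkins plugin exports to the environment.
    sh label: 'maven-package', script: './mvnw -B -Dmaven.ext.class.path=/opt/otel/opentelemetry-maven-extension.jar -Dmaven.test.skip package'
  }
}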

Other ecosystems can use language SDKs or small CLI wrappers to instrument build commands. This works well for Go, Node.js, Python, Rust, or any workflow driven by a command-line tool.

2. Tests

Instrumentation for test frameworks—JUnit, pytest, and similar tools—lets you see which suites, files, or test groups account for most of the runtime or the highest failure rates. This helps with decisions such as splitting test groups, running selected tests in parallel, or isolating unstable test sets.

3. Deployment scripts

If your deployment logic is shell-based, tools like otel-cli or lightweight wrapper scripts can emit spans for each step. Instead of a single opaque log block, deployment phases appear as structured spans that can be correlated with cluster activity, application traces, and infrastructure metrics.
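
As a sketch, a shell deploy step wrapped with otel-cli could look like this; the span names, service name, and deploy commands are placeholders, and flag spellings are worth checking against the otel-cli version installed on your agents:

# otel-cli reads OTEL_EXPORTER_OTLP_ENDPOINT from the environment the plugin exports
otel-cli exec --service jenkins-deploy --name "helm-upgrade" -- \
  helm upgrade --install my-app ./chart -f values-prod.yaml

otel-cli exec --service jenkins-deploy --name "smoke-test" -- \
  ./scripts/smoke-test.sh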

When Jenkins, build tools, test frameworks, and deployment scripts all send data through the same Collector and into any backend, you get a continuous trace that covers the entire path: pipeline trigger → build → tests → deploy → application spans. This end-to-end view makes it much easier to understand how changes progress from CI to runtime behavior.

Operate This Setup in Production

Once this runs in a busy CI environment, you'll start caring about three practical areas: volume, cost, and consistency. Most of this is handled through good naming, clear attributes, and Collector-side controls.

1. Control volume and cardinality

Use stable span names and place run-specific values in attributes. This keeps traces easy to read and makes aggregated views cleaner.

If certain pipelines generate large amounts of low-value data, apply sampling in the Collector. You can keep full details for important pipelines and reduce noise for others.
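
A sketch of Collector-side sampling that keeps every trace from a production deploy pipeline and samples the rest; tail_sampling ships with the contrib distribution, and the pipeline name and percentage are placeholders:

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      # Always keep traces from the production deploy pipeline
      - name: keep-prod-deploys
        type: string_attribute
        string_attribute:
          key: ci.pipeline.name
          values: [deploy-prod]
      # Sample everything else at 20%
      - name: sample-the-rest
        type: probabilistic
        probabilistic:
          sampling_percentage: 20

Remember to add tail_sampling to the traces pipeline's processors list for it to take effect.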

For logs, keep the high-signal entries flowing through OTLP and store full debug logs in a cheaper location. This reduces pressure on the telemetry pipeline without losing debugging detail.

Backends that handle high-cardinality data well, such as Last9, let you keep labels like team, repo, feature_flag, or pipeline_type without worrying about hitting limits or degrading performance.

2. Standardize attribute keys

Define a small, predictable attribute schema and use it everywhere:

  • service.name
  • ci.pipeline.name
  • ci.pipeline.id
  • repo
  • branch
  • env
  • team

When attributes stay consistent, dashboards, queries, and SLOs become reusable. In Last9, this makes views like pipeline latency, failure rate per pipeline, or agent utilization per team straightforward to build and maintain.

3. Reuse one telemetry pipeline

Keep a single OpenTelemetry pipeline for applications, infrastructure, and Jenkins. This avoids separate configuration layers and keeps trace context connected across systems.

If you need different destinations, use Collector routing rules. The same incoming OTel stream can go to multiple backends—for example, Last9 for analysis, plus a raw trace store for long-term retention.
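
In the simplest case, fan-out is just additional exporters on the same traces pipeline; the second exporter below is a placeholder for whatever raw trace store you keep for retention:

exporters:
  otlphttp/last9:
    traces_endpoint: https://ingest.last9.io/v1/traces
    headers:
      authorization: "Bearer <LAST9_API_TOKEN>"
  otlp/archive:
    # Placeholder for a long-term store such as a self-hosted Tempo or Jaeger
    endpoint: tempo:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/last9, otlp/archive]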

Why Last9 Is a Better Fit for Jenkins + OpenTelemetry

Most backends can receive OTLP data from Jenkins, but they store the spans as generic service telemetry. Pipeline runs, stage boundaries, queue delays, and agent transitions end up looking like ordinary spans, which means you still have to build CI-specific views, filters, and alert conditions on your own.

Last9 takes the Jenkins structure that comes through OpenTelemetry and presents it using concepts that map directly to CI workflows. Pipelines, stages, run durations, failure points, and retry patterns appear as explicit CI signals, not as untyped spans. This gives you ready-to-use dashboards for latency, stability, and run history without extra modeling or custom queries.

The platform's support for high-cardinality telemetry also fits CI workloads well. You can tag Jenkins spans with team, repo, branch, service, feature_flag, environment, or any pipeline-specific dimension, and still query them consistently. This makes it straightforward to answer questions such as:

  • Which pipelines are the least stable across teams?
  • Which repos or services correlate with the most failed deploys?
  • Where is CI compute usage spiking?
  • Which pipeline runs changed behavior after an infra or dependency update?

Since this CI data lives in the same system as your application metrics, traces, and SLOs, you can connect pipeline performance with what happens in production. Slow builds, failed deploys, and runtime issues no longer live in separate tools.

A practical next step is to instrument one Jenkins controller with the OpenTelemetry plugin, send data to your Collector, and feed it into Last9. You'll get a clear view of pipeline behavior—duration patterns, failure modes, and deployment traces—alongside the production signals you already track.

Try Last9 today or book some time with our team for a detailed walkthrough!

FAQs

Do you need to change every pipeline to use OpenTelemetry?

No. Once the plugin is configured centrally, you start getting spans for jobs and stages immediately. You only need to touch pipeline definitions if you want better span names, custom attributes, or deeper instrumentation in specific steps.

Can you use Jenkins OpenTelemetry with existing tools like Jaeger or Grafana?

Yes. Jenkins sends OTLP to the Collector, and the Collector can export to Jaeger, Grafana Tempo, Elastic APM, or any other supported exporter. You can use Last9 alongside those tools to get CI-aware SLOs and cost views while still keeping existing trace explorers.

What is the runtime overhead on Jenkins?

The plugin is designed for production use and adds minimal overhead compared to typical build and test workloads. The main impact is the network I/O of telemetry export, which the Collector batches and buffers; for most teams this is negligible relative to the value of faster debugging and better resource usage.

What if you are not using OpenTelemetry elsewhere yet?

You can still start with Jenkins plus a single Collector and a single backend. If you later add OpenTelemetry for applications or infrastructure, they can reuse the same Collector and exporters, so the initial Jenkins work is not wasted.

Authors
Anjali Udasi

Helping to make the tech a little less intimidating.
