
Kubernetes Resource Attributes

Enrich Ruby OpenTelemetry traces with Kubernetes semantic convention attributes using the Downward API

When a Ruby application (Rails, Sinatra, Roda, Grape, or plain Rack) runs on Kubernetes, OpenTelemetry traces benefit from k8s.* resource attributes — they let you filter spans by pod, namespace, node, or deployment in Last9.

The OpenTelemetry Ruby SDK does not ship a dedicated Kubernetes resource detector gem. The recommended approach is the Kubernetes Downward API, which injects pod metadata as environment variables. The SDK reads OTEL_RESOURCE_ATTRIBUTES automatically at startup and merges those values into every span’s resource.

This guide works for any Ruby OpenTelemetry setup — no code changes beyond what your existing Sinatra / Rails / Roda guide already covers.

A complete working example is available at opentelemetry-examples/ruby/k8s-downward-api.

How it works

  1. Kubernetes exposes pod metadata (name, UID, namespace, node, IP, labels) as environment variables via valueFrom.fieldRef in the container spec.
  2. A composite OTEL_RESOURCE_ATTRIBUTES env var interpolates those values into OpenTelemetry’s key=value,key=value format.
  3. The Ruby OpenTelemetry SDK parses OTEL_RESOURCE_ATTRIBUTES during OpenTelemetry::SDK.configure and attaches the attributes to every exported span.
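
The `key=value,key=value` parsing in step 3 can be approximated with a few lines of Ruby. This is an illustrative sketch of the format, not the SDK's actual implementation:

```ruby
# Approximate how an OTEL_RESOURCE_ATTRIBUTES string becomes resource
# attributes: comma-separated key=value pairs, surrounding whitespace trimmed.
def parse_resource_attributes(raw)
  raw.to_s.split(',').each_with_object({}) do |pair, attrs|
    key, value = pair.split('=', 2)
    next if key.nil? || value.nil?

    attrs[key.strip] = value.strip
  end
end

env_value = 'k8s.namespace.name=default, k8s.pod.name=ruby-app-6d5f9c-abcde'
parse_resource_attributes(env_value)
# => {"k8s.namespace.name"=>"default", "k8s.pod.name"=>"ruby-app-6d5f9c-abcde"}
```

Because the SDK reads this variable on its own during `OpenTelemetry::SDK.configure`, no application code needs to change.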

Deployment manifest

Add the following env block to your container in your Deployment (or StatefulSet / DaemonSet) manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruby-app
  labels:
    app.kubernetes.io/name: ruby-app
    app.kubernetes.io/version: "1.0.0"
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ruby-app
        app.kubernetes.io/version: "1.0.0"
    spec:
      containers:
        - name: app
          image: your-ruby-app:latest
          env:
            # OTel exporter - existing Last9 config
            - name: OTEL_SERVICE_NAME
              value: ruby-app
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: https://otlp.last9.io
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "Authorization=<BASIC_AUTH_HEADER>"
            # Downward API - pod metadata as env vars
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_NAMESPACE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            # Deployment name via pod label - fieldRef cannot walk owner refs
            - name: K8S_DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/name']
            - name: K8S_APP_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/version']
            # Hardcoded - not available via the Downward API
            - name: K8S_CLUSTER_NAME
              value: production
            - name: K8S_CONTAINER_NAME
              value: app
            # Compose OTEL_RESOURCE_ATTRIBUTES - the SDK reads this automatically
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: >-
                k8s.cluster.name=$(K8S_CLUSTER_NAME),
                k8s.namespace.name=$(K8S_NAMESPACE_NAME),
                k8s.node.name=$(K8S_NODE_NAME),
                k8s.pod.name=$(K8S_POD_NAME),
                k8s.pod.uid=$(K8S_POD_UID),
                k8s.pod.ip=$(K8S_POD_IP),
                k8s.container.name=$(K8S_CONTAINER_NAME),
                k8s.deployment.name=$(K8S_DEPLOYMENT_NAME),
                host.ip=$(K8S_HOST_IP),
                service.version=$(K8S_APP_VERSION),
                service.namespace=$(K8S_NAMESPACE_NAME),
                service.instance.id=$(K8S_NAMESPACE_NAME).$(K8S_POD_NAME).$(K8S_CONTAINER_NAME)
```

Attributes emitted

| Attribute | Source |
| --- | --- |
| k8s.cluster.name | hardcoded |
| k8s.namespace.name | metadata.namespace |
| k8s.node.name | spec.nodeName |
| k8s.pod.name | metadata.name |
| k8s.pod.uid | metadata.uid |
| k8s.pod.ip | status.podIP |
| k8s.container.name | hardcoded |
| k8s.deployment.name | pod label app.kubernetes.io/name |
| host.ip | status.hostIP |
| service.version | pod label app.kubernetes.io/version |
| service.namespace | metadata.namespace |
| service.instance.id | composed (namespace.pod.container) |
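
The composed service.instance.id is just the three Downward API values joined with dots. A hypothetical boot-time sanity check in Ruby (env var names follow the manifest above) makes the composition explicit:

```ruby
# Hypothetical sanity check: rebuild service.instance.id from the Downward API
# env vars declared in the manifest and fail fast if any are missing.
REQUIRED = %w[K8S_NAMESPACE_NAME K8S_POD_NAME K8S_CONTAINER_NAME].freeze

def service_instance_id(env = ENV)
  missing = REQUIRED.reject { |k| env[k] && !env[k].empty? }
  raise "missing Downward API env vars: #{missing.join(', ')}" unless missing.empty?

  env.values_at(*REQUIRED).join('.')
end

# Simulated pod environment (values the Downward API would inject):
env = {
  'K8S_NAMESPACE_NAME' => 'default',
  'K8S_POD_NAME'       => 'ruby-app-6d5f9c-abcde',
  'K8S_CONTAINER_NAME' => 'app'
}
service_instance_id(env)
# => "default.ruby-app-6d5f9c-abcde.app"
```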

Verify

Port-forward the pod and trigger a request:

```shell
kubectl port-forward deploy/ruby-app 4567:4567
curl localhost:4567/your-endpoint
```

In Last9, open the Traces view and inspect any span — the Resource Attributes panel should list every k8s.* key configured above.

Alternatively, run an OpenTelemetry Collector with the debug exporter to print received spans; look for the Resource attributes section.
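
A minimal Collector configuration for that check might look like the following sketch (the OTLP endpoint is illustrative; adjust receivers to match your deployment):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  # Prints every received span, including its resource attributes, to stdout.
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```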

Limitations of the Downward API

The fieldRef mechanism cannot walk owner references or query cluster state, so the following attributes need a different approach:

  • k8s.replicaset.name
  • k8s.deployment.name (without a pod label)
  • k8s.statefulset.name, k8s.daemonset.name, k8s.job.name, k8s.cronjob.name
  • k8s.node.uid
  • k8s.pod.start_time
  • k8s.cluster.uid

For these, use the OpenTelemetry Collector's k8sattributes processor. It queries the Kubernetes API server (which requires RBAC permissions to read pods, namespaces, and nodes) and enriches spans in transit, so your Ruby application does not need to know anything about Kubernetes.

A hybrid setup — Downward API for pod-level basics, Collector processor for owner chain — is what most production deployments use.
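
A sketch of such a processor configuration (the metadata keys shown are a subset, and the RBAC ServiceAccount setup is omitted):

```yaml
processors:
  k8sattributes:
    # Authenticate to the API server with the Collector pod's ServiceAccount.
    auth_type: serviceaccount
    extract:
      metadata:
        # Owner-chain and cluster-state attributes the Downward API cannot provide.
        - k8s.replicaset.name
        - k8s.deployment.name
        - k8s.statefulset.name
        - k8s.daemonset.name
        - k8s.job.name
        - k8s.cronjob.name
        - k8s.pod.start_time
    pod_association:
      # Match incoming spans to pods via the k8s.pod.uid set by the Downward API.
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
```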

Troubleshooting

  • Attributes missing on spans: verify env vars are resolved inside the pod with kubectl exec deploy/ruby-app -- env | grep K8S_, and confirm OTEL_RESOURCE_ATTRIBUTES contains the expected comma-separated list.
  • k8s.deployment.name shows empty string: the pod must carry the app.kubernetes.io/name label — Kubernetes does not auto-propagate deployment names to pods.
  • Downward API expansion produces literal $(VAR_NAME): env vars referenced via $(...) must be declared earlier in the same container’s env list.
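
The ordering rule in the last bullet can be illustrated with a small simulation of $(VAR) expansion. This is a sketch of the observable behavior, not kubelet's actual algorithm: entries resolve top to bottom, and a reference to a not-yet-declared variable is left as literal text.

```ruby
# Simulate Kubernetes env expansion: each entry may reference variables
# declared EARLIER in the same list via $(NAME); later or unknown names
# remain literal. Illustrative only.
def expand_env(entries)
  resolved = {}
  entries.each do |name, value|
    resolved[name] = value.gsub(/\$\(([A-Za-z0-9_]+)\)/) do
      resolved.fetch(Regexp.last_match(1)) { |k| "$(#{k})" }
    end
  end
  resolved
end

ordered  = [['K8S_POD_NAME', 'web-0'],
            ['OTEL_RESOURCE_ATTRIBUTES', 'k8s.pod.name=$(K8S_POD_NAME)']]
reversed = ordered.reverse

expand_env(ordered)['OTEL_RESOURCE_ATTRIBUTES']  # => "k8s.pod.name=web-0"
expand_env(reversed)['OTEL_RESOURCE_ATTRIBUTES'] # => "k8s.pod.name=$(K8S_POD_NAME)"
```

This is why the Downward API variables are declared before OTEL_RESOURCE_ATTRIBUTES in the manifest above.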

Get in touch on Discord or Email if you run into issues.