Kubernetes Resource Attributes
Enrich Ruby OpenTelemetry traces with Kubernetes semantic convention attributes using the Downward API
When a Ruby application (Rails, Sinatra, Roda, Grape, or plain Rack) runs on Kubernetes, OpenTelemetry traces benefit from k8s.* resource attributes — they let you filter spans by pod, namespace, node, or deployment in Last9.
The OpenTelemetry Ruby SDK does not ship a dedicated Kubernetes resource detector gem. The recommended approach is the Kubernetes Downward API, which injects pod metadata as environment variables. The SDK reads OTEL_RESOURCE_ATTRIBUTES automatically at startup and merges those values into every span’s resource.
This guide works for any Ruby OpenTelemetry setup — no code changes beyond what your existing Sinatra / Rails / Roda guide already covers.
A complete working example is available at opentelemetry-examples/ruby/k8s-downward-api.
How it works
- Kubernetes exposes pod metadata (name, UID, namespace, node, IP, labels) as environment variables via `valueFrom.fieldRef` in the container spec.
- A composite `OTEL_RESOURCE_ATTRIBUTES` env var interpolates those values into OpenTelemetry's `key=value,key=value` format.
- The Ruby OpenTelemetry SDK parses `OTEL_RESOURCE_ATTRIBUTES` during `OpenTelemetry::SDK.configure` and attaches the attributes to every exported span.
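To make the format concrete, here is a small illustration in plain Ruby of how a comma-separated `OTEL_RESOURCE_ATTRIBUTES` value decomposes into individual attributes. This is not the SDK's internal parser, just a sketch of the `key=value,key=value` shape it consumes:

```ruby
# Illustration only: not the SDK's internal parser. Shows how the
# comma-separated OTEL_RESOURCE_ATTRIBUTES value decomposes into
# individual resource attributes.
raw = "k8s.cluster.name=production, k8s.namespace.name=default, " \
      "k8s.pod.name=ruby-app-6d5f7c9b8d-x2k4q"

attributes = raw.split(",").to_h do |pair|
  key, value = pair.strip.split("=", 2)
  [key, value]
end

puts attributes["k8s.pod.name"] # => "ruby-app-6d5f7c9b8d-x2k4q"
```

In the cluster you never write this string by hand; the manifest below composes it from Downward API env vars.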
Deployment manifest
Add the following env block to your container in your Deployment (or StatefulSet / DaemonSet) manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruby-app
  labels:
    app.kubernetes.io/name: ruby-app
    app.kubernetes.io/version: "1.0.0"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ruby-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ruby-app
        app.kubernetes.io/version: "1.0.0"
    spec:
      containers:
        - name: app
          image: your-ruby-app:latest
          env:
            # OTel exporter — existing Last9 config
            - name: OTEL_SERVICE_NAME
              value: ruby-app
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: https://otlp.last9.io
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "Authorization=<BASIC_AUTH_HEADER>"

            # Downward API — pod metadata as env vars
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_NAMESPACE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP

            # Deployment name via pod label — fieldRef cannot walk owner refs
            - name: K8S_DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/name']
            - name: K8S_APP_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/version']

            # Hardcoded — not available via Downward API
            - name: K8S_CLUSTER_NAME
              value: production
            - name: K8S_CONTAINER_NAME
              value: app

            # Compose OTEL_RESOURCE_ATTRIBUTES — SDK reads this automatically
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: >-
                k8s.cluster.name=$(K8S_CLUSTER_NAME),
                k8s.namespace.name=$(K8S_NAMESPACE_NAME),
                k8s.node.name=$(K8S_NODE_NAME),
                k8s.pod.name=$(K8S_POD_NAME),
                k8s.pod.uid=$(K8S_POD_UID),
                k8s.pod.ip=$(K8S_POD_IP),
                k8s.container.name=$(K8S_CONTAINER_NAME),
                k8s.deployment.name=$(K8S_DEPLOYMENT_NAME),
                host.ip=$(K8S_HOST_IP),
                service.version=$(K8S_APP_VERSION),
                service.namespace=$(K8S_NAMESPACE_NAME),
                service.instance.id=$(K8S_NAMESPACE_NAME).$(K8S_POD_NAME).$(K8S_CONTAINER_NAME)
```

Attributes emitted
| Attribute | Source |
|---|---|
| `k8s.cluster.name` | hardcoded |
| `k8s.namespace.name` | `metadata.namespace` |
| `k8s.node.name` | `spec.nodeName` |
| `k8s.pod.name` | `metadata.name` |
| `k8s.pod.uid` | `metadata.uid` |
| `k8s.pod.ip` | `status.podIP` |
| `k8s.container.name` | hardcoded |
| `k8s.deployment.name` | pod label `app.kubernetes.io/name` |
| `host.ip` | `status.hostIP` |
| `service.version` | pod label `app.kubernetes.io/version` |
| `service.namespace` | `metadata.namespace` |
| `service.instance.id` | composed: `<namespace>.<pod>.<container>` |
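The composed `service.instance.id` is plain string interpolation over three Downward API values. A hypothetical Ruby equivalent of what the manifest's `$(...)` expansion produces (the values here are made up):

```ruby
# Hypothetical values — in the cluster these arrive via the Downward API.
namespace = "default"
pod_name  = "ruby-app-6d5f7c9b8d-x2k4q"
container = "app"

service_instance_id = "#{namespace}.#{pod_name}.#{container}"
puts service_instance_id # => "default.ruby-app-6d5f7c9b8d-x2k4q.app"
```

Because the pod name embeds the ReplicaSet hash and a random suffix, this identifier is unique per container instance, which is exactly what `service.instance.id` is meant to capture.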
Verify
Port-forward the pod and trigger a request:
```shell
kubectl port-forward deploy/ruby-app 4567:4567
curl localhost:4567/your-endpoint
```

In Last9, open the Traces view and inspect any span — the Resource Attributes panel should list every `k8s.*` key configured above.
Alternatively, run an OpenTelemetry Collector with the debug exporter to print received spans; look for the Resource attributes section.
Limitations of the Downward API
The fieldRef mechanism cannot walk owner references or query cluster state, so the following attributes need a different approach:
- `k8s.replicaset.name`
- `k8s.deployment.name` (without a pod label)
- `k8s.statefulset.name`, `k8s.daemonset.name`, `k8s.job.name`, `k8s.cronjob.name`
- `k8s.node.uid`
- `k8s.pod.start_time`
- `k8s.cluster.uid`
For these, use the OpenTelemetry Collector's `k8sattributes` processor. It queries the Kubernetes API server (requires RBAC read access to pods, namespaces, and nodes) and enriches spans in transit, so your Ruby application does not need to know anything about Kubernetes.
A hybrid setup — Downward API for pod-level basics, Collector processor for owner chain — is what most production deployments use.
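A minimal Collector configuration sketch for that hybrid setup (the pipeline names and the receiver/exporter entries are illustrative; merge the processor into your existing Collector config):

```yaml
processors:
  k8sattributes:
    auth_type: serviceAccount
    extract:
      metadata:
        - k8s.replicaset.name
        - k8s.deployment.name
        - k8s.pod.start_time

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [otlphttp]
```

The processor matches incoming spans to pods (by source IP by default) and attaches the extracted metadata, so the owner-chain attributes appear without any change to the Downward API setup above.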
Troubleshooting
- Attributes missing on spans: verify env vars are resolved inside the pod with `kubectl exec deploy/ruby-app -- env | grep K8S_`, and confirm `OTEL_RESOURCE_ATTRIBUTES` contains the expected comma-separated list.
- `k8s.deployment.name` shows an empty string: the pod must carry the `app.kubernetes.io/name` label — Kubernetes does not auto-propagate deployment names to pods.
- Downward API expansion produces literal `$(VAR_NAME)`: env vars referenced via `$(...)` must be declared earlier in the same container's `env` list.
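To check the same things from inside the application rather than via `kubectl exec`, a short boot-time snippet (framework-agnostic; drop it into an initializer or `config.ru` temporarily) can print what the SDK will actually read:

```ruby
# Boot-time sanity check: print the Downward API env vars and the
# composed resource attribute string exactly as this process sees them.
ENV.select { |key, _| key.start_with?("K8S_") }.sort.each do |key, value|
  puts "#{key}=#{value}"
end
puts "OTEL_RESOURCE_ATTRIBUTES=#{ENV.fetch('OTEL_RESOURCE_ATTRIBUTES', '<unset>')}"
```

If `OTEL_RESOURCE_ATTRIBUTES` prints as `<unset>` or contains literal `$(...)` placeholders, the problem is in the manifest, not in the Ruby SDK.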