
Kubernetes Resource Attributes

Enrich Go OpenTelemetry traces with Kubernetes semantic convention attributes using the Downward API

When a Go application (Gin, Echo, Chi, Fiber, gRPC, or plain net/http) runs on Kubernetes, OpenTelemetry traces benefit from k8s.* resource attributes — they let you filter spans by pod, namespace, node, or deployment in Last9.

The OpenTelemetry Go SDK does not ship a dedicated Kubernetes resource detector. The recommended SDK-only approach is the Kubernetes Downward API, which injects pod metadata as environment variables. The SDK reads OTEL_RESOURCE_ATTRIBUTES automatically through resource.Default() and merges those values into every span’s resource.

This guide works for any Go OpenTelemetry setup — no code changes beyond what your existing Gin / Echo / Chi / net/http guide already covers.

A complete working example is available at opentelemetry-examples/go/k8s-downward-api↗.

How it works

  1. Kubernetes exposes pod metadata (name, UID, namespace, node, IP, labels) as environment variables via valueFrom.fieldRef in the container spec.
  2. A composite OTEL_RESOURCE_ATTRIBUTES env var interpolates those values into OpenTelemetry’s key=value,key=value format.
  3. The Go OpenTelemetry SDK reads OTEL_RESOURCE_ATTRIBUTES during resource.Default() (which calls resource.WithFromEnv() internally) and attaches the attributes to every exported span.
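To make step 3 concrete, here is a stdlib-only sketch of how the key=value,key=value format decomposes into resource attributes. This is a simplified illustration, not the SDK's actual parser (the real env detector also URL-decodes values):

```go
package main

import (
	"fmt"
	"strings"
)

// parseResourceAttrs splits an OTEL_RESOURCE_ATTRIBUTES-style string
// ("key=value,key=value") into a map. Simplified sketch only; the
// SDK's env detector additionally URL-decodes values.
func parseResourceAttrs(s string) map[string]string {
	attrs := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		key, value, ok := strings.Cut(strings.TrimSpace(pair), "=")
		if !ok || key == "" {
			continue
		}
		attrs[key] = value
	}
	return attrs
}

func main() {
	env := "k8s.pod.name=go-app-7d4b9,k8s.namespace.name=prod"
	fmt.Println(parseResourceAttrs(env)["k8s.pod.name"]) // go-app-7d4b9
}
```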

SDK setup

No Kubernetes-specific code is required. The standard tracer provider boilerplate already does the right thing:

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func initTracer(ctx context.Context) (func(context.Context) error, error) {
	exporter, err := otlptracehttp.New(ctx)
	if err != nil {
		return nil, err
	}

	// resource.Default() reads OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES
	// from the environment via resource.WithFromEnv() and merges them with
	// detected process attributes. The Kubernetes Downward API populates
	// those environment variables.
	res := resource.Default()

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(res),
	)
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}

If your existing setup builds a resource manually with resource.NewWithAttributes(...), switch to resource.Default() (or resource.New(ctx, resource.WithFromEnv(), ...)) so env-driven attributes are honored. Manually constructed resources without WithFromEnv() will silently ignore OTEL_RESOURCE_ATTRIBUTES.

Deployment manifest

Add the following env block to your container in your Deployment (or StatefulSet / DaemonSet) manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
  labels:
    app.kubernetes.io/name: go-app
    app.kubernetes.io/version: "1.0.0"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: go-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: go-app
        app.kubernetes.io/version: "1.0.0"
    spec:
      containers:
        - name: app
          image: your-go-app:latest
          env:
            # OTel exporter - existing Last9 config
            - name: OTEL_SERVICE_NAME
              value: go-app
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: https://otlp.last9.io
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "Authorization=<BASIC_AUTH_HEADER>"
            # Downward API - pod metadata as env vars
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_NAMESPACE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            # Deployment name via pod label - fieldRef cannot walk owner refs
            - name: K8S_DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/name']
            - name: K8S_APP_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/version']
            # Hardcoded - not available via the Downward API
            - name: K8S_CLUSTER_NAME
              value: production
            - name: K8S_CONTAINER_NAME
              value: app
            # Compose OTEL_RESOURCE_ATTRIBUTES - the SDK reads this automatically
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: >-
                k8s.cluster.name=$(K8S_CLUSTER_NAME),
                k8s.namespace.name=$(K8S_NAMESPACE_NAME),
                k8s.node.name=$(K8S_NODE_NAME),
                k8s.pod.name=$(K8S_POD_NAME),
                k8s.pod.uid=$(K8S_POD_UID),
                k8s.pod.ip=$(K8S_POD_IP),
                k8s.container.name=$(K8S_CONTAINER_NAME),
                k8s.deployment.name=$(K8S_DEPLOYMENT_NAME),
                host.ip=$(K8S_HOST_IP),
                service.version=$(K8S_APP_VERSION),
                service.namespace=$(K8S_NAMESPACE_NAME),
                service.instance.id=$(K8S_NAMESPACE_NAME).$(K8S_POD_NAME).$(K8S_CONTAINER_NAME)

Attributes emitted

Attribute             Source
k8s.cluster.name      hardcoded
k8s.namespace.name    metadata.namespace
k8s.node.name         spec.nodeName
k8s.pod.name          metadata.name
k8s.pod.uid           metadata.uid
k8s.pod.ip            status.podIP
k8s.container.name    hardcoded
k8s.deployment.name   pod label app.kubernetes.io/name
host.ip               status.hostIP
service.version       pod label app.kubernetes.io/version
service.namespace     metadata.namespace
service.instance.id   composed (namespace.pod.container)

Why this matters for the Services view

Last9’s Discover → Services → Infrastructure Metrics panel keys off the k8s.pod.name resource attribute on traces. When present, it auto-loads the Kubernetes pod dashboard with CPU, memory, network, restart, and throttling panels grouped by pod (data sourced from cAdvisor + kube-state-metrics).

Without k8s.pod.name, the panel falls back to a host-based dashboard that expects system_* and jvm_* metrics. For Go services on Kubernetes that emit neither, the panel will appear empty until this configuration is in place.

Verify

Port-forward the pod and trigger a request:

kubectl port-forward deploy/go-app 8080:8080
curl localhost:8080/your-endpoint

In Last9, open the Traces view and inspect any span — the Resource Attributes panel should list every k8s.* key configured above.

Alternatively, run an OpenTelemetry Collector with the debug exporter to print received spans; look for the Resource attributes section.
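A minimal Collector configuration for that check might look like the following sketch, assuming an OTLP/HTTP receiver on the default port; adjust endpoints to your environment:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

Point OTEL_EXPORTER_OTLP_ENDPOINT at this Collector and each received span is printed with its full resource attribute set.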

Limitations of the Downward API

The fieldRef mechanism cannot walk owner references or query cluster state, so the following attributes need a different approach:

  • k8s.replicaset.name
  • k8s.deployment.name (without a pod label)
  • k8s.statefulset.name, k8s.daemonset.name, k8s.job.name, k8s.cronjob.name
  • k8s.node.uid
  • k8s.pod.start_time
  • k8s.cluster.uid

For these, use the OpenTelemetry Collector k8sattributesprocessor. It queries the Kubernetes API server (requires RBAC for pod / namespace / node read) and enriches spans in-transit, so your Go application does not need to know anything about Kubernetes.
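As a sketch, the processor's extract.metadata list selects which owner-chain attributes to add; the exact receiver/exporter wiring and RBAC manifests depend on your Collector deployment:

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.replicaset.name
        - k8s.deployment.name
        - k8s.statefulset.name
        - k8s.pod.start_time

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp]
```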

A hybrid setup — Downward API for pod-level basics, Collector processor for owner chain — is what most production deployments use. The Last9 Kubernetes Operator ships the latter out of the box.

Troubleshooting

  • Attributes missing on spans: verify env vars are resolved inside the pod with kubectl exec deploy/go-app -- env | grep K8S_, and confirm OTEL_RESOURCE_ATTRIBUTES contains the expected comma-separated list.
  • k8s.deployment.name shows empty string: the pod must carry the app.kubernetes.io/name label — Kubernetes does not auto-propagate deployment names to pods.
  • Downward API expansion produces literal $(VAR_NAME): env vars referenced via $(...) must be declared earlier in the same container’s env list.
  • Resource attributes from OTEL_RESOURCE_ATTRIBUTES not appearing on spans: your tracer provider is using a manually-built resource without WithFromEnv(). Use resource.Default() or add resource.WithFromEnv() explicitly when calling resource.New(ctx, ...).
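For the last two points, ordering matters because Kubernetes expands $(VAR_NAME) references only against variables declared earlier in the same env list. A minimal illustration:

```yaml
env:
  - name: K8S_POD_NAME               # must be declared first
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: OTEL_RESOURCE_ATTRIBUTES   # may now reference it
    value: k8s.pod.name=$(K8S_POD_NAME)
```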

Get in touch on Discord↗ or Email if you run into issues.