Kubernetes Resource Attributes
Enrich Go OpenTelemetry traces with Kubernetes semantic convention attributes using the Downward API
When a Go application (Gin, Echo, Chi, Fiber, gRPC, or plain net/http) runs on Kubernetes, OpenTelemetry traces benefit from k8s.* resource attributes — they let you filter spans by pod, namespace, node, or deployment in Last9.
The OpenTelemetry Go SDK does not ship a dedicated Kubernetes resource detector. The recommended SDK-only approach is the Kubernetes Downward API, which injects pod metadata as environment variables. The SDK reads OTEL_RESOURCE_ATTRIBUTES automatically through resource.Default() and merges those values into every span’s resource.
This guide works for any Go OpenTelemetry setup — no code changes beyond what your existing Gin / Echo / Chi / net/http guide already covers.
A complete working example is available at opentelemetry-examples/go/k8s-downward-api.
How it works
- Kubernetes exposes pod metadata (name, UID, namespace, node, IP, labels) as environment variables via `valueFrom.fieldRef` in the container spec.
- A composite `OTEL_RESOURCE_ATTRIBUTES` env var interpolates those values into OpenTelemetry's `key=value,key=value` format.
- The Go OpenTelemetry SDK reads `OTEL_RESOURCE_ATTRIBUTES` during `resource.Default()` (which calls `resource.WithFromEnv()` internally) and attaches the attributes to every exported span.
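The `key=value,key=value` format is easy to see with a small stdlib-only parser. This is a sketch of the format only, not the SDK's actual `WithFromEnv()` implementation; the function name `parseResourceAttrs` and the sample values are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// parseResourceAttrs splits an OTEL_RESOURCE_ATTRIBUTES-style string
// ("key=value,key=value") into a map — a simplified sketch of what the
// SDK's env-var resource detector consumes.
func parseResourceAttrs(s string) map[string]string {
	attrs := make(map[string]string)
	for _, pair := range strings.Split(s, ",") {
		pair = strings.TrimSpace(pair)
		if k, v, ok := strings.Cut(pair, "="); ok {
			attrs[k] = v
		}
	}
	return attrs
}

func main() {
	env := "k8s.namespace.name=prod,k8s.pod.name=go-app-7d9f,k8s.node.name=node-1"
	attrs := parseResourceAttrs(env)
	fmt.Println(attrs["k8s.pod.name"]) // prints "go-app-7d9f"
}
```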
SDK setup
No Kubernetes-specific code is required. The standard tracer provider boilerplate already does the right thing:
```go
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func initTracer(ctx context.Context) (func(context.Context) error, error) {
	exporter, err := otlptracehttp.New(ctx)
	if err != nil {
		return nil, err
	}

	// resource.Default() reads OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES
	// from the environment via resource.WithFromEnv() and merges with detected
	// process attributes. The Kubernetes Downward API populates the latter.
	res, err := resource.Merge(resource.Default(), resource.Empty())
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(res),
	)
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}
```

If your existing setup builds a resource manually with `resource.NewWithAttributes(...)`, switch to `resource.Default()` (or `resource.New(ctx, resource.WithFromEnv(), ...)`) so env-driven attributes are honored. Manually constructed resources without `WithFromEnv()` will silently ignore `OTEL_RESOURCE_ATTRIBUTES`.
Deployment manifest
Add the following env block to your container in your Deployment (or StatefulSet / DaemonSet) manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
  labels:
    app.kubernetes.io/name: go-app
    app.kubernetes.io/version: "1.0.0"
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: go-app
        app.kubernetes.io/version: "1.0.0"
    spec:
      containers:
        - name: app
          image: your-go-app:latest
          env:
            # OTel exporter — existing Last9 config
            - name: OTEL_SERVICE_NAME
              value: go-app
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: https://otlp.last9.io
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "Authorization=<BASIC_AUTH_HEADER>"

            # Downward API — pod metadata as env vars
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_NAMESPACE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP

            # Deployment name via pod label — fieldRef cannot walk owner refs
            - name: K8S_DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/name']
            - name: K8S_APP_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/version']

            # Hardcoded — not available via Downward API
            - name: K8S_CLUSTER_NAME
              value: production
            - name: K8S_CONTAINER_NAME
              value: app

            # Compose OTEL_RESOURCE_ATTRIBUTES — SDK reads this automatically
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: >-
                k8s.cluster.name=$(K8S_CLUSTER_NAME),
                k8s.namespace.name=$(K8S_NAMESPACE_NAME),
                k8s.node.name=$(K8S_NODE_NAME),
                k8s.pod.name=$(K8S_POD_NAME),
                k8s.pod.uid=$(K8S_POD_UID),
                k8s.pod.ip=$(K8S_POD_IP),
                k8s.container.name=$(K8S_CONTAINER_NAME),
                k8s.deployment.name=$(K8S_DEPLOYMENT_NAME),
                host.ip=$(K8S_HOST_IP),
                service.version=$(K8S_APP_VERSION),
                service.namespace=$(K8S_NAMESPACE_NAME),
                service.instance.id=$(K8S_NAMESPACE_NAME).$(K8S_POD_NAME).$(K8S_CONTAINER_NAME)
```

Attributes emitted
| Attribute | Source |
|---|---|
| `k8s.cluster.name` | hardcoded |
| `k8s.namespace.name` | `metadata.namespace` |
| `k8s.node.name` | `spec.nodeName` |
| `k8s.pod.name` | `metadata.name` |
| `k8s.pod.uid` | `metadata.uid` |
| `k8s.pod.ip` | `status.podIP` |
| `k8s.container.name` | hardcoded |
| `k8s.deployment.name` | pod label `app.kubernetes.io/name` |
| `host.ip` | `status.hostIP` |
| `service.version` | pod label `app.kubernetes.io/version` |
| `service.namespace` | `metadata.namespace` |
| `service.instance.id` | composed |
Why this matters for the Services view
Last9’s Discover → Services → Infrastructure Metrics panel keys off the k8s.pod.name resource attribute on traces. When present, it auto-loads the Kubernetes pod dashboard with CPU, memory, network, restart, and throttling panels grouped by pod (data sourced from cAdvisor + kube-state-metrics).
Without k8s.pod.name, the panel falls back to a host-based dashboard that expects system_* and jvm_* metrics. For Go services on Kubernetes that emit neither, the panel will appear empty until this configuration is in place.
Verify
Port-forward the pod and trigger a request:
```shell
kubectl port-forward deploy/go-app 8080:8080
curl localhost:8080/your-endpoint
```

In Last9, open the Traces view and inspect any span — the Resource Attributes panel should list every k8s.* key configured above.
Alternatively, run an OpenTelemetry Collector with the debug exporter to print received spans; look for the Resource attributes section.
Limitations of the Downward API
The fieldRef mechanism cannot walk owner references or query cluster state, so the following attributes need a different approach:
- `k8s.replicaset.name`
- `k8s.deployment.name` (without a pod label)
- `k8s.statefulset.name`, `k8s.daemonset.name`, `k8s.job.name`, `k8s.cronjob.name`
- `k8s.node.uid`
- `k8s.pod.start_time`
- `k8s.cluster.uid`
For these, use the OpenTelemetry Collector's `k8sattributes` processor. It queries the Kubernetes API server (it needs RBAC permissions to read pods, namespaces, and nodes) and enriches spans in transit, so your Go application does not need to know anything about Kubernetes.
A hybrid setup — Downward API for pod-level basics, Collector processor for owner chain — is what most production deployments use. The Last9 Kubernetes Operator ships the latter out of the box.
Troubleshooting
- Attributes missing on spans: verify env vars are resolved inside the pod with `kubectl exec deploy/go-app -- env | grep K8S_`, and confirm `OTEL_RESOURCE_ATTRIBUTES` contains the expected comma-separated list.
- `k8s.deployment.name` shows an empty string: the pod must carry the `app.kubernetes.io/name` label — Kubernetes does not auto-propagate deployment names to pods.
- Downward API expansion produces literal `$(VAR_NAME)`: env vars referenced via `$(...)` must be declared earlier in the same container's `env` list.
- Resource attributes from `OTEL_RESOURCE_ATTRIBUTES` not appearing on spans: your tracer provider is using a manually built resource without `WithFromEnv()`. Use `resource.Default()` or add `resource.WithFromEnv()` explicitly when calling `resource.New(ctx, ...)`.