Kubernetes Events
Collect and monitor Kubernetes events as logs using OpenTelemetry for cluster observability
Use Last9’s OpenTelemetry endpoint to ingest Kubernetes Events as logs for comprehensive cluster observability. This integration captures important cluster events like pod scheduling, deployments, failures, and configuration changes.
Prerequisites
Before setting up Kubernetes events monitoring, ensure you have:
- Kubernetes Cluster: A running Kubernetes cluster (v1.19+)
- kubectl: Configured and connected to your cluster
- Helm: Installed (v3.9 or higher)
- Cluster Admin Access: Required for reading cluster events
- Last9 Account: With log ingestion credentials
What Are Kubernetes Events?
Kubernetes events provide insights into cluster activities such as:
- Pod Lifecycle: Creation, scheduling, starting, termination
- Deployment Activities: Rolling updates, scaling operations
- Resource Issues: Insufficient resources, failed pulls, scheduling failures
- Configuration Changes: ConfigMap/Secret updates, volume mounts
- Node Events: Node ready/not ready states, resource pressure
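The collector described below ships these same event objects to Last9 as structured logs. You can inspect the raw events directly with kubectl first:

```bash
# List recent events across all namespaces, oldest first
kubectl get events -A --sort-by=.metadata.creationTimestamp

# Stream events in real time as they occur
kubectl get events -A --watch
```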
Install Helm and Create Namespace
Ensure Helm is installed and create a dedicated namespace:
```bash
# Create namespace for Last9 components
kubectl create namespace last9
```
Create Helm Values Configuration
Create a file named `last9-kube-events-agent-values.yaml` with the OpenTelemetry Collector configuration optimized for Kubernetes events:

```yaml
# Default values for opentelemetry-collector.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nameOverride: "last9-kube-events-agent"
fullnameOverride: ""

# Valid values are "daemonset", "deployment", and "statefulset".
mode: "deployment"

# Specify which namespace should be used to deploy the resources into
namespaceOverride: "last9"

# Handles basic configuration of components that
# also require k8s modifications to work correctly.
presets:
  # Configures the collector to collect logs.
  logsCollection:
    enabled: false
    includeCollectorLogs: false
    storeCheckpoints: false
    maxRecombineLogSize: 102400
  # Configures the collector to collect host metrics.
  hostMetrics:
    enabled: false
  # Configures the Kubernetes Processor to add Kubernetes metadata.
  kubernetesAttributes:
    enabled: true
    # When enabled the processor will extract all labels for an associated pod and add them as resource attributes.
    extractAllPodLabels: true
    # When enabled the processor will extract all annotations for an associated pod and add them as resource attributes.
    extractAllPodAnnotations: true
  # Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
  kubeletMetrics:
    enabled: false
  # Configures the collector to collect kubernetes events.
  # Adds the k8sobjects receiver to the logs pipeline
  # and collects kubernetes events by default.
  # Best used with mode = deployment or statefulset.
  kubernetesEvents:
    enabled: true
  # Configures the Kubernetes Cluster Receiver to collect cluster-level metrics.
  clusterMetrics:
    enabled: false

configMap:
  # Specifies whether a configMap should be created (true by default)
  create: true

# Base collector configuration.
config:
  exporters:
    # Use when you need to debug output
    debug:
      verbosity: detailed
      sampling_initial: 5
      sampling_thereafter: 200
    otlp/last9:
      endpoint: "{{ .Logs.WriteURL }}"
      headers:
        "Authorization": "{{ .Logs.AuthValue }}"
  extensions:
    # The health_check extension is mandatory for this chart.
    health_check:
      endpoint: ${env:MY_POD_IP}:13133
  processors:
    batch:
      send_batch_size: 15000
      send_batch_max_size: 15000
      timeout: 10s
    # Transform processor to enhance Kubernetes events with metadata
    transform/logs/last9:
      error_mode: ignore
      log_statements:
        - context: log
          statements:
            - set(resource.attributes["service.name"], Concat([attributes["event.domain"], attributes["k8s.resource.name"]], "-"))
            # Additional resource attributes can be set as follows.
            # Use additional resource attributes to differentiate between clusters and deployments.
            # - set(resource.attributes["deployment.environment"], "staging")
            # Set timestamp from event creation time
            - set(time_unix_nano, UnixNano(Time(body["object"]["metadata"]["creationTimestamp"], "%Y-%m-%dT%H:%M:%SZ"))) where time_unix_nano == 0
            # Set severity text based on event type
            - set(severity_text, "INFO") where body["object"]["type"] == "Normal"
            - set(severity_text, "WARN") where body["object"]["type"] == "Warning"
            - set(severity_text, "ERROR") where body["object"]["type"] == "Error"
            # Set severity number based on event type
            - set(severity_number, SEVERITY_NUMBER_INFO) where body["object"]["type"] == "Normal"
            - set(severity_number, SEVERITY_NUMBER_WARN) where body["object"]["type"] == "Warning"
            - set(severity_number, SEVERITY_NUMBER_ERROR) where body["object"]["type"] == "Error"
    memory_limiter/logs:
      # check_interval is the time between measurements of memory usage.
      check_interval: 5s
      # By default limit_mib is set to 85% of ".Values.resources.limits.memory"
      limit_percentage: 85
      # By default spike_limit_mib is set to 15% of ".Values.resources.limits.memory"
      spike_limit_percentage: 15
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: ${env:MY_POD_IP}:4317
        http:
          endpoint: ${env:MY_POD_IP}:4318
    prometheus:
      config:
        scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 30s
            static_configs:
              - targets:
                  - ${env:MY_POD_IP}:8888
  service:
    telemetry:
      metrics:
        readers:
          - pull:
              exporter:
                prometheus:
                  host: ${env:MY_POD_IP}
                  port: 8888
    extensions:
      - health_check
    pipelines:
      logs:
        receivers:
          - otlp
        processors:
          - memory_limiter/logs
          - k8sattributes
          - transform/logs/last9
          - batch
        exporters:
          - otlp/last9
      metrics:
        receivers:
          - otlp
          - prometheus
        processors:
          - batch
        exporters:
          - otlp/last9
      traces:
        receivers:
          - otlp
        processors:
          - batch
        exporters:
          - otlp/last9

image:
  repository: "otel/opentelemetry-collector-contrib"
  pullPolicy: IfNotPresent
  tag: "0.126.0"
  digest: ""

imagePullSecrets: []

serviceAccount:
  create: true
  annotations: {}
  name: ""

clusterRole:
  create: false
  annotations: {}
  name: ""
  rules: []
  clusterRoleBinding:
    annotations: {}
    name: ""

podSecurityContext: {}
securityContext: {}

nodeSelector: {}
tolerations: []
affinity: {}
topologySpreadConstraints: []
priorityClassName: ""

extraEnvs: []
extraEnvsFrom: []
extraVolumes: []
extraVolumeMounts: []
extraManifests: []

# Configuration for ports
ports:
  otlp:
    enabled: true
    containerPort: 4317
    servicePort: 4317
    hostPort: 4317
    protocol: TCP
    appProtocol: grpc
  otlp-http:
    enabled: true
    containerPort: 4318
    servicePort: 4318
    hostPort: 4318
    protocol: TCP
  metrics:
    enabled: false
    containerPort: 8888
    servicePort: 8888
    protocol: TCP

useGOMEMLIMIT: true

# Resource limits & requests
resources: {}
# resources:
#   limits:
#     cpu: 250m
#     memory: 512Mi

podAnnotations: {}
podLabels: {}
additionalLabels: {}

hostNetwork: false
hostAliases: []
dnsPolicy: ""
dnsConfig: {}
schedulerName: ""

# only used with deployment mode
replicaCount: 1
revisionHistoryLimit: 10

annotations: {}
extraContainers: []
initContainers: []

# Pod lifecycle policies
lifecycleHooks: {}

# Health check probes
livenessProbe:
  httpGet:
    port: 13133
    path: /
readinessProbe:
  httpGet:
    port: 13133
    path: /
startupProbe: {}

service:
  type: ClusterIP
  annotations: {}

ingress:
  enabled: false

podMonitor:
  enabled: false
  extraLabels: {}

serviceMonitor:
  enabled: false
  extraLabels: {}

# PodDisruptionBudget is used only if deployment enabled
podDisruptionBudget:
  enabled: false

# autoscaling is used only if mode is "deployment" or "statefulset"
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

rollout:
  rollingUpdate: {}
  strategy: RollingUpdate

prometheusRule:
  enabled: false
  groups: []
  defaultRules:
    enabled: false
  extraLabels: {}

statefulset:
  volumeClaimTemplates: []
  podManagementPolicy: "Parallel"
  persistentVolumeClaimRetentionPolicy:
    enabled: false
    whenDeleted: Retain
    whenScaled: Retain

networkPolicy:
  enabled: false
  annotations: {}
  allowIngressFrom: []
  extraIngressRules: []
  egressRules: []

shareProcessNamespace: false
```

Replace the placeholder values in the `exporters.otlp/last9` section with your actual Last9 credentials.
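For reference, a filled-in exporter block might look like the sketch below; the endpoint and token shown are hypothetical, so copy the real values from your Last9 dashboard:

```yaml
exporters:
  otlp/last9:
    endpoint: "https://otlp.example.last9.io:443"   # hypothetical; use your Last9 OTLP endpoint
    headers:
      "Authorization": "Basic <your-base64-token>"  # hypothetical; use your Last9 auth header
```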
Add OpenTelemetry Helm Repository
Add the OpenTelemetry Helm repository and update:
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```
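To confirm the repository was added and the chart is available:

```bash
helm search repo open-telemetry/opentelemetry-collector --versions | head -n 5
```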
Install the Events Collector
Deploy the OpenTelemetry Collector configured for Kubernetes events:
```bash
helm upgrade --install last9-kube-events-agent open-telemetry/opentelemetry-collector \
  --version 0.125.0 \
  -n last9 \
  --create-namespace \
  -f last9-kube-events-agent-values.yaml
```
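Once the release is installed, Helm can report its status and the values it was rendered with:

```bash
helm status last9-kube-events-agent -n last9
helm get values last9-kube-events-agent -n last9
```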
Verify Installation
Check that the events collector is running correctly:
```bash
kubectl get pods -n last9
```

You should see a pod similar to:

```
NAME                              READY   STATUS    RESTARTS   AGE
last9-kube-events-agent-xxx-xxx   1/1     Running   0          2m
```

Check the collector logs to ensure it's collecting events:
```bash
kubectl logs -n last9 deployment/last9-kube-events-agent
```
Understanding the Configuration
Kubernetes Events Receiver
The `k8sobjects` receiver is configured to do the following (a sketch of the generated configuration follows this list):
- Collect Events: Automatically discovers and collects Kubernetes events
- Real-time Processing: Streams events as they occur in the cluster
- Metadata Enhancement: Adds Kubernetes metadata to each event
- Filtering: Can be configured to filter specific event types or sources
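Under the hood, enabling the `kubernetesEvents` preset is roughly equivalent to adding a `k8sobjects` receiver like this to the logs pipeline; this is a sketch of what the chart generates for you, not something you need to add by hand:

```yaml
receivers:
  k8sobjects:
    # Watch the Events API and emit each event as a log record
    objects:
      - name: events
        mode: watch        # stream events as they occur rather than polling
        group: events.k8s.io
```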
Event Transformation
The transform processor enhances events with:
- Service Naming: Creates meaningful service names from event metadata
- Timestamp Mapping: Uses event creation time as log timestamp
- Severity Mapping: Maps Kubernetes event types to log severity levels:
  - `Normal` → `INFO` (green events, successful operations)
  - `Warning` → `WARN` (yellow events, potential issues)
  - `Error` → `ERROR` (red events, failures or critical issues)
Custom Attributes
You can add additional resource attributes for better event categorization:
```yaml
transform/logs/last9:
  log_statements:
    - context: log
      statements:
        - set(resource.attributes["deployment.environment"], "staging")
        - set(resource.attributes["cluster.name"], "production-cluster")
        - set(resource.attributes["team"], "platform")
```

Verification
Check Event Collection
Verify events are being collected by checking collector logs:
```bash
kubectl logs -n last9 deployment/last9-kube-events-agent | grep -i event
```
Generate Test Events
Create some test events to verify the collection:
```bash
# Create a test pod to generate events
kubectl run test-pod --image=nginx:latest
kubectl delete pod test-pod
```
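The short-lived pod should produce events such as Scheduled, Pulling, Pulled, Created, Started, and Killing. You can confirm they were generated on the cluster side:

```bash
kubectl get events --field-selector involvedObject.name=test-pod --sort-by=.metadata.creationTimestamp
```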
Monitor Collector Health
Check the collector’s health endpoint:
```bash
kubectl port-forward -n last9 deployment/last9-kube-events-agent 13133:13133
# In a second terminal (port-forward keeps the first one busy):
curl http://localhost:13133/
```
Verify Events in Last9
Log into your Last9 account and check that Kubernetes events are being received in the Logs dashboard.
Look for events with:
- Service names like `apps-deployment`, `core-pod`, etc.
- Kubernetes metadata attributes
- Proper timestamps and severity levels
Advanced Configuration
Resource Limits
Configure resource limits based on your cluster size and event volume:
```yaml
resources:
  limits:
    cpu: 250m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
```

Event Filtering
Filter events by type, source, or other attributes. Dropping whole records is the filter processor's job rather than the transform processor's; for example, to keep only `Warning` and `Error` events, define a filter processor and include it in the logs pipeline ahead of the transform processor:

```yaml
processors:
  filter/events:
    error_mode: ignore
    logs:
      log_record:
        # Drop any record whose event type is Normal; Warning and Error pass through
        - body["object"]["type"] == "Normal"
```

High Availability
For production environments, enable high availability:
```yaml
replicaCount: 2
podDisruptionBudget:
  enabled: true
  minAvailable: 1
```
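With multiple replicas, it also helps to spread them across nodes. A sketch using the chart's `topologySpreadConstraints` value, assuming the pods carry the standard `app.kubernetes.io/name` label (check your pod labels first):

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: last9-kube-events-agent  # assumed label; verify with kubectl get pods --show-labels
```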
Troubleshooting
No Events Being Collected
Check RBAC permissions and collector configuration:
```bash
kubectl describe clusterrole last9-kube-events-agent
kubectl logs -n last9 deployment/last9-kube-events-agent | grep -i permission
```
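The chart normally provisions these permissions for the collector's service account. For reference, a minimal ClusterRole that allows reading events looks roughly like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: last9-kube-events-agent
rules:
  # Events exist in both the core ("") and events.k8s.io API groups
  - apiGroups: ["", "events.k8s.io"]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
```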
High Memory Usage
Monitor and adjust memory limits:
```bash
kubectl top pods -n last9
kubectl describe pod -n last9 <collector-pod-name>
```

Connection Issues to Last9
Verify credentials and network connectivity:
```bash
kubectl logs -n last9 deployment/last9-kube-events-agent | grep -i error
kubectl describe secret -n last9   # if using secrets for credentials
```

Uninstallation
To remove the Kubernetes events collector:
```bash
helm uninstall last9-kube-events-agent -n last9
kubectl delete namespace last9   # if no other Last9 components
```
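You can verify the removal afterwards:

```bash
helm list -n last9        # the release should no longer appear
kubectl get all -n last9  # no collector resources should remain
```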
Best Practices
- Resource Management: Set appropriate resource limits based on cluster event volume
- Event Retention: Events can be numerous; consider log retention policies in Last9
- Filtering: Filter noisy or irrelevant events to reduce data volume and costs
- Monitoring: Monitor collector health and resource usage
- Security: Use Kubernetes secrets for sensitive credentials when possible, as sketched below
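One way to do this, sketched under the assumption that you create a secret named `last9-credentials` yourself: inject the header via the chart's `extraEnvs` and reference it with the collector's `${env:...}` expansion in the values file:

```yaml
# First: kubectl create secret generic last9-credentials -n last9 \
#          --from-literal=auth-header='Basic <your-token>'
extraEnvs:
  - name: LAST9_AUTH_HEADER
    valueFrom:
      secretKeyRef:
        name: last9-credentials  # hypothetical secret name
        key: auth-header

config:
  exporters:
    otlp/last9:
      headers:
        "Authorization": "${env:LAST9_AUTH_HEADER}"
```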
Need Help?
If you encounter any issues or have questions:
- Join our Discord community for real-time support
- Contact our support team at support@last9.io