Diagnosing issues in Kubernetes applications often requires navigating large volumes of log data spread across pods, nodes, and containers. Without the right tools, identifying the root cause can be slow and error-prone. Traditional debugging methods, such as inline print statements or basic file-based logging, do not scale in distributed environments.
Kubernetes addresses this challenge by exposing logs at the container and pod level. The kubectl logs command provides direct access to these logs, making it the primary mechanism for inspecting application behavior and troubleshooting in real time.
The Challenge of Log Retrieval in Kubernetes
In Kubernetes, applications are deployed across clusters that may run tens, hundreds, or thousands of instances. Each instance runs inside a container, which itself is encapsulated within a pod. When an error occurs, identifying which container generated the failure and accessing its diagnostic output becomes non-trivial.
Unlike traditional environments, you cannot simply inspect a local log file — containers are ephemeral, distributed across nodes, and continuously scheduled by the orchestrator.
Kubernetes Logging
Kubernetes captures application logs through the standard output (stdout) and standard error (stderr) streams of containers. When an application writes to these streams, Kubernetes redirects the data and stores it on the node where the pod is running.
By default, Kubernetes does not aggregate these logs across the cluster, persist them indefinitely, or provide advanced search capabilities. It only exposes mechanisms to access logs from individual pods.
This design has an important implication: the logs retrieved via kubectl logs are limited to what is available from the container’s current stdout/stderr output on its host node.
Once containers are rotated, rescheduled, or terminated, logs may no longer be available unless an external logging solution has been configured.
The kubectl logs Command
Kubectl is the command-line interface for interacting with Kubernetes clusters. It allows developers and administrators to deploy applications, inspect and manage cluster resources, and view logs.
The kubectl logs command is the primary utility for retrieving logs from running pods and their containers. It provides direct, on-demand access to a container’s stdout and stderr streams without requiring SSH access to nodes or manual navigation of file systems. Developers and operators use it to:
- Inspect the runtime behavior of services.
- Diagnose application errors.
- Validate that deployments are functioning as expected.
Because it offers immediate visibility into application output, kubectl logs is a core troubleshooting tool in Kubernetes environments.
Kubernetes Architecture and Logging Internals
Let's understand the basics of Kubernetes architecture and how logging fits into this ecosystem:
Key Components and Their Roles
Kubernetes follows a distributed architecture with several key components:
- Master Node(s):
- API Server: The central management entity that receives all REST requests.
- etcd: A distributed key-value store that stores all cluster data.
- Scheduler: Assigns pods to nodes.
- Controller Manager: Manages various controllers that regulate the state of the cluster.
- Worker Nodes:
- Kubelet: Ensures containers are running in a pod.
- Container Runtime: Software responsible for running containers (e.g., Docker, containerd).
- Kube-proxy: Maintains network rules on nodes.
- Pods: The smallest deployable units in Kubernetes, containing one or more containers.
How Logging Works in Kubernetes
Kubernetes doesn't provide a native, cluster-wide logging solution. Instead, it relies on the underlying container runtime to capture stdout and stderr streams from each container. Here's how logging works:
- Container Logs:
- Applications inside containers write logs to stdout and stderr.
- The container runtime captures these streams and writes them to files on the node's disk.
- Node-level Logging:
- Kubelet on each node is responsible for exposing these logs to the Kubernetes API.
- Log files are typically stored in /var/log/pods/ on the node.
- Cluster-level Access:
- The Kubernetes API server provides endpoints to access these logs.
- kubectl logs command uses these endpoints to retrieve logs from specific pods or containers.
- Log Lifecycle:
- Logs are tied to the lifecycle of the container. When a container is deleted, its logs are typically deleted as well.
- Some container runtimes implement log rotation to manage disk space.
- Advanced Logging Solutions:
- For production environments, cluster-wide logging solutions (such as Levitate, or the Elasticsearch and Kibana stack) are often implemented to aggregate logs from all nodes and provide advanced search and analysis capabilities.
Getting Started: Your First kubectl logs Command
The simplest usage of kubectl logs requires specifying the pod name:
kubectl logs <pod-name>
For example, to retrieve logs from a pod named my-web-app-789abc-xyz12:
kubectl logs my-web-app-789abc-xyz12
When executed, kubectl contacts the Kubernetes API server, determines the node where the pod is scheduled, and streams the container’s stdout and stderr output directly to your terminal.
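To see the whole path end to end, you can start a throwaway pod that writes one line to stdout and then read it back (a minimal sketch; echo-demo is just a placeholder name):
# Run a one-off pod whose only job is to write to stdout
kubectl run echo-demo --image=busybox --restart=Never -- sh -c 'echo hello from stdout'
# Once the pod has completed, read the captured output
kubectl logs echo-demo
# Clean up
kubectl delete pod echo-demo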
Pod Name Identification
Since pod names are generated with unique suffixes (hashes), you must know the exact name before retrieving logs. Use:
kubectl get pods
This lists all pods in the current namespace with their status, restart counts, and uptime.
Example:
NAME                           READY   STATUS    RESTARTS   AGE
my-web-app-789abc-xyz12        1/1     Running   0          5d
another-service-def456-ijk34   1/1     Running   0          3d
database-pod-lmn789-opq56      1/1     Running   0          10h
From here, copy the exact pod name (my-web-app-789abc-xyz12) into the kubectl logs command.
Use Labels Instead of Pod Names
In many cases, pods are managed by higher-level controllers such as Deployments or DaemonSets, and you don’t need logs from a specific replica. Instead, you can use labels to select pods.
kubectl get pods -l app=my-web-app
This returns all pods with the label app=my-web-app. To fetch logs from the matching pods:
kubectl logs -l app=my-web-app
With a label selector, kubectl retrieves logs from each matching pod (by default only the last 10 lines per pod, from at most 5 pods concurrently). This approach is particularly useful when working with multiple replicas and you only need a quick view across them.
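To make multi-replica output easier to read, you can prefix each line with its pod name and adjust how much is pulled from each pod (a sketch using the same label):
# Last 20 lines from each matching pod, prefixed with the pod/container name
kubectl logs -l app=my-web-app --prefix --tail=20
# Raise the concurrency limit if the selector matches more than five pods
kubectl logs -l app=my-web-app --prefix --tail=20 --max-log-requests=10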
Advanced kubectl logs Features
kubectl logs also provides several flags that refine log retrieval and improve troubleshooting efficiency.
Logs from a Specific Container
In multi-container pods (common with sidecars such as proxies or log shippers), kubectl logs requires you to name the container unless the pod marks a default container. Use -c (or --container) to target a container:
kubectl logs <pod-name> -c <container-name>
To identify available container names:
kubectl describe pod <pod-name>
Example: Retrieve logs from the log-aggregator sidecar in my-web-app:
kubectl logs my-web-app-789abc-xyz12 -c log-aggregator
Real-Time Log Stream
Use -f (or --follow) to continuously stream logs, similar to tail -f:
kubectl logs -f <pod-name>
kubectl logs -f <pod-name> -c <container-name>
Press Ctrl+C to stop. This is useful for observing application startup, request handling, or error messages as they occur.
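Because the stream is plain text on stdout, it can be piped through standard shell tools, for example to watch only error lines as they arrive (the pod name is a placeholder):
# Stream logs and keep only lines containing "error", case-insensitively
kubectl logs -f <pod-name> | grep -i error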
Logs from Previous Instances
When a container restarts, kubectl logs shows output only from the new instance. The -p (or --previous) flag retrieves logs from the immediately prior instance:
kubectl logs -p <pod-name>
kubectl logs -p <pod-name> -c <container-name>
This is critical for diagnosing crash loops or containers that fail during initialization.
Tail Output
The --tail flag restricts output to a set number of lines:
kubectl logs --tail=100 my-web-app-789abc-xyz12
This is useful for quickly reviewing recent activity without parsing the full log history.
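Combining --tail with --timestamps makes it easier to place the recent window in time (a small sketch using the same example pod):
# Last 100 lines, each prefixed with its RFC3339 timestamp
kubectl logs --tail=100 --timestamps my-web-app-789abc-xyz12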
Time-Based Filters
To narrow logs to a specific time range:
- Relative duration with --since:
kubectl logs --since=5m my-web-app-789abc-xyz12
kubectl logs --since=1h my-web-app-789abc-xyz12
- Absolute timestamp with --since-time (RFC3339 format):
kubectl logs --since-time="2023-01-01T14:00:00Z" my-web-app-789abc-xyz12
These options reduce noise when investigating issues that occurred at known times.
Common Scenarios and Troubleshooting with kubectl logs
Pod Not Found
Error:
Error from server (NotFound): pods "mypod" not found
Causes:
- Pod name mismatch – Kubernetes generates pod names with unique hashes (e.g., my-app-7f8c9d4b5d-xyz12). If you only type the base name (my-app), the API server cannot resolve it.
- Namespace mismatch – kubectl defaults to the current namespace context. If the pod runs in another namespace, it won’t be visible.
How to fix it:
If you frequently switch namespaces, set it in your context:
kubectl config set-context --current --namespace=<namespace>
Specify the namespace explicitly:
kubectl logs <pod-name> -n <namespace>
Check across namespaces:
kubectl get pods --all-namespaces | grep my-app
Confirm the exact pod name:
kubectl get pods
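If you know the app label but not the generated suffix, you can resolve the full name and fetch logs in one step (a sketch; app=my-app is a placeholder label):
# Resolve the first matching pod and read its logs
kubectl logs "$(kubectl get pods -l app=my-app -o name | head -n 1)"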
No Logs for Container
Error:
No logs for container "mycontainer" in pod "mypod"
Causes:
- Container not yet started – The pod is stuck in Pending or ContainerCreating. No logs exist until the container process runs.
- Crash before output – Containers can terminate before writing to stdout/stderr.
- Non-standard logging path – Some applications write to files inside the container instead of stdout/stderr.
- Tail limits – If you used --tail with too few lines, you may get no results.
How to fix it:
Remove or increase the tail limit:
kubectl logs --tail=500 <pod-name>
If the application writes to a file instead of stdout:
kubectl exec -it <pod-name> -- cat /path/to/app.log
(Better: reconfigure the app to log to stdout for proper integration with Kubernetes logging.)
Retrieve logs from the previous container instance (useful for crash loops):
kubectl logs -p <pod-name> -c <container-name>
Check pod lifecycle:
kubectl describe pod <pod-name>
Look at the Events section for reasons (image pull errors, CrashLoopBackOff, etc.).
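If the describe output is too verbose, a jsonpath query can pull out just each container’s current state and reason (a sketch; the field paths follow the standard pod status schema):
# Print each container's name and its current state (waiting/running/terminated)
kubectl get pod <pod-name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'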
CrashLoopBackOff Containers
A common scenario where logs are missing is when pods enter a CrashLoopBackOff state. The container starts, fails, restarts, and repeats.
Steps to troubleshoot:
Check exit code and reasons:
kubectl describe pod <pod-name>
Inspect previous logs:
kubectl logs -p <pod-name> -c <container-name>
This reveals the error before the crash.
Get current status:
kubectl get pod <pod-name>
If logs show nothing, confirm the image entrypoint is valid and that the container is not exiting immediately.
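To read the last exit code and reason without scanning the full describe output, a jsonpath query like this can help (a sketch that assumes a single-container pod):
# Exit code and reason recorded for the previous (crashed) container instance
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}{" "}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'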
Logs from Pods in Other Namespaces
By default, kubectl logs only looks in the active namespace. If the pod is deployed elsewhere, the command will fail.
How to fix it:
Specify the namespace:
kubectl logs <pod-name> -n <namespace>
Example:
kubectl logs database-pod-lmn789-opq56 -n prod-db
To avoid repeating -n, change your context:
kubectl config set-context --current --namespace=prod-db
Where Does kubectl logs Retrieve Logs From?
Understanding where kubectl logs retrieves its information is crucial for effective log management and troubleshooting in Kubernetes:
Container Runtime Log Files
- Local Node Storage: When you run kubectl logs, it reads log files directly from the local storage of the node where the pod is running. These log files are typically stored in a directory on the node's filesystem.
- Container Runtime: The exact location and format of these log files depend on the container runtime being used (e.g., Docker, containerd, CRI-O). For example (see the sketch after this list):
- Docker: /var/lib/docker/containers/<container-id>/<container-id>-json.log
- containerd: /var/log/pods/<pod-uid>/<container-name>/0.log
- CRI-O: /var/log/pods/<pod-namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
- Log Rotation: The container runtime usually handles log rotation to prevent log files from consuming too much disk space. The rotation policy can affect how far back you can retrieve logs using kubectl logs.
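If you have shell access to a node, you can confirm where the runtime writes these files; the exact layout varies by runtime and kubelet version, so treat these paths as illustrative:
# On the node itself (not inside a pod) – per-pod log directories
sudo ls /var/log/pods/
# Per-container symlinks that node-level agents typically tail
sudo ls -l /var/log/containers/
# Peek at recent entries for one container (illustrative path)
sudo tail -n 20 /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log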
Kubernetes API Server
- API Request: When you run kubectl logs, it sends a request to the Kubernetes API server.
- Kubelet Communication: The API server then communicates with the kubelet on the specific node where the pod is running.
- Kubelet Retrieval: The kubelet accesses the container runtime to fetch the requested logs.
- Stream Back: The logs are then streamed back through the API server to your kubectl client.
stdout and stderr Streams
- Standard Streams: Kubernetes captures logs that are written to the stdout (standard output) and stderr (standard error) streams of the container processes.
- Application Logging: This means that your application needs to write its logs to stdout/stderr for them to be accessible via kubectl logs. Logging to files inside the container won't be captured unless you've set up a logging sidecar.
Logging Drivers
- Container Runtime Logging: The container runtime uses logging drivers to capture the stdout/stderr streams and write them to files.
- Driver Configuration: The logging driver can be configured to affect how logs are stored and rotated. For example, Docker's JSON file driver stores logs in JSON format with metadata.
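On a Docker-based node, you can check which driver is active and whether rotation options are set (a sketch; /etc/docker/daemon.json is Docker’s default config path and may not exist on every host):
# Show the logging driver Docker is using (json-file, journald, etc.)
docker info --format '{{.LoggingDriver}}'
# Rotation options such as max-size and max-file live here, if configured
cat /etc/docker/daemon.json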
Limitations of kubectl logs in Production
kubectl logs is valuable for immediate debugging but has several limitations when used at scale:
- Ephemeral storage – Logs are tied to pod lifecycles and node storage. If a pod is evicted or a node fails, logs may be lost.
- Lack of centralization – The command only retrieves logs from individual pods. It cannot aggregate across replicas, namespaces, or clusters.
- Limited retention – Container runtimes rotate logs based on size or age. Kubernetes does not guarantee long-term log preservation.
- No advanced analysis – Output is plain text with no parsing, indexing, or anomaly detection.
- API server load – Continuous streaming with kubectl logs -f across many pods can create unnecessary overhead on the control plane.
For these reasons, kubectl logs should be treated as a first-line troubleshooting tool, not as a production-grade logging strategy.
Centralized Logging Architectures
A production-ready setup usually follows a three-tier approach:
- Log collection agents – DaemonSets such as Fluent Bit, Fluentd, or Filebeat run on each node and capture container logs from stdout/stderr (a Fluent Bit install is sketched after this list).
- Aggregation and storage layer – Collected logs are forwarded to a central datastore for indexing and retention. Examples include Elasticsearch, Loki, cloud-native log stores, or an OpenTelemetry-native backend like Last9, which can ingest logs directly over OTLP.
- Visualization and analysis – Tools like Kibana (for Elasticsearch) and Grafana (for Loki) are common, but OTel-native platforms such as Last9 integrate visualization directly with metrics and traces. This makes it possible to correlate logs with distributed traces and high-cardinality metrics in the same system.
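As a concrete example of the collection tier, Fluent Bit can be installed as a node-level DaemonSet with its upstream Helm chart (a sketch; the logging namespace and any output configuration are choices you’d make for your cluster):
# Add the upstream Fluent Bit chart and deploy it to every node
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace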
This design ensures logs are:
- Preserved beyond pod/node lifetimes.
- Queryable across services and namespaces.
- Correlated with metrics and traces for faster debugging.
- Integrated with monitoring and alerting pipelines.
Cloud-Managed Logging Services
Major cloud providers integrate logging natively with Kubernetes clusters:
- Google Cloud Logging (Stackdriver) – Automatically collects logs from GKE clusters. Supports advanced queries, retention policies, and metric extraction.
- AWS CloudWatch Logs – EKS clusters can stream logs to CloudWatch for long-term storage, dashboards, and alerts.
- Azure Monitor Logs – AKS clusters integrate with Log Analytics Workspace for querying, visualization, and correlation with metrics.
These services remove the operational burden of maintaining log infrastructure while offering durability, scalability, and ecosystem integration.
Kubernetes Logging at Last9
We built logging in Last9 to behave like metrics and traces — OTel-native, full-fidelity, and easy to run in Kubernetes.
Here’s how we approach it:
- Collector on every node – We run the OpenTelemetry Collector as a DaemonSet. It picks up container logs from stdout/stderr using the filelog receiver, so every pod is covered automatically.
- Smart transformations – Pod labels and metadata get mapped into service.name and other OTel attributes. If you want to add context like deployment_environment, you can do that right in the pipeline.
- Built-in guardrails – The Collector comes with memory limits and batching tuned for log traffic. That way, even when log volume spikes, you don’t run out of RAM or choke the pipeline.
- Simple setup – We ship a Helm values file that brings up the Collector with log collection, Kubernetes metadata, and resource controls enabled out of the box.
- Unified observability – Once logs reach Last9, they sit alongside metrics and traces. You can filter by attributes like feature.flag or customer.segment and jump between signals without losing context.
The result: you don’t lose logs when pods restart, you don’t hit schema translation issues, and you don’t need to guess whether your attributes survived ingestion. Logs, metrics, and traces work together — exactly the way OTel intended.
Start for free now, or if you'd like a detailed walkthrough, book some time with us!
A Quick Reference (Cheat Sheet)
# Basic log viewing
kubectl logs <pod-name>
# View logs from a specific container
kubectl logs <pod-name> -c <container-name>
# Tail logs in real-time
kubectl logs <pod-name> -f
# View previous instance logs
kubectl logs <pod-name> --previous
# Limit log output
kubectl logs <pod-name> --tail=100
# View logs from multiple pods
kubectl logs -l app=myapp
# Include timestamps
kubectl logs <pod-name> --timestamps
# View logs since a specific time
kubectl logs <pod-name> --since=1h
# View logs since an absolute timestamp (RFC3339)
kubectl logs <pod-name> --since-time=2023-08-24T10:00:00Z
# Limit output size in bytes
kubectl logs <pod-name> --limit-bytes=100000
# View logs from all containers in the pod
kubectl logs <pod-name> --all-containers=true
FAQs
How do you get logs from a pod?
Use the kubectl logs <pod-name> command. If the pod has multiple containers, specify the container with -c <container-name>.
Where are the logs stored in a Kubernetes pod?
By default, container logs are written to files under /var/log/containers/ or /var/log/pods/ on the node’s filesystem. Kubernetes itself does not aggregate or retain logs permanently.
How does logging work in Kubernetes?
Applications write to stdout and stderr. The container runtime captures these streams and writes them to files on the node. Tools like kubectl logs or logging agents (Fluent Bit, Filebeat, etc.) then access and forward them.
What is the kubectl logs command?
A command-line utility to fetch logs from a specific pod (and optionally, a specific container). It supports flags like -f for streaming, -p for previous instance logs, and --tail for limiting lines.
How to monitor Kubernetes cluster health?
Cluster health is typically monitored using metrics (CPU, memory, pod status) exposed via APIs and scraped by tools like Prometheus. Autoscaling is handled by the Horizontal Pod Autoscaler (HPA) or Vertical Pod Autoscaler (VPA), which adjust resources based on metrics.
How does a centralized logging platform help with security beyond just debugging?
Centralized logging provides audit trails, anomaly detection, and tamper-resistant storage. This enables detection of suspicious activity, correlation of security events across services, and compliance with regulatory requirements.
What Are Kubernetes Pod Logs?
Pod logs are the stdout and stderr output streams from containers running inside the pod. They reflect runtime behavior and are the primary source for debugging and monitoring application activity.
How can I retrieve application logs from a specific Kubernetes pod?
Run:
kubectl logs <pod-name>
Add -c <container-name> if the pod has multiple containers, or -n <namespace> if it is in a non-default namespace.
How to get application logs from a Docker container?
For containers running under Docker, use:
docker logs <container-id>
In Kubernetes, the equivalent is kubectl logs <pod-name>.
How to export logs from a Kubernetes pod?
Redirect the output of kubectl logs to a file:
kubectl logs <pod-name> > pod-logs.txt
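To export logs from every replica of a service in one pass, a small shell loop works (a sketch; app=myapp is a placeholder label):
# Write each matching pod's logs to its own file
for pod in $(kubectl get pods -l app=myapp -o name); do
  kubectl logs "$pod" > "$(basename "$pod").log"
done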
For long-term export, configure a centralized logging solution (e.g., Fluent Bit + Elasticsearch, or an OpenTelemetry backend like Last9).