If you’ve worked with Kubernetes, you know logs are essential for understanding what’s happening inside your clusters. But unlike traditional servers, Kubernetes brings its own logging challenges: pods frequently start and stop, containers restart regularly, and logs stored locally can be lost quickly.
Because of this, managing logs in Kubernetes requires a different approach. In this blog, we’ll go through what Kubernetes logs are, how they differ from traditional logs, and practical ways to handle them so you can focus on solving issues with reliable information at hand.
What Are Kubernetes Logs?
Kubernetes logs are the output generated by your containers and the system components that run your cluster. Typically, these are the standard output (stdout) and standard error (stderr) streams that your applications write to. When issues arise, these logs provide valuable information—errors, warnings, or general messages that help you understand what your application is doing.
The challenge is that pods are ephemeral. When a pod stops running or is rescheduled to a different node, any logs stored locally on that pod are lost unless you have a system in place to collect and store them elsewhere.
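You can see both streams end to end with a throwaway pod (a quick sketch, assuming a cluster you’re free to experiment in; the pod name log-demo and the public busybox image are just examples):
kubectl run log-demo --image=busybox --restart=Never -- sh -c 'echo "hello from stdout"; echo "oops, an error" >&2'
kubectl logs log-demo
Both lines show up in the output, because kubectl logs captures stdout and stderr together. Clean up with kubectl delete pod log-demo when you’re done.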
How to Get Logs from Your Pods: The Basics
When working with Kubernetes, the primary tool for viewing logs is the `kubectl logs` command. It fetches the output from your containers, giving you details about what your applications are doing.
The most straightforward usage is:
kubectl logs <pod-name>
This command retrieves logs from the main container in the specified pod. However, many pods run multiple containers. In that case, you need to tell Kubernetes exactly which container’s logs you want by adding the `-c` flag:
kubectl logs <pod-name> -c <container-name>
For example, if your pod runs both an application container and a sidecar container (like a logging agent or proxy), this lets you pick which one’s logs to see.
Pods can also restart due to crashes or updates, which means their current logs only show the latest container instance. To see logs from the previous instance before it restarted, use the `--previous` flag:
kubectl logs <pod-name> --previous
This is especially useful when troubleshooting crashes or failures that cause pods to restart unexpectedly.
Remember that `kubectl logs` works on one pod at a time. If you need to aggregate logs from multiple pods or across your entire cluster, you’ll want to use a centralized logging solution. But for quick, on-the-spot checks, `kubectl logs` is the go-to command.
How to Narrow Down Kubernetes Logs with kubectl
When you’re digging through logs, pulling up everything can be overwhelming and time-consuming. Kubernetes provides several options to help you narrow down the logs you see, so you can focus on what matters.
Here are some key flags to use with `kubectl logs` that save time during debugging:
Limit the total size of log output:
kubectl logs nginx --limit-bytes=1048576
This caps the log output at 1 MiB (1,048,576 bytes), which can prevent overwhelming your terminal or tooling.
Add timestamps to each log line:
kubectl logs nginx --timestamps=true
This prepends the exact timestamp to each log entry, which helps when correlating logs with events or metrics.
Show logs since a specific timestamp:
kubectl logs nginx --since-time=2024-12-01T10:00:00Z
If you know exactly when an incident started, this option fetches logs starting from that date and time.
Show logs from the last hour:
kubectl logs nginx --since=1h
This filters logs to only those generated in the last hour. You can adjust the time frame by changing the value (e.g., `30m` for 30 minutes).
Show only the last 50 lines:
kubectl logs nginx --tail=50
This limits the output to the most recent 50 lines instead of dumping the entire log history. It’s handy when you just want to see the latest events.
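These flags also combine, which is often the fastest way to zero in on a problem. For example (a sketch; adjust the pod name and time window to your situation):
kubectl logs nginx --since=15m --tail=100 --timestamps=true
This shows at most the last 100 lines from the past 15 minutes, each prefixed with its timestamp.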
How to Retrieve Logs Across Pods and Deployments in Kubernetes
It’s common to have many pods running the same application or service. Checking logs one pod at a time can quickly become tedious, but `kubectl` offers options to fetch logs from multiple pods or containers in one go.
Here are some commands that help you gather logs efficiently across pods and containers:
Limit concurrent log requests to avoid overloading the Kubernetes API:
kubectl logs -l app=nginx --max-log-requests=10
When pulling logs from many pods at once, this option restricts the number of simultaneous API calls to avoid overwhelming the cluster or your local machine.
Get logs from all pods matching a label selector:
kubectl logs -l app=nginx --all-containers=true
This fetches logs from all pods with the label `app=nginx`, and from all containers within those pods. Labels let you target groups of pods easily without specifying each pod name.
Get logs from all pods in a deployment:
kubectl logs deployment/nginx --all-pods=true
Instead of targeting individual pods, this command pulls logs from every pod managed by the deployment named `nginx`. Great for seeing cluster-wide behavior of an app.
Get logs from all containers in a single pod:
kubectl logs nginx --all-containers=true
This retrieves logs from every container within the specified pod. Useful when your pod runs multiple containers and you want to see logs from all of them at once.
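These options combine with the filtering flags from the previous section, too. For example (a sketch, assuming pods labeled app=nginx):
kubectl logs -l app=nginx --all-containers=true --tail=50 --max-log-requests=10
This grabs the last 50 lines from every container of every matching pod while capping concurrent API requests.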
Using kubectl to Stream Logs Live from Your Cluster
Sometimes you need to watch logs live — for example, when debugging an issue as it unfolds or verifying that a deployment is behaving correctly. Kubernetes provides options to stream logs in real time using `kubectl`.
Follow logs from multiple pods with source identification:
kubectl logs -f -l app=nginx --prefix=true
When streaming logs from multiple pods selected by a label, `--prefix=true` adds the pod name as a prefix to each log line. This helps you tell which pod produced which log entry, keeping things clear when watching multiple sources at once.
Follow logs and keep going even if errors happen:
kubectl logs -f nginx --ignore-errors=true
Sometimes network hiccups or pod restarts cause the log stream to break. Adding `--ignore-errors=true` makes the stream more resilient, continuing to fetch logs despite temporary issues.
Follow logs from a single pod:
kubectl logs -f nginx
The `-f` (or `--follow`) flag streams the logs continuously, showing new log lines as they’re generated. This is similar to `tail -f` on a log file.
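Streaming also works with higher-level resources, not just individual pods. For example (a sketch; note that kubectl picks one pod of the deployment to follow and tells you which):
kubectl logs -f deployment/nginx --timestamps=true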
Access Logs from Kubernetes Deployments, Jobs, and StatefulSets
The `kubectl logs` command works with more than just individual pods. You can also pull logs from higher-level Kubernetes resources that manage multiple pods:
Get logs from a StatefulSet:
kubectl logs statefulset/database
Fetches logs from a pod in the `database` StatefulSet; kubectl picks one pod and tells you which (add `--all-pods=true` on recent kubectl versions to cover every pod). Handy for stateful applications like databases.
Get logs from a Kubernetes Job:
kubectl logs job/backup-job
Useful for inspecting logs from batch or one-time jobs.
Get logs from a deployment:
kubectl logs deployment/nginx
This retrieves logs from one pod managed by the `nginx` deployment; kubectl prints which pod it selected. Add `--all-pods=true` (as shown earlier) to aggregate across all replicas.
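One more trick for StatefulSets: their pods have stable, ordinal names, so you can target a specific replica directly (a sketch, assuming the database StatefulSet from above):
kubectl logs database-0 --tail=50
Replica names follow the <statefulset-name>-<ordinal> pattern, so database-0 is the first pod.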
Step-by-Step Log Commands for Debugging Crashing Pods
When a pod crashes or misbehaves, here’s a straightforward workflow to help you troubleshoot:
Check pod status and events first to understand what’s happening:
kubectl describe pod <pod-name>
This shows detailed information about the pod’s state, recent events (like crashes or restarts), and resource usage.
Get logs from the container’s previous instance (before it crashed):
kubectl logs <pod-name> --previous
This fetches logs from the last terminated container, which often holds clues about why the pod restarted.
If the pod runs multiple containers, specify which container’s logs to check:
kubectl logs <pod-name> -c <container-name> --previous
For pods running multiple containers, here’s how to manage logs efficiently:
Get logs from all containers in the pod at once:
kubectl logs <pod-name> --all-containers=true
Get logs from a specific container:
kubectl logs <pod-name> -c <container-name>
List all containers inside a pod:
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
This outputs the names of all containers in the pod so you know which one to check.
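If you’d rather see the tail of every container without eyeballing the list first, a small shell loop does the job (a sketch; substitute your pod name):
for c in $(kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'); do
  echo "--- $c ---"
  kubectl logs <pod-name> -c "$c" --tail=20
done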
Why Relying on Local Logs Isn’t Enough in Kubernetes
It might seem straightforward to write logs to files inside a container or directly on a node. But in Kubernetes, that approach is unreliable. Pods are designed to be temporary—they can restart, move to different nodes, or be deleted at any time. When that happens, any logs stored locally vanish along with the pod.
Because of this, most Kubernetes setups use logging agents running on each node. These agents watch container logs as they’re generated and forward them to a centralized system where the data is safely stored and easier to search. This way, your logs survive pod restarts and node changes, giving you a consistent view of what’s happening in your cluster.
Challenges You Might Face with Kubernetes Logging
- Pods and containers are temporary: They restart, move between nodes, or get deleted. Any logs stored locally inside them disappear unless collected centrally, which can leave gaps in your troubleshooting.
- High volume of logs: With many pods running simultaneously, logs quickly add up. This can strain storage systems and slow down searches if your logging solution isn’t built to scale.
- High cardinality from labels and metadata: Adding context like pod names, namespaces, or unique request IDs is helpful, but leads to a large number of unique values. This “high cardinality” makes storing and querying logs more complex and resource-intensive.
- Resource consumption by log collectors: Agents that gather logs on each node must be efficient. If they consume too much CPU, memory, or network bandwidth, they risk impacting your cluster’s performance and stability.
How to Set Up Reliable Log Collection in Kubernetes
The most common and reliable way to collect logs in Kubernetes is by running a log collector as a DaemonSet. This means you deploy an agent on every node in your cluster, and that agent continuously watches the logs generated by containers on that node.
These agents, like Fluent Bit or Fluentd, read logs directly from the node’s filesystem, typically from directories such as `/var/log/containers/`. As they collect logs, they add useful metadata like the pod name, namespace, and container name to help you filter and search later.
Once enriched, the logs get forwarded to a centralized storage or analysis system. This setup ensures your logs are captured reliably, survive pod restarts, and are easy to query when you need them.
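If you want to try this yourself, the official Fluent Bit Helm chart deploys the agent as a DaemonSet with sensible defaults (a minimal sketch, assuming Helm is installed and your kubeconfig points at the right cluster):
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit
Out of the box the agent tails container logs on each node; you still need to configure an output so the logs land in your centralized backend.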
What to Consider When Building Your Kubernetes Logging Pipeline
When designing your Kubernetes logging, keep these key factors in mind:
- Structured Logs: Using formats like JSON makes logs easier to parse and search compared to plain text. Structured logs allow you to query specific fields and get more precise results (see the example after this list).
- Metadata Tagging: Adding details such as pod name, namespace, or container helps you filter and organize logs effectively. This context is essential when you’re troubleshooting or monitoring.
- Filtering and Rate Limiting: Not all logs are equally important. Filtering out noisy or verbose messages and limiting log volume helps prevent your logging system from getting overwhelmed and keeps costs manageable.
- Managing High Cardinality: Be cautious about the number of unique labels or IDs you include in logs. High cardinality can slow down queries and increase storage needs, so tracking only what’s necessary is key for performance.
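To make the structured-logs point concrete: if your application emits one JSON object per line, you can filter on fields straight from the command line (a sketch, assuming your log format includes a level field and you have jq installed locally):
kubectl logs <pod-name> --tail=200 | jq 'select(.level == "error")'
With plain-text logs you’d be stuck pattern-matching with grep; structured fields make the same query exact.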
Why It’s Important to Combine Metrics and Traces with Logs
Logs give you the detailed, line-by-line story of what’s happening inside your applications. But they only show part of the picture.
Metrics provide a broader view by tracking trends over time, such as spikes in error rates, CPU usage, or response latency. They help you spot when something’s going wrong before users notice.
Traces add another layer by showing how individual requests move through different services or components. This lets you pinpoint exactly where delays or errors occur in complex systems.
When you combine logs, metrics, and traces, you get a complete, connected view of your system’s behavior, and Last9 helps you view everything in one place.
Quick Reference: Common kubectl logs Scenarios
| Scenario | Command | Why Use It |
|---|---|---|
| Pod won't start | `kubectl logs <pod> --previous` | See logs from crashed container |
| Too many logs | `kubectl logs <pod> --tail=20` | Just see recent entries |
| Find recent errors | `kubectl logs <pod> --since=10m \| grep -i error` | Filter time + content |
| Multiple apps failing | `kubectl logs -l app=myapp --all-containers` | Bulk troubleshooting |
| Live debugging | `kubectl logs -f <pod> --timestamps` | Real-time with timing |
| Multi-container pod | `kubectl logs <pod> --all-containers` | See all container logs |
| Deployment issues | `kubectl logs deployment/myapp --all-pods=true` | Get logs from all replicas |
Kubernetes Log Sources
| Log Type | What It Shows | How to Access |
|---|---|---|
| Pod Logs | Output from containers | `kubectl logs <pod>` |
| Node System Logs | Node-level OS and system events | Access via the node or your cloud provider |
| Kubernetes System Logs | Control plane and kubelet logs | Varies; typically on control plane nodes or via your provider |
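For the last two rows, access depends on your environment. On a node you can reach over SSH, kubelet logs are usually available through systemd (a sketch, assuming a systemd-based node OS):
journalctl -u kubelet --since "1 hour ago"
Managed platforms like EKS, GKE, or AKS typically expose control plane logs through their own logging services instead.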
Troubleshooting Common Issues with kubectl logs
If you run `kubectl logs` and don’t see what you expect, here are some common reasons and simple fixes:
Logs not showing up
- The pod might still be starting. You can tell Kubernetes to wait longer with `--pod-running-timeout=30s`.
- The container inside the pod may not have started yet. Check the pod’s status with `kubectl describe pod <pod-name>`.
- If your pod has more than one container, you might be looking at the wrong one. Use `--all-containers=true` to see logs from all containers in that pod.
The logs are too long or overwhelming
- Use `--tail=N` to see only the last N lines of logs (like `--tail=100`).
- Use `--since=duration` to restrict logs to a recent time frame, for example `--since=30m` for the last 30 minutes.
- Use `--limit-bytes` to cap the total size of the log output, which prevents your terminal from getting flooded.
Log streaming stops unexpectedly
- Add `--ignore-errors=true` to keep the stream running even if some errors happen.
- Make sure you have permission to view logs in the namespace and for the pod you’re targeting (see the check below).
Final Thoughts
Managing Kubernetes logs well is crucial when things go wrong. Don’t rely on local logs alone—centralize your log collection to avoid losing important data. Enrich your logs with metadata to make searching easier, and keep an eye on log volume and cardinality to keep your logging system efficient.
FAQs About Kubernetes Logs
Q: Can I get logs from all my pods at once?
A: Yes! Use `kubectl logs -l <label-selector> --all-containers=true` for label-based selection, or `kubectl logs deployment/<name> --all-pods=true` for deployment-wide logs.
Q: How do I make sure I don't lose logs when pods crash?
A: Use log collectors running on nodes that ship logs off before pods go away, or make sure you check `--previous` logs immediately after crashes.
Q: What's high cardinality in logs?
A: It means having many unique values (like user IDs) in log labels, which can slow down storage and queries if not managed.
Q: How are Kubernetes system logs different from app logs?
A: System logs track cluster components and node activity, while app logs come from your containers. System logs are usually found on master nodes or through your cloud provider.
Q: Why do my kubectl logs commands sometimes show nothing?
A: Common causes: the pod isn't running yet, the container hasn't started, the logs are in a different container, or you need higher permissions. Try `--all-containers` and check `kubectl describe pod` first.