Efficient resource utilization is key to running Kubernetes workloads smoothly. Whether you're troubleshooting performance issues, optimizing resource requests and limits, or keeping an eye on cluster health, the kubectl top command is an essential tool. It provides real-time CPU and memory usage metrics for nodes and pods, helping you make informed decisions about scaling and resource allocation.
This guide will walk you through everything you need to know about kubectl top, including how it works, prerequisites, command usage, troubleshooting, and best practices.
What Is kubectl top?
kubectl top is a built-in Kubernetes CLI command that retrieves real-time resource usage statistics for pods and nodes. It helps administrators and developers quickly assess cluster resource consumption.
Unlike kubectl get or kubectl describe, which provide configuration details, kubectl top focuses on live metrics. This makes it particularly useful for:
- Monitoring CPU and memory usage
- Detecting resource bottlenecks
- Troubleshooting performance issues
- Right-sizing resource requests and limits
- Avoiding out-of-memory (OOM) errors
Prerequisites for Using kubectl top
Before running kubectl top, ensure the following requirements are met:
- Metrics Server installed: the kubectl top command relies on the Kubernetes Metrics Server to collect and expose resource metrics.
- Proper Role-Based Access Control (RBAC) permissions: your user or service account must have the necessary permissions to access resource metrics. You may need to configure RBAC policies if access is restricted.
You can check whether the Metrics Server is running using:
kubectl get deployment metrics-server -n kube-system
If it's missing, install it using:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
How to Use kubectl top
Checking Node Resource Usage
To view CPU and memory usage for all nodes in the cluster:
kubectl top nodes
Example Output:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-node-1 250m 12% 800Mi 40%
k8s-node-2 180m 9% 600Mi 30%
This output shows CPU usage in millicores (m) and memory usage in mebibytes (Mi).
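As a quick sanity check on the units, millicores divide a core into thousandths, so the sample rows above convert directly. A small offline sketch (the sample data mirrors the example output, so no cluster is needed):

```shell
# Convert the CPU column of sample `kubectl top nodes` output from
# millicores to cores (1000m = 1 core).
printf 'k8s-node-1 250m 12%% 800Mi 40%%\nk8s-node-2 180m 9%% 600Mi 30%%\n' |
  awk '{ cpu = $2; sub(/m$/, "", cpu); printf "%s uses %.2f cores\n", $1, cpu / 1000 }'
# → k8s-node-1 uses 0.25 cores
# → k8s-node-2 uses 0.18 cores
```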
Checking Pod Resource Usage
To view resource usage per pod:
kubectl top pods --all-namespaces
Example Output:
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default my-app-564bcd47d7-hk5gn 120m 256Mi
default my-db-789d9c6c4f-tn7mv 300m 512Mi
By default, kubectl top pods only shows pods in the current namespace. Use --all-namespaces to see metrics for all namespaces.
Sorting and Filtering
You can sort pods by resource usage to quickly identify outliers:
kubectl top pods --sort-by=cpu
Or filter by a specific namespace:
kubectl top pods -n my-namespace
Interpreting kubectl top Output
The kubectl top command provides real-time CPU and memory usage metrics for Kubernetes nodes and pods.
kubectl top node Output
The kubectl top node command displays resource usage across all nodes in the cluster:
kubectl top node
Example Output:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node-1 850m 42% 6Gi 75%
node-2 400m 20% 3Gi 38%
node-3 1200m 60% 7Gi 80%
How to Interpret This Output:
- NAME: The name of the node.
- CPU(cores): The current CPU usage in millicores (m). 1000m equals 1 core.
- CPU%: The percentage of total CPU capacity used on the node.
- MEMORY(bytes): The current memory usage, usually displayed in mebibytes (Mi) or gibibytes (Gi).
- MEMORY%: The percentage of total memory capacity used.
Key Takeaways:
- A node with high CPU% or MEMORY% (above 80%) may be overloaded and could require additional resources.
- A node with low CPU% or MEMORY% (below 20%) might be underutilized, indicating room for workload redistribution.
- If a node reaches 100% CPU or memory usage, new workloads may fail to schedule or experience performance degradation.
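The 80% rule of thumb above is easy to script. A minimal sketch using the sample rows from the output above as stand-in data (the threshold and message wording are illustrative, not Kubernetes defaults):

```shell
# Flag nodes whose CPU% or MEMORY% is at or above 80, from sample
# `kubectl top node` output (columns: NAME, CPU, CPU%, MEMORY, MEMORY%).
printf 'node-1 850m 42%% 6Gi 75%%\nnode-2 400m 20%% 3Gi 38%%\nnode-3 1200m 60%% 7Gi 80%%\n' |
  awk '{ cpu = $3; mem = $5; sub(/%/, "", cpu); sub(/%/, "", mem);
         if (cpu + 0 >= 80 || mem + 0 >= 80) print $1, "is near capacity" }'
# → node-3 is near capacity
```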
kubectl top pod Output
The kubectl top pod command provides resource usage for each pod in the cluster or a specific namespace:
kubectl top pod -n <namespace>
Example Output:
NAME CPU(cores) MEMORY(bytes)
pod-a 150m 500Mi
pod-b 600m 2Gi
pod-c 900m 1Gi
How to Interpret This Output:
- NAME: The name of the pod.
- CPU(cores): The total CPU used by the pod across all its containers.
- MEMORY(bytes): The total memory used by the pod across all its containers.
Key Takeaways:
- If a pod’s CPU usage is at or near its limit, it may be experiencing CPU throttling, affecting performance.
- If a pod’s memory usage is close to its limit, it may be at risk of OOM (Out of Memory) kills, where Kubernetes terminates processes to free up memory.
- A pod with low resource usage may have over-allocated requests, leading to wasted resources.
To get more detailed insights, check container-level metrics using:
kubectl top pod <pod-name> --containers -n <namespace>
Example Output:
NAME CONTAINER CPU(cores) MEMORY(bytes)
pod-a app-container 100m 300Mi
pod-a sidecar 50m 200Mi
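Container rows sum to the pod-level figure reported by kubectl top pod; the sample rows above (100m + 50m) add back up to the 150m shown for pod-a earlier. A small offline check:

```shell
# Sum per-container CPU back into a pod total, using the sample
# `--containers` rows above as offline input.
printf 'pod-a app-container 100m 300Mi\npod-a sidecar 50m 200Mi\n' |
  awk '{ cpu = $3; sub(/m$/, "", cpu); total += cpu }
       END { printf "pod-a total: %dm\n", total }'
# → pod-a total: 150m
```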
Comparing Usage with Requests and Limits
The kubectl top command only shows actual usage, not the configured requests and limits. To compare, use:
kubectl describe pod <pod-name> -n <namespace>
Example Output:
Containers:
app-container:
Requests:
cpu: 250m
memory: 512Mi
Limits:
cpu: 500m
memory: 1Gi
How to Interpret This:
- If CPU or memory usage exceeds requests, it may indicate under-provisioning, leading to performance issues.
- If usage is close to the limits, the container may be throttled or terminated when resources are constrained.
- If usage is significantly lower than requests, the pod may be over-provisioned, wasting resources.
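The three-way comparison above can be scripted. A hedged sketch with sample numbers standing in for the live values (in practice they would come from kubectl top pod and the pod spec; the 10%-of-limit threshold is illustrative):

```shell
# Compare observed usage against the configured request and limit.
# All three values are sample data for this sketch.
usage_m=300; request_m=250; limit_m=500

if [ "$usage_m" -gt "$request_m" ]; then
  echo "usage exceeds request: consider raising the CPU request"
fi
# Integer arithmetic: usage >= 90% of limit signals throttling risk.
if [ $((usage_m * 10)) -ge $((limit_m * 9)) ]; then
  echo "usage within 10% of limit: throttling risk"
fi
```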
Detecting Performance Issues Using kubectl top
The kubectl top command helps identify performance issues by highlighting resource constraints at the node or pod level. Below are common issues, their symptoms in kubectl top, and possible solutions.
High CPU Usage
Indications:
- Nodes consistently show CPU usage above 80%.
- Pods frequently hit their CPU limits, leading to throttling.
Next Steps:
- Identify which pods are consuming the most CPU:
kubectl top pod --all-namespaces | sort -k3 -nr | head -10
- Check pod resource requests and limits:
kubectl describe pod <pod-name> -n <namespace>
- Adjust CPU requests and limits based on workload needs.
- If necessary, scale up the deployment or add more nodes.
High Memory Usage
Indications:
- Nodes show memory usage above 80%.
- Pods get OOMKilled (Out of Memory killed) frequently.
Next Steps:
- Identify high-memory-consuming pods:
kubectl top pod --all-namespaces | sort -k4 -nr | head -10
- Inspect pod logs for memory-related errors:
kubectl logs <pod-name> -n <namespace>
- Increase memory limits for affected pods if necessary.
- Analyze the application for potential memory leaks.
Node Resource Saturation
Indications:
- Nodes consistently operate at nearly 100% CPU or memory.
- New pods fail to schedule due to insufficient resources.
Next Steps:
- Check node resource usage:
kubectl top node
- If all nodes are saturated, consider adding new nodes or increasing instance sizes in the cluster.
- Use cluster autoscaler to dynamically adjust the node pool.
Pods Not Getting Enough Resources
Indications:
- A pod’s CPU or memory usage is far below its requested amount.
- Nodes appear underutilized despite high resource requests.
Next Steps:
- Compare kubectl top metrics with pod requests and limits:
kubectl describe pod <pod-name> -n <namespace>
- Reduce over-allocated requests to free up resources for other workloads.
- Enable Vertical Pod Autoscaler (VPA) to adjust requests dynamically.
Advanced Usage of kubectl top
While kubectl top is often used for quick insights into resource usage, advanced techniques can enhance its effectiveness in large-scale Kubernetes environments. Below are some advanced use cases:
1. Monitoring Resource Consumption Across Namespaces
By default, kubectl top operates within a single namespace. To get a cluster-wide view of pod resource usage, use:
kubectl top pod --all-namespaces
This helps identify outliers across different workloads in a multi-tenant cluster.
2. Sorting and Filtering Resource Usage
The output of kubectl top can be piped to sort or awk for more insightful analysis. For example, to find the most memory-intensive pods (with --all-namespaces, memory is the fourth column):
kubectl top pod --all-namespaces | sort -k4 -nr | head -10
Similarly, to list CPU-heavy nodes:
kubectl top node | sort -k3 -nr
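Column positions matter here: with --all-namespaces the pod listing gains a NAMESPACE column, so CPU is field 3 and memory field 4. The pipelines can be tried on sample data without a cluster:

```shell
# Sample rows in the shape of `kubectl top pod --all-namespaces`:
# NAMESPACE NAME CPU(cores) MEMORY(bytes)
sample='default my-app 120m 256Mi
default my-db 300m 512Mi
kube-system coredns 20m 70Mi'

echo "$sample" | sort -k3 -nr | head -n1   # top CPU consumer (my-db)
echo "$sample" | sort -k4 -nr | head -n1   # top memory consumer (my-db)
# Note: when memory values mix Mi and Gi, GNU sort's -h (human-numeric)
# flag compares unit suffixes more reliably than plain -n.
```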
3. Combining with watch for Real-Time Monitoring
To continuously monitor resource usage, pair kubectl top with watch:
watch -n 5 kubectl top pod --all-namespaces
This updates the output every 5 seconds, helping track sudden resource spikes.
4. Identifying Resource Starvation and Bottlenecks
Comparing pod usage (kubectl top pod) with node capacity (kubectl top node) can help diagnose scheduling issues. If a node is at full capacity while pods request more resources, you might need to scale up or redistribute workloads.
5. Debugging Autoscaling Issues
The Kubernetes Horizontal Pod Autoscaler (HPA) relies on resource metrics. To verify if a pod is under stress before scaling kicks in:
kubectl top pod -n my-namespace
If CPU or memory usage remains high but scaling doesn’t occur, check HPA configurations with:
kubectl get hpa -n my-namespace
6. Monitoring System Daemon Resource Usage
DaemonSets, such as kube-proxy or monitoring agents, can consume unexpected resources. To assess their footprint:
kubectl top pod -n kube-system
If certain system pods consume excessive CPU or memory, tuning requests and limits may be necessary.
7. Customizing Output with JSON/YAML for Automation
For integration with monitoring tools or custom scripts, kubectl top can be combined with kubectl get --raw:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .
This retrieves detailed metrics in JSON format, which can be parsed programmatically for automated alerts or dashboarding.
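As a sketch of what that parsing looks like, here is a trimmed sample response in the shape of the metrics.k8s.io payload (the field names match the API; the values are made up), with a dependency-free sed extraction standing in for jq so it runs offline:

```shell
# Trimmed sample of what /apis/metrics.k8s.io/v1beta1/pods returns.
json='{"items":[{"metadata":{"name":"my-app","namespace":"default"},"containers":[{"name":"app","usage":{"cpu":"120m","memory":"268435456Ki"}}]}]}'

# Pull the container CPU usage out of the JSON (jq would be cleaner,
# but sed keeps this sample dependency-free).
echo "$json" | sed -n 's/.*"cpu":"\([^"]*\)".*/\1/p'
# → 120m
```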
8. Using kubectl top with --containers for Deep Insights
To drill down into per-container resource usage within a pod:
kubectl top pod my-pod --containers
This is useful when debugging multi-container pods, ensuring that resource-hungry containers are properly tuned.
Troubleshooting kubectl top Issues
1. Metrics Server Not Installed or Running
If kubectl top returns an error like:
error: Metrics API not available
Check if the Metrics Server is running:
kubectl get deployment metrics-server -n kube-system
If it's missing, install it as shown in the Prerequisites section.
2. Metrics Server Fails to Start
If the Metrics Server fails due to authentication issues, modify the deployment:
kubectl edit deployment metrics-server -n kube-system
Add the following argument under spec.containers.args:
- --kubelet-insecure-tls
Then restart the Metrics Server:
kubectl rollout restart deployment metrics-server -n kube-system
3. Delayed or Missing Metrics
If kubectl top returns empty or outdated data:
Confirm that the kubelet is exposing metrics:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Ensure the Metrics Server is collecting data by checking logs:
kubectl logs -n kube-system deployment/metrics-server
4. Sudden Performance Drops or High CPU Usage
Check for pods consuming excessive CPU:
kubectl top pod --all-namespaces | sort -k3 -nr | head -10
If a pod is consuming more than expected, inspect its logs and events:
kubectl logs <pod-name> -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
5. High Node Utilization Leading to Scheduling Failures
Monitor node resource usage to identify if a node is at capacity:
kubectl top node
If a node is over-utilized, consider scaling up or adjusting pod scheduling.
6. HPA Not Scaling Pods as Expected
If autoscaling is not occurring despite high resource usage, check both kubectl top and HPA status:
kubectl top pod -n <namespace>
kubectl get hpa -n <namespace>
Verify that HPA targets align with the actual resource usage.
Best Practices for kubectl top
The kubectl top command is a powerful tool for monitoring resource usage in a Kubernetes cluster. However, to use it effectively, follow these best practices:
1. Ensure the Metrics Server is Installed and Running
The kubectl top command relies on the Metrics Server, which is not deployed by default in Kubernetes. Verify its presence with:
kubectl get deployment metrics-server -n kube-system
If it’s missing, install it using:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Ensure the Metrics Server is running properly to avoid "Metrics not available" errors.
2. Use --containers to Get Granular Insights
By default, kubectl top pod shows aggregate resource usage for pods. To analyze resource consumption at the container level within a pod, use:
kubectl top pod --containers
This helps identify performance bottlenecks in multi-container workloads.
3. Monitor Resource Usage Across All Namespaces
To get a cluster-wide view rather than being limited to the current namespace, always include:
kubectl top pod --all-namespaces
Nodes are cluster-scoped, so kubectl top node already covers every node in the cluster; no namespace flag is needed there.
This prevents blind spots when troubleshooting resource usage.
4. Sort and Filter Results for Better Visibility
The default kubectl top output isn’t sorted. Use shell commands like sort and awk to highlight the most resource-intensive workloads:
- Show top CPU-consuming pods:
kubectl top pod --all-namespaces | sort -k3 -nr | head -10
- Show nodes with the highest memory usage:
kubectl top node | sort -k4 -nr
5. Combine with watch for Real-Time Monitoring
To track resource changes continuously, use:
watch -n 5 kubectl top pod --all-namespaces
This updates the output every 5 seconds, helping spot spikes or anomalies in resource usage.
6. Cross-check with Resource Requests and Limits
The kubectl top command shows actual resource usage but does not indicate the resource requests or limits set for pods. To compare usage against limits, use:
kubectl describe pod <pod-name> -n <namespace>
If a pod is consuming close to its limit, Kubernetes might throttle it, causing performance degradation.
7. Use kubectl top to Validate Autoscaling Behavior
The Horizontal Pod Autoscaler (HPA) scales workloads based on resource metrics. Ensure that the reported CPU or memory usage aligns with your HPA triggers:
kubectl top pod -n my-namespace
kubectl get hpa -n my-namespace
If scaling isn’t occurring despite high usage, verify if the HPA is correctly configured.
8. Monitor System Component Resource Usage
System components running in the kube-system
namespace, like kube-proxy
and CoreDNS
, can impact cluster performance. Use:
kubectl top pod -n kube-system
If any system pods consume excessive resources, consider tuning their requests and limits.
9. Automate Monitoring with JSON/YAML Output
For integration with monitoring tools, retrieve raw metrics via:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq .
This enables automation in dashboards, alerts, and performance reports.
10. Don’t Rely Solely on kubectl top for Long-Term Analysis
Since kubectl top provides real-time metrics but lacks historical data, combine it with persistent monitoring tools like Prometheus, Grafana, or Last9 for deeper insights.
The Role of kubectl top in Production Monitoring
kubectl top provides real-time CPU and memory usage metrics for nodes and pods by pulling data from the Kubernetes Metrics Server. It is best suited for:
- Quick spot checks to diagnose sudden performance issues
- Resource optimization by identifying high-consumption workloads
- Debugging the Horizontal Pod Autoscaler (HPA) to ensure it responds correctly to load
- Capacity planning by monitoring node utilization for scaling decisions
However, kubectl top has limitations in production environments:
- It does not store historical data, making it unsuitable for trend analysis
- It only provides CPU and memory metrics, lacking deeper insights into network, disk I/O, or application performance
- Data collection intervals may not always match the precision required for detailed analysis
Conclusion
While kubectl top is a valuable tool for quick resource monitoring in production, it should not be the only method of observability. For real-time insights, troubleshooting, and performance optimization, it should be combined with persistent monitoring solutions like Prometheus, OpenTelemetry, or Last9.