If you’ve ever needed to check how much CPU or memory a Docker container is using, docker stats is the command for the job. It provides real-time resource usage metrics, helping you monitor and troubleshoot containers efficiently.
This guide covers everything you need to know about docker stats: how to use it, what each metric means, and how to integrate it into a larger monitoring setup. We'll also explore advanced techniques for optimizing your workflow and pairing docker stats with enterprise-grade monitoring solutions.
What Is docker stats and Why Should You Care?
The Basics
The docker stats command provides live resource usage metrics for your running containers. Think of it as the Docker equivalent of Linux's top command, but specifically designed for containers.
It shows critical metrics including:
- CPU usage: Both percentage and absolute values
- Memory consumption: Current usage, limits, and cache statistics
- Network activity: Cumulative data received and transmitted
- Disk I/O: Cumulative data read from and written to block devices
- Process information: Number of processes running inside the container
Why Resource Monitoring Is Critical
Unchecked containers can lead to several serious problems:
- System instability: Resource-hungry containers can starve other services
- Performance degradation: Slow applications due to resource contention
- Increased infrastructure costs: Inefficient resource usage leads to over-provisioning
- Application crashes: Out-of-Memory (OOM) kills when containers exceed memory limits
- Troubleshooting complexity: Difficulty pinpointing issues without visibility
Understanding docker stats ensures that you can:
- Optimize resource allocation: Right-size your containers based on actual usage
- Prevent container crashes: Detect resource issues before they cause failures
- Identify bottlenecks: Spot performance issues early in your deployment
- Plan capacity: Make informed decisions about infrastructure scaling
- Validate container specifications: Ensure your resource limits are appropriate
How to Use docker stats for Real-Time Container Monitoring
Basic Usage
To get started with basic monitoring, simply run:
docker stats
This will display a continuously updating table of all running containers along with their resource usage metrics in real time.
Monitoring Specific Containers
To focus on a specific container, use:
docker stats <container_id>
Or for monitoring multiple specific containers:
docker stats <container_id1> <container_id2>
You can use either container IDs or container names in these commands.
Using Container Name Patterns
The docker stats command itself has no --filter option, but you can pass it the output of a filtered docker ps:
docker stats $(docker ps --filter "name=api-" -q)
This shows stats only for running containers whose names contain "api-".
One-Time Snapshot vs. Continuous Monitoring
By default, docker stats provides continuous updates. For a single snapshot:
docker stats --no-stream
This is particularly useful for scripts or when you want to capture a point-in-time measurement.
Breaking Down docker stats Output: What Each Metric Means
When you run docker stats, you'll see output similar to this:
CONTAINER ID   NAME       CPU %    MEM USAGE / LIMIT   MEM %    NET I/O         BLOCK I/O      PIDS
b3d5d9a8e123   my_app     2.55%    150MiB / 500MiB     30.0%    1.2MB / 800kB   0B / 0B        12
f7d6e9c8b123   database   35.20%   1.2GiB / 2GiB       60.0%    5.5MB / 1.2MB   250MB / 15MB   45
Let's examine each column in detail:
CONTAINER ID and NAME
- CONTAINER ID: The unique identifier for the container
- NAME: The human-readable name assigned to the container
CPU %
The percentage of host CPU resources used by the container. This value can exceed 100% on multi-core systems, as it represents the usage across all available cores.
- <10%: Generally indicates low CPU utilization
- 10-50%: Moderate utilization
- 50-80%: High utilization, may need attention
- >80%: Very high utilization, potential performance bottleneck
For multi-core systems, if your container is allowed to use all cores, a value of 200% would mean it's using the equivalent of 2 full CPU cores.
MEM USAGE / LIMIT
This shows current memory consumption and the maximum limit (if set):
- Usage: The actual memory being used by the container, including application memory and any buffers/cache
- Limit: The maximum memory allocation for the container (will show host total if no limit is set)
Memory values are displayed in binary units:
- KiB (kibibytes) = 1,024 bytes
- MiB (mebibytes) = 1,024 KiB
- GiB (gibibytes) = 1,024 MiB
MEM %
Memory usage as a percentage of the total limit available to the container. Critical thresholds:
- <50%: Healthy utilization
- 50-80%: Moderate utilization, monitor for trends
- >80%: High utilization, risk of OOM kills if usage spikes
NET I/O
Network traffic metrics shown as:
[data received] / [data transmitted]
This helps detect:
- Unexpected network traffic spikes
- Data exfiltration concerns
- Network bottlenecks
- Communication patterns between containers
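Because the NET I/O figures are cumulative since container start, measuring a traffic rate requires two snapshots. A minimal sketch (the container name my_app and the 10-second window are illustrative):

```shell
# Take two cumulative NET I/O readings 10 seconds apart; the difference
# between them is the traffic generated during the window.
before=$(docker stats --no-stream --format "{{.NetIO}}" my_app)
sleep 10
after=$(docker stats --no-stream --format "{{.NetIO}}" my_app)
echo "NET I/O over 10s: $before -> $after"
```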
BLOCK I/O
Disk read/write activity shown as:
[data read] / [data written]
High values might indicate:
- Excessive logging
- Inefficient storage operations
- Database write pressure
- File system issues
PIDS
Number of processes running inside the container. This is critical for detecting:
- Process leaks
- Fork bombs
- Inefficient process management
- Container complexity
Typical ranges:
- 1-10: Simple applications
- 10-50: Moderate complexity
- >50: Complex applications or potential issues
Advanced Filtering and Custom Output Formatting for docker stats
Customize Output Fields
The --format flag allows you to specify exactly what information you want to see:
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
Available format placeholders include:
- .Container: Container ID
- .Name: Container name
- .ID: Container ID
- .CPUPerc: CPU percentage
- .MemUsage: Memory usage
- .MemPerc: Memory percentage
- .NetIO: Network I/O
- .BlockIO: Block I/O
- .PIDs: Number of processes
JSON Output for Automation and Integration
For programmatic use or feeding into other tools:
docker stats --format "{{json .}}"
This produces structured JSON output for each container:
{"BlockIO":"0B / 0B","CPUPerc":"0.00%","Container":"b3d5d9a8e123","ID":"b3d5d9a8e123","MemPerc":"0.26%","MemUsage":"21.05MiB / 7.772GiB","Name":"my_app","NetIO":"726B / 0B","PIDs":"1"}
Create Custom Monitoring Scripts
You can combine docker stats with jq for powerful filtering:
docker stats --no-stream --format "{{json .}}" | jq 'select(.CPUPerc | rtrimstr("%") | tonumber > 10)'
This example shows only containers using more than 10% CPU.
Periodic Sampling with Timestamps
Combine with watch and date for timestamped samples:
watch -n 10 'date; docker stats --no-stream'
This captures stats every 10 seconds with timestamps.
How to Interpret Resource Usage Patterns
Understanding resource usage patterns can help you diagnose performance issues and optimize your containers. Here’s a quick breakdown of what different trends might indicate:
CPU Usage Patterns
- Steady high CPU → A compute-heavy task or an infinite loop.
- CPU spikes → Likely batch jobs, cron tasks, or periodic processing.
- Low but constant CPU → An idle service or a background process ticking along.
- Oscillating CPU → Possible resource contention or CPU throttling.
Memory Usage Patterns
- Steady increase → Could be a memory leak—watch out!
- Sawtooth pattern → Expected behavior from garbage collection.
- Sudden drops → The process might be restarting, or the system killed it (OOM event).
- Constant high memory → Inefficient caching or handling large datasets.
Network I/O Patterns
- Constant small transfers → Likely health checks or heartbeat signals.
- Large sudden spikes → Bulk data transfers happening.
- Consistently high bandwidth → This container is probably handling data streaming.
- Asymmetric I/O → Heavy uploads or downloads in progress.
Tip: use docker logs --tail to keep an eye on recent logs efficiently.
Disk I/O Patterns
- High writes, low reads → Common in logging or data collection processes.
- High reads, low writes → Likely caching or content delivery workloads.
- Periodic spikes → Could be scheduled backups, cleanup tasks, or maintenance jobs.
- Constant high I/O → A database or file-intensive application is at work.
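To make any of these patterns visible, you need samples over time rather than a single reading. A minimal sketch of a sampler that appends timestamped readings to a CSV (the output path and 60-second interval are arbitrary choices, not from this article):

```shell
#!/bin/sh
# Append one timestamped line per container to a CSV every 60 seconds,
# so trends like steady memory growth or sawtooth GC become visible.
OUT=/tmp/container-trends.csv
echo "timestamp,name,cpu,mem_pct,net_io,block_io" > "$OUT"
while true; do
  docker stats --no-stream \
    --format "{{.Name}},{{.CPUPerc}},{{.MemPerc}},{{.NetIO}},{{.BlockIO}}" |
  while IFS= read -r line; do
    printf '%s,%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$line" >> "$OUT"
  done
  sleep 60
done
```

Load the resulting CSV into a spreadsheet or plotting tool to inspect the trends described above.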
Comparing docker stats vs. Other Monitoring Tools
While docker stats is excellent for quick inspections, it has several limitations for production environments:
| Feature | docker stats | cAdvisor | Prometheus+Grafana | Last9 |
|---|---|---|---|---|
| Historical data | No | Limited | Yes | Yes |
| Alerting | No | No | Yes | Yes |
| Visualization | Basic terminal | Basic UI | Advanced dashboards | Advanced dashboards |
| API access | Limited | Yes | Yes | Yes |
| Resource overhead | Very low | Low | Medium | Low |
| Learning curve | Minimal | Low | Medium | Minimal |
When to Upgrade Your Monitoring Solution
Consider more robust monitoring when:
- You need historical data: For trend analysis and capacity planning
- You're running multiple hosts: For cluster-wide visibility
- You need alerting: For proactive issue detection
- You require custom dashboards: For executive/team visibility
- You need to correlate with application metrics: For end-to-end monitoring
Enterprise Monitoring Options
- cAdvisor: Google's container advisor provides deep insights into container resource usage
- Prometheus + Grafana: The open-source standard for metric collection and visualization
- Last9: An observability platform with structured container monitoring at scale
- Datadog: Commercial solution with extensive container and Kubernetes monitoring
- New Relic: Full-stack observability with container insights
- Sysdig: Container-native monitoring and security

Best Practices for Efficient Container Resource Management
Setting Resource Limits
Always define resource constraints when running containers:
docker run --memory=512m --memory-swap=1g --cpus=1.5 --pids-limit=50 my_image
Key limit types:
- --memory: Hard memory limit
- --memory-swap: Combined memory and swap limit
- --cpus: Number of CPU cores to allocate
- --pids-limit: Maximum number of processes
Logging Best Practices
- Use log rotation: Configure the Docker logging driver with size limits
- Consider volume-mounted logs: For high-throughput applications
- Set log level appropriately: Debug in development, info or warning in production
- Monitor log volume: Excessive logging impacts disk I/O
Example log configuration:
docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 my_image
Dynamic Resource Management
Use docker update to adjust resources without restarting containers:
docker update --memory 1G --cpus 2 my_container
This allows for:
- Addressing resource bottlenecks in real-time
- Scaling resources during high-traffic periods
- Reducing resource allocation for idle containers
- Testing resource configurations without downtime
Resource Reservation vs. Limits
Understand the difference between reservations and hard limits:
- Reservations: Guaranteed resources (--memory-reservation, --cpu-shares)
- Limits: Maximum allowed resources (--memory, --cpus)
For critical services, set both to ensure consistent performance.
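A sketch of setting both for a critical service (the image name and the specific values are illustrative, not recommendations):

```shell
# Soft floor of 256 MB (reservation) with a hard 512 MB cap, plus a
# relative CPU weight and a hard CPU ceiling; tune values per workload.
docker run -d \
  --memory-reservation=256m \
  --memory=512m \
  --cpu-shares=512 \
  --cpus=1.0 \
  my_image
```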
Advanced docker stats Techniques
Monitoring Container Groups
To monitor containers by label:
docker stats $(docker ps --filter "label=environment=production" -q)
Monitoring Based on Resource Usage
Combine with other commands to find containers based on resource usage:
# Find containers using more than 10% CPU
docker stats --no-stream --format "{{.Container}} {{.CPUPerc}}" |
  awk '{ sub(/%/, "", $2); if ($2 + 0 > 10) print $1 }' |
  while read -r id; do
    docker inspect --format '{{.Name}}' "$id"
  done
Custom Monitoring Metrics
Extract advanced metrics not directly shown in docker stats:
# Get detailed memory information
docker inspect $(docker ps -q) --format '{{.Name}}: {{.HostConfig.Memory}}'
Scheduled Resource Reports
Create a cron job for regular resource snapshots:
# Add to crontab
# */10 * * * * /usr/local/bin/docker-stats-report.sh >> /var/log/container-stats.log 2>&1
With a script like:
#!/bin/bash
echo "=== Container Stats $(date) ==="
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"
Troubleshooting Container Performance Issues
Identifying Memory Leaks
If memory consistently increases without returning to baseline after GC:
# Get container process details
docker top <container_id>
# Examine memory use by process within container
docker exec <container_id> ps -o pid,rss,command ax
Monitor memory usage over time:
watch -n 60 'docker stats --no-stream'
Dealing with CPU Throttling
Examine detailed CPU stats (cgroup v1 path; on cgroup v2 hosts, read /sys/fs/cgroup/cpu.stat instead):
docker exec <container_id> cat /sys/fs/cgroup/cpu/cpu.stat
The nr_throttled and throttled_time counters indicate whether the container is hitting its CPU quota.
Spot containers running near CPU saturation:
docker stats --no-stream | grep -E '[8-9][0-9]\.[0-9]+%|1[0-9]{2}\.[0-9]+%'
Network Bottlenecks
For detailed network inspection (if the image includes netstat):
docker exec <container_id> netstat -tuln
Check network I/O:
docker stats --format "{{.Name}}: {{.NetIO}}"
Disk I/O Issues
Check detailed block I/O stats (cgroup v1 path; on cgroup v2 hosts, read /sys/fs/cgroup/io.stat instead):
docker exec <container_id> cat /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes
Identify containers with heavy disk activity (the sort is lexicographic, so treat it as a rough ordering):
docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}" | sort -k 2 -r
Integrating docker stats with CI/CD Pipelines
Performance Testing
Include resource monitoring in your CI/CD pipelines:
# Example GitLab CI job
performance_test:
  stage: test
  script:
    - docker-compose up -d
    - sleep 10 # Wait for services to initialize
    - docker stats --no-stream > baseline_stats.txt
    - ./run-load-tests.sh
    - docker stats --no-stream > load_stats.txt
    - python3 compare_stats.py baseline_stats.txt load_stats.txt
  artifacts:
    paths:
      - baseline_stats.txt
      - load_stats.txt
      - stats_comparison.html
Fail Builds on Resource Thresholds
Create a script to fail builds if containers exceed resource thresholds:
#!/bin/bash
# check_resource_usage.sh
MAX_CPU=80
MAX_MEM=75

# Read from process substitution rather than a pipe, so that `exit 1`
# terminates the script itself instead of a throwaway subshell.
while IFS=, read -r cpu mem name; do
    cpu_val=${cpu%\%}
    mem_val=${mem%\%}
    if (( $(echo "$cpu_val > $MAX_CPU" | bc -l) )); then
        echo "ERROR: Container $name exceeded CPU threshold: $cpu_val%"
        exit 1
    fi
    if (( $(echo "$mem_val > $MAX_MEM" | bc -l) )); then
        echo "ERROR: Container $name exceeded memory threshold: $mem_val%"
        exit 1
    fi
done < <(docker stats --no-stream --format "{{.CPUPerc}},{{.MemPerc}},{{.Name}}")
Conclusion
docker stats is an invaluable tool for quick, real-time insights into container performance. While it serves as an excellent starting point for resource monitoring, production environments will benefit from more comprehensive solutions.
With the metrics provided by docker stats, you can:
- Optimize your container deployments for maximum efficiency
- Prevent outages caused by resource exhaustion
- Troubleshoot performance issues with data-driven decisions
- Plan capacity based on actual resource requirements
- Develop a monitoring strategy that scales with your container deployment
FAQs
1. How often does docker stats update?
By default, docker stats refreshes every second. You can control the refresh rate using the watch command in Linux:
watch -n 5 docker stats
This would update every 5 seconds.
2. Does docker stats affect container performance?
docker stats has minimal overhead since it reads metrics from the cgroups filesystem, which Docker already uses for resource control. The impact is negligible even in production environments.
3. How can I monitor stopped containers?
docker stats only works on running containers. For historical data on stopped containers:
- Use docker inspect <container_id> for configuration details
- Check logs using docker logs <container_id>
- Implement persistent monitoring with tools like Prometheus
4. How do I calculate the actual CPU cores used?
To convert from percentage to cores:
Cores used = (CPU percentage / 100) × Total available cores
For example, if a container shows 250% CPU usage on an 8-core system, it's using:
(250 / 100) = 2.5 cores
5. Why is my container using more memory than specified?
Several reasons might explain this:
- Kernel memory: Used by the container but not included in user limits
- Cache: File system cache that can be reclaimed if needed
- Buffer: Memory used for I/O operations
- Shared memory: Memory shared between processes
To see a detailed memory breakdown (cgroup v1 path; on cgroup v2 hosts, read /sys/fs/cgroup/memory.stat instead):
docker exec <container_id> cat /sys/fs/cgroup/memory/memory.stat
6. How do I limit CPU or memory usage for a container?
Use resource constraints when running a container:
docker run --memory=512m --memory-reservation=256m --cpus=1 --cpu-shares=1024 my_image
For existing containers:
docker update --cpus=1.5 --memory=1G <container_id>
7. Why do I see high memory usage even after stopping a container?
Docker might not release memory immediately due to:
- Cached data in the host's page cache
- Kernel memory not being fully reclaimed
- Docker daemon holding references
Try restarting the Docker daemon (note that this stops running containers unless live-restore is enabled):
systemctl restart docker
8. How can I export docker stats data to CSV?
Use a combination of formatting and redirection:
docker stats --no-stream --format "{{.Name}},{{.CPUPerc}},{{.MemUsage}},{{.MemPerc}}" > container_stats.csv
Add headers with:
echo "Container,CPU %,Memory Usage,Memory %" > container_stats.csv && docker stats --no-stream --format "{{.Name}},{{.CPUPerc}},{{.MemUsage}},{{.MemPerc}}" >> container_stats.csv