If your RabbitMQ queues keep growing and you have no idea why, or if messages aren’t getting picked up like they should, logs can save you a lot of guesswork. They’re basically a detailed record of what’s happening behind the scenes.
This guide breaks down where to find RabbitMQ logs, how to set them up, and what to look for when things start acting up. Consider it your go-to cheat sheet for keeping RabbitMQ running smoothly.
Where to Find RabbitMQ Logs
You can't fix what you can't see. Here's where RabbitMQ keeps its secrets:
Default Log Locations
RabbitMQ writes logs to different places depending on your installation method and OS:
For Debian/Ubuntu systems:
/var/log/rabbitmq/rabbit@hostname.log
For RPM-based systems (RHEL, CentOS, Fedora):
/var/log/rabbitmq/rabbit@hostname.log
For Windows:
%APPDATA%\RabbitMQ\log\rabbit@hostname.log
For Docker containers:
stdout/stderr (unless you've configured volume mounts)
Not seeing logs where you expect? Run rabbitmqctl status to check your current log location—the broker will tell you exactly where it's keeping notes.
Log File Naming Convention
RabbitMQ log files follow a specific naming pattern:
- The main log file is named rabbit@[hostname].log
- Rotated logs get a timestamp suffix: rabbit@[hostname].log.2023-03-15
- The upgrade log is separate: rabbit@[hostname]_upgrade.log
- For nodes in a cluster, each node has its own log named after its hostname
Accessing Logs via Management UI
If you've got the RabbitMQ management plugin enabled (which you absolutely should), you can also access logs directly from the web UI:
- Navigate to Admin > Logs tab
- Set your desired log level filter
- Click "Download" to get the full log file
This is super handy when you don't have direct server access but need to check what's happening.
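If you have shell access but no browser, recent RabbitMQ releases also ship diagnostics commands for reading logs straight from the terminal; a quick sketch (availability depends on your version):
# Print the most recent lines of this node's log
rabbitmq-diagnostics log_tail
# Follow new log entries as they are written (Ctrl+C to stop)
rabbitmq-diagnostics log_tail_stream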
Log Types and Their Messages
RabbitMQ isn't just keeping one log—it's actually tracking several aspects of its operation:
Connection Logs
These tell you who's connecting to your broker, from where, and when they disconnect:
2023-03-15 10:24:12.155 [info] <0.614.0> accepting AMQP connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672)
2023-03-15 10:25:32.611 [info] <0.614.0> connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672): user 'admin' authenticated and granted access to vhost '/'
What to watch for: Sudden connection drops, repeated connection attempts, or authentication failures may indicate client issues or network problems.
Queue Logs
These show queue creation, deletion, and consumer activity:
2023-03-15 11:05:23.155 [info] <0.723.0> Queue 'order_processing' declared by connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672, vhost: '/', user: 'admin')
2023-03-15 11:06:12.421 [info] <0.723.0> Queue 'order_processing' in vhost '/' has 1 consumers, 0 messages ready, 0 messages unacknowledged
2023-03-15 11:10:45.879 [warning] <0.723.0> Queue 'order_processing' in vhost '/': message TTL expired for 5 messages
What to watch for: Messages about TTL expiration, queues with zero consumers, or rapidly increasing message counts are red flags.
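You don't have to eyeball the log for these red flags; a rough cross-check with rabbitmqctl lists queues that have a backlog but nobody consuming (the awk column positions assume the default output order of the fields requested):
# Queues with messages waiting (column 2) but zero consumers (column 3)
rabbitmqctl -q list_queues name messages consumers | awk '$3 == 0 && $2 > 0'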
Exchange Logs
Track exchange creation and binding changes:
2023-03-15 11:06:01.421 [info] <0.723.0> Exchange 'order_events' declared by connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672, vhost: '/', user: 'admin')
2023-03-15 11:07:22.155 [info] <0.723.0> Binding 'order_events'->order_processing created by connection <0.614.0>
2023-03-15 11:08:15.982 [info] <0.723.0> Exchange 'order_events' published message to queue 'order_processing': routed
What to watch for: "Message was not routed" logs often indicate misconfigured bindings or routing keys.
Error Logs
The most valuable logs when things go wrong:
2023-03-15 11:35:45.879 [error] <0.890.0> Channel error on connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672, vhost: '/', user: 'admin'): {amqp_error,not_found,"no queue 'missing_queue' in vhost '/'",none}
2023-03-15 12:15:33.421 [error] <0.956.0> Error on AMQP connection <0.614.0>: {socket_error,etimedout}
2023-03-15 13:22:45.156 [error] <0.1024.0> Supervisor {<0.1024.0>,rabbit_connection_sup} had child connection <0.1025.0> exit with reason {handshake_timeout,handshake} in context start_error
What to watch for: These are high-priority logs that typically require immediate action. Pay special attention to socket errors, supervisor failures, and channel errors.
Shovel and Federation Logs
If you're using RabbitMQ's Shovel or Federation plugins for inter-broker messaging:
2023-03-15 14:05:12.155 [info] <0.1156.0> Shovel 'order_replication' connected to both source and destination
2023-03-15 14:35:45.879 [error] <0.1156.0> Shovel 'order_replication' failed to connect to destination: {auth_failure,"Cannot authenticate user 'shovel_user'"}
What to watch for: Connection failures between brokers, authentication issues, or stalled transfers.
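Alongside the logs, both plugins expose status commands that show whether links are actually running; a quick sketch (these commands come from the Shovel and Federation plugins and require them to be enabled):
# State of all shovels on this node
rabbitmqctl shovel_status
# State of all federation links on this node
rabbitmqctl federation_status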
How to Configure Log Levels in RabbitMQ
Not all logs are created equal. Sometimes you need more details, sometimes less. Here's how to adjust what RabbitMQ tells you:
Available Log Levels
From most to least verbose:
- debug: Everything including detailed connection handling, queue operations, and internal processes (can generate HUGE log files)
- info: Normal operations like connections, queue declarations, and basic broker activities
- warning: Potential issues that haven't caused failures yet
- error: Actual failures and exceptions that need attention
- none: Turns off logging completely (not recommended except for specific categories)
3 Different Configuration Methods
Method 1: rabbitmq.conf file
# Global log level
log.file.level = info
# Category-specific levels
log.file.level.connection = warning
log.file.level.channel = warning
log.file.level.queue = info
log.file.level.mirroring = debug
# Console output settings (useful for containers)
log.console = true
log.console.level = warning
Method 2: Environment variables
# For Linux/macOS
export RABBITMQ_LOG_BASE=/path/to/logs
export RABBITMQ_LOGS=rabbit.log
export RABBITMQ_LOG=info
export RABBITMQ_LOG_CONNECTION=warning
# For Windows
set RABBITMQ_LOG_BASE=C:\path\to\logs
set RABBITMQ_LOGS=rabbit.log
set RABBITMQ_LOG=info
set RABBITMQ_LOG_CONNECTION=warning
Method 3: Runtime using rabbitmqctl
# Set the global log level at runtime (not persisted across restarts)
rabbitmqctl set_log_level debug
# Drop back down once you're done investigating
rabbitmqctl set_log_level info
set_log_level applies to all sinks at once; for per-category levels, use the rabbitmq.conf settings from Method 1.
Log Categories for Fine-Tuning
RabbitMQ supports granular logging control for specific components:
- connection: connection lifecycle events
- channel: channel operations
- queue: queue operations and state changes
- mirroring: queue mirroring activities in clusters
- federation: Federation plugin events
- upgrade: upgrade and migration processes
- shovel: Shovel plugin operations
A good strategy is to use warning as your default, then selectively enable debug for specific components you're investigating.
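In rabbitmq.conf, that strategy looks something like this (the queue category is just an example of the component under investigation):
# Quiet default for day-to-day operation
log.file.level = warning
# Turn up detail only for the component you're debugging
log.file.level.queue = debug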
Troubleshooting Common Issues Using Logs
Now for the good stuff—using logs to fix real problems:
Connection Refused Issues
Log pattern to watch for:
2023-03-15 14:25:12.155 [error] <0.614.0> Error on AMQP connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672, state: starting): {socket_error,econnrefused}
What it means: Your client can't reach the RabbitMQ server. Check network connectivity, firewall rules, and that RabbitMQ is actually running.
How to fix it:
- Verify RabbitMQ is running: rabbitmqctl status
- Check listening ports: sudo netstat -tulpn | grep 5672
- Test network connectivity: telnet rabbitmq-server 5672
- Review firewall rules: sudo iptables -L | grep 5672
- Check binding settings in rabbitmq.conf to ensure it's listening on the correct interfaces (see the snippet below)
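For that last check, the listener settings in rabbitmq.conf control which interfaces and ports the broker accepts AMQP connections on; a minimal sketch:
# Accept AMQP connections on all interfaces, default port
listeners.tcp.default = 5672
# Or restrict the broker to a single interface instead
# listeners.tcp.local = 192.168.1.100:5672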
Authentication Failures
Log pattern:
2023-03-15 15:10:23.421 [error] <0.723.0> HTTP access denied: user 'guest' - invalid credentials
2023-03-15 15:11:45.879 [error] <0.745.0> AMQP connection <0.745.0> (192.168.1.42:57890 -> 192.168.1.100:5672, vhost: 'production', user: 'app_user'): user 'app_user' can't access vhost 'production'
What it means: Wrong username/password or permissions issue. Double-check your client configuration against what's in RabbitMQ's user database.
How to fix it:
- List current users: rabbitmqctl list_users
- Check permissions: rabbitmqctl list_permissions -p /vhost_name
- Add permissions if needed: rabbitmqctl set_permissions -p /vhost_name username ".*" ".*" ".*"
- For the default guest user, remember it can only connect from localhost unless you change the config (see the sketch below)
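A minimal sketch covering the two most common fixes, creating the application user with permissions on its vhost and letting guest connect from other hosts (the user, password, and vhost names are placeholders; opening up guest is generally discouraged outside test environments):
# Create the user and grant configure/write/read permissions on its vhost
rabbitmqctl add_user app_user 's3cret'
rabbitmqctl set_permissions -p production app_user ".*" ".*" ".*"
# In rabbitmq.conf: allow the guest user to connect from non-localhost addresses
loopback_users.guest = false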
High Memory Watermark Reached
Log pattern:
2023-03-15 16:45:33.976 [warning] <0.123.0> Memory resource limit alarm set on node rabbit@hostname. Memory used: 3.8 GB. Memory limit: 4.0 GB.
2023-03-15 16:45:34.123 [warning] <0.123.0> Publishers will be blocked until this alarm clears
What it means: RabbitMQ is almost out of memory. It will start blocking publishers. Time to check for message backlogs or increase RAM.
How to fix it:
- Identify problematic queues: rabbitmqctl list_queues name messages consumers memory
- Look for queues with lots of messages and no consumers
- Add consumers to process the backlog
- Consider increasing the memory limit temporarily: rabbitmqctl set_vm_memory_high_watermark 0.7
- For a long-term fix, implement proper queue TTLs and dead-letter exchanges (see the policy sketch below)
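Policies keep those limits on the broker side; a sketch that applies a one-hour TTL and a dead-letter exchange to a family of queues (the policy name, queue pattern, and DLX name are examples, adjust to your own naming):
# 1-hour message TTL plus a dead-letter exchange for all order_* queues
rabbitmqctl set_policy order-limits "^order_.*" '{"message-ttl": 3600000, "dead-letter-exchange": "dlx.orders"}' --apply-to queues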
Channel Limit Exceeded
Log pattern:
2023-03-15 17:25:12.155 [warning] <0.1025.0> Connection <0.1025.0> (192.168.1.42:60123 -> 192.168.1.100:5672, vhost: '/', user: 'admin'): channel_max limit (1000) reached, closing connection
What it means: A client opened too many channels on a single connection. This often happens with poorly configured connection pooling.
How to fix it:
- Increase channel limit in config if appropriate: channel_max = 2000
- Check client code for channel leaks (channels opened but never closed)
- Use connection pooling properly - most clients should reuse channels instead of creating new ones (see the sketch below)
- Monitor channel count: rabbitmqctl list_channels pid connection name number
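A minimal pika sketch of the reuse pattern, assuming a local broker and an existing queue named task_queue: open the connection and channel once, then publish on that same channel instead of opening a new one per message:
import pika
# Open the connection and a single channel once, at startup
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
for i in range(1000):
    # Reuse the same channel for every publish
    channel.basic_publish(exchange="", routing_key="task_queue", body=f"job {i}")
connection.close()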
Queue Declaration Errors
Log pattern:
2023-03-15 18:15:33.421 [error] <0.956.0> Channel error on connection <0.614.0> (192.168.1.42:56872 -> 192.168.1.100:5672, vhost: '/', user: 'admin'): {precondition_failed,"inequivalent arg 'x-max-length' for queue 'work_queue' in vhost '/': received '1000', current is '500'"}
What it means: Trying to redeclare a queue with different properties than it was originally created with.
How to fix it:
- Make queue declarations consistent across all services
- Delete the queue if you need to change properties: rabbitmqctl delete_queue work_queue
- Consider using queue configuration policies instead of client-side declarations (see the example below)
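With a policy, the length limit lives on the broker and clients can declare the queue without arguments; a sketch using the queue name from the error above:
# Enforce max-length centrally so clients don't pass x-max-length at declare time
rabbitmqctl set_policy work-queue-limit "^work_queue$" '{"max-length": 1000}' --apply-to queues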
Log Rotation and Management
RabbitMQ logs can grow quickly. Here's how to keep them under control:
Built-in Log Rotation
RabbitMQ has built-in log rotation based on file size. Configure it like this in rabbitmq.conf:
# Size-based rotation: rotate at 10 MB, keep 5 rotated files
log.file.rotation.size = 10485760
log.file.rotation.count = 5
# Time-based rotation: $D0 rotates daily at midnight
log.file.rotation.date = $D0
You can combine both approaches—RabbitMQ will rotate logs when either condition is met.
External Log Rotation (logrotate)
For Linux systems, you can use logrotate for more advanced rotation strategies:
/var/log/rabbitmq/*.log {
weekly
rotate 4
compress
delaycompress
missingok
notifempty
sharedscripts
maxsize 100M
dateext
dateformat -%Y-%m-%d
postrotate
invoke-rc.d rabbitmq-server rotate-logs > /dev/null
endscript
}
Log Compression Strategies
For long-term storage, consider these approaches:
- Immediate compression: set compress and remove delaycompress in logrotate
- Archival compression: use a scheduled job to tar.gz older logs and move them to cold storage
- Log pruning: set up a cron job to automatically delete logs older than X days:
find /var/log/rabbitmq/ -name "*.log.*" -type f -mtime +30 -delete
Log Disk Space Monitoring
Set up monitoring for RabbitMQ log directories to avoid disk space issues:
#!/bin/bash
LOG_DIR="/var/log/rabbitmq"
THRESHOLD=90
USAGE=$(df $LOG_DIR | grep -v Filesystem | awk '{print $5}' | sed 's/%//')
if [ $USAGE -gt $THRESHOLD ]; then
echo "CRITICAL: RabbitMQ log directory is $USAGE% full" | mail -s "RabbitMQ Log Alert" admin@example.com
fi
Log Shipping and Centralization
Flying solo with logs on each server is so 2010. Here's how to get your RabbitMQ logs into your central logging system:
Filebeat Configuration
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/rabbitmq/*.log
fields:
service: rabbitmq
environment: production
component: messaging
team: platform
multiline:
pattern: '^\d{4}-\d{2}-\d{2}'
negate: true
match: after
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
output.elasticsearch:
hosts: ["elasticsearch:9200"]
index: "rabbitmq-%{+yyyy.MM.dd}"
pipeline: "rabbitmq-parsing"
Fluentd Configuration
<source>
@type tail
path /var/log/rabbitmq/*.log
pos_file /var/log/td-agent/rabbitmq.pos
tag rabbitmq
read_from_head true
<parse>
@type regexp
expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) \[(?<level>\w+)\] (?<pid><[^>]+>) (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S.%L
</parse>
</source>
<filter rabbitmq>
@type parser
key_name message
reserve_data true
remove_key_name_field true
<parse>
@type regexp
expression /connection (?<connection_id><[^>]+>) \((?<client_ip>[^:]+):(?<client_port>\d+) -> (?<server_ip>[^:]+):(?<server_port>\d+)/
</parse>
</filter>
<match rabbitmq>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix rabbitmq
flush_interval 5s
</match>
Vector Configuration
Vector is a lightweight, high-performance log collector:
[sources.rabbitmq_logs]
type = "file"
include = ["/var/log/rabbitmq/*.log"]
multiline.start_pattern = '^\d{4}-\d{2}-\d{2}'
ignore_older_secs = 86400
[transforms.parse_rabbitmq]
type = "regex_parser"
inputs = ["rabbitmq_logs"]
patterns = ['^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) \[(?P<level>\w+)\] (?P<pid><[^>]+>) (?P<message>.*)$']
Searching Logs from the Command Line
Even without a full pipeline, a couple of grep one-liners cover a lot of day-to-day troubleshooting:
Example: Tracking Queue Creation and Deletion
grep -E "Queue '.*' (declared|deleted)" /var/log/rabbitmq/rabbit@hostname.log
Keep track of queues popping in and out of existence—especially useful when hunting down dynamic queue leaks.
Example: Tracking High Memory Usage
grep "Memory resource limit alarm" /var/log/rabbitmq/rabbit@hostname.log
This shows when RabbitMQ hit memory limits—key for capacity planning.
How Can Custom Formatting and Structured Logging Improve Your Debugging?
If you want to make your logs more machine-readable, you can customize the format:
JSON Logging
In rabbitmq.conf:
log.file.formatter = json
This gives you structured logs like:
{"timestamp":"2023-03-15T17:22:45.123Z","level":"info","message":"Connection accepted","pid":"<0.684.0>","peer":"192.168.1.42:56872"}
Much easier to parse and ship to systems like Elasticsearch or Splunk.
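With one JSON object per line you can also filter straight from the shell; for example, using the field names from the sample entry above:
# Show only error-level entries from a JSON-formatted log
jq -c 'select(.level == "error")' /var/log/rabbitmq/rabbit@hostname.log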
Advanced JSON Configuration
You can further customize your JSON logging:
# Include additional fields in every log entry
log.file.formatter.json.field_names.time = timestamp
log.file.formatter.json.field_names.msg = message
# Add static fields
log.file.formatter.json.additional_fields.environment = production
log.file.formatter.json.additional_fields.service_name = rabbitmq
log.file.formatter.json.additional_fields.host = ${HOSTNAME}
Syslog Integration
For environments that use centralized syslog:
# Enable syslog output
log.syslog = true
log.syslog.level = warning
log.syslog.identity = rabbitmq
# RFC5424 structured data
log.syslog.structured_data = true
Colorized Console Logs
For local development or debugging:
# Enable colorized console output (useful for local debugging)
log.console = true
log.console.use_colors = true
Format Comparison Table
Format | Pros | Cons | Best for |
---|---|---|---|
Plain Text | Human readable, standard | Hard to parse automatically | Development, small deployments |
JSON | Machine parsable, structured | Less human readable | Production, ELK/Splunk integration |
Syslog | Works with existing syslog | Limited customization | Enterprise environments |
Colored | Visual distinction | Only for console, not files | Local debugging |
Log Format Migration Strategies
When switching log formats, consider these approaches:
- Parallel logging: Configure both formats simultaneously during transition
- Format converter: Use tools like jq to convert between formats as needed (see the one-liner below)
- Staged rollout: Change format on one node at a time in your cluster
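As a rough example of the converter approach, this one-liner flattens JSON entries back into the familiar plain-text layout (it assumes the timestamp, level, and message field names shown in the JSON example earlier):
# Convert JSON log lines to "timestamp [level] message" plain text
jq -r '"\(.timestamp) [\(.level)] \(.message)"' /var/log/rabbitmq/rabbit@hostname.log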
Log Integration Best Practices
To make the most of your RabbitMQ logs, follow these integration best practices:
Correlation with Application Logs
Use correlation IDs across your distributed system:
# Python example with Pika client
properties = pika.BasicProperties(
correlation_id=str(uuid.uuid4()),
app_id="order-service",
message_id=str(uuid.uuid4()),
timestamp=int(time.time())
)
channel.basic_publish(exchange=exchange, routing_key=routing_key, properties=properties, body=message)
These IDs travel with every message, so you can match the application logs on both sides of the broker (and the same properties show up in firehose traces), letting you follow a message end-to-end.
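On the consuming side, a minimal sketch (queue name and logging setup are placeholders) that pulls the same correlation_id back out so both services log the same identifier:
import logging
import pika
logging.basicConfig(level=logging.INFO)
def on_message(channel, method, properties, body):
    # Log the correlation_id the publisher set so both ends can be joined up later
    logging.info("correlation_id=%s app_id=%s", properties.correlation_id, properties.app_id)
    channel.basic_ack(delivery_tag=method.delivery_tag)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.basic_consume(queue="order_processing", on_message_callback=on_message)
channel.start_consuming()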
Log Aggregation Strategy
For comprehensive visibility:
- Unified dashboard: Create a Grafana dashboard that displays:
- RabbitMQ operational metrics (queue depths, publish rates)
- Log-derived metrics (connection errors, routing failures)
- Application-level metrics (processing times, error rates)
- Alert correlation: Set up alerts that combine multiple signals:
- High queue depth + low consumer count = processing bottleneck
- Connection spikes + increased error logs = client configuration issue
- Network partition logs + increased latency = infrastructure problem
- Log retention policy:
- Hot storage (7-14 days): All logs at info level
- Warm storage (30-90 days): Warnings and errors only
- Cold storage (1 year+): Error logs only
Conclusion
Remember: in the world of message brokers, good logging practices are the difference between a quick fix and an all-night debugging session. Set them up right now, before you need them.
FAQs
Q: How do I check if RabbitMQ is actually writing logs?
A: Run this command to see the last 10 log entries:
tail -n 10 $(rabbitmqctl status | grep Log | grep -oE '/[^}]*')
Q: Can I have different log levels for different RabbitMQ plugins?
A: Yes! Use category-specific configuration:
# In rabbitmq.conf
log.file.level.connection = warning
log.file.level.channel = warning
log.file.level.federation = debug
log.file.level.shovel = debug
Q: How much disk space should I allocate for RabbitMQ logs?
A: For a busy production broker, allocate at least 1GB per node for logs with a rotation strategy. With debug-level logging, this could easily grow to 10GB+ per day.
Q: How can I tell if my queues are being properly mirrored in a cluster?
A: Look for synchronization logs:
grep -i "synchronizing" /var/log/rabbitmq/rabbit@*.log
Q: My RabbitMQ server isn't starting. Where should I look first?
A: Check the startup logs:
# For systemd-based systems
journalctl -u rabbitmq-server.service -n 100
# Direct log file
cat /var/log/rabbitmq/startup_log
cat /var/log/rabbitmq/startup_err
Q: How can I see which clients are publishing the most messages?
A: Enable channel statistics and check the management UI, or use this command:
rabbitmqctl list_channels connection pid peer_host user messages_published
Q: Can I redirect specific types of logs to different files?
A: Not directly with RabbitMQ's built-in logging, but you can use syslog facility with different priorities and then configure syslog to route them appropriately.
Q: How do I completely disable console logging for RabbitMQ?
A: In rabbitmq.conf:
log.console = false
Q: Is there a way to trace a specific message through the RabbitMQ broker?
A: Enable the firehose tracer, which copies every published and delivered message to the amq.rabbitmq.trace exchange, then bind a queue to that exchange and consume from it (see the binding sketch below):
rabbitmqctl trace_on
rabbitmqctl set_user_tags your_user administrator monitoring
Note that this has performance implications and should only be used temporarily.
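A sketch of wiring up a trace queue with rabbitmqadmin (the tool that ships with the management plugin; the queue name is arbitrary):
# Capture everything the firehose publishes to amq.rabbitmq.trace
rabbitmqadmin declare queue name=firehose durable=false
rabbitmqadmin declare binding source=amq.rabbitmq.trace destination=firehose routing_key="publish.#"
rabbitmqadmin declare binding source=amq.rabbitmq.trace destination=firehose routing_key="deliver.#"
# Turn the firehose off again when you're done
rabbitmqctl trace_off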
Q: Can I send RabbitMQ logs directly to Slack for critical errors?
A: Use a tool like Logstash with the Slack output plugin:
output {
if [log_level] == "error" and [service] == "rabbitmq" {
slack {
url => "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
channel => "#rabbitmq-alerts"
format => "RabbitMQ Error on %{host}: %{message}"
}
}
}
Q: How can I correlate RabbitMQ logs with application logs in ELK?
A: To effectively correlate RabbitMQ logs with application logs in ELK, follow these steps:
1. Ensure Logs Have a Common Identifier
- Add correlation IDs to both your application logs and RabbitMQ logs.
- This helps in linking related log entries across different services.
2. Use Logstash or Vector for Parsing
- Extract relevant connection details from RabbitMQ logs using regex parsing:
field = "message"
[transforms.extract_connection_info]
type = "regex_parser"
inputs = ["parse_rabbitmq"]
patterns = ['connection (?P<connection_id><[^>]+>) ((?P<client_ip>[^:]+):(?P<client_port>\d+) -> (?P<server_ip>[^:]+):(?P<server_port>\d+)']
3. Store Logs in Elasticsearch
- Send parsed logs to an Elasticsearch index for easy querying:
[sinks.elasticsearch]
type = "elasticsearch"
inputs = ["extract_connection_info"]
endpoint = "http://elasticsearch:9200"
index = "rabbitmq-%F"
4. Visualize in Kibana
- Use Kibana’s search and visualization tools to filter logs based on the correlation ID field.
- Create dashboards to track RabbitMQ message flow and application events together.
This setup ensures a structured way to trace messages across your system while maintaining visibility in ELK.