When your Linux server starts acting up at 3 AM, you don't need a philosophy lesson—you need answers. Fast. That's where journalctl last
comes in, the command-line equivalent of having a time machine for your system's events.
If you've been piecing together log information like some digital detective with a cork board and string, it's time to upgrade your toolkit. Let's cut through the noise and get you the intel you need, when you need it.
What is journalctl last?
journalctl last isn't a standalone command. It's a way of using journalctl with time-based filters to view recent log entries.
It's especially useful when you need to check what happened just before an issue occurred. Think of journalctl as a record of system events, and this approach as a way to quickly look at the moments leading up to a crash or error.
Here's the basic syntax that most sysadmins mean when they talk about "journalctl last":
journalctl --since="1 hour ago"
This command pulls all system journal entries from the last hour—perfect when you're troubleshooting a fresh issue.
8 Time-Saving journalctl last Commands You Should Know
When things are breaking and your boss is breathing down your neck, these commands will make you look like the Linux whisperer:
1. See Only the Most Recent Boot
journalctl -b -0
This shows logs from your current boot only. Your system could have been up for months, but you'll only see what's happened since the last restart.
2. View Logs from a Specific Time Window
journalctl --since="2023-10-15 14:30:00" --until="2023-10-15 15:00:00"
Perfect for when a user says, "Everything broke around 2:45 PM." Now you can see exactly what happened in that timeframe.
3. Check What Happened in the Last Minutes Before a Crash
journalctl --since="10 minutes ago" | grep -i error
This combo shows you recent errors right before everything went sideways.
4. Get the Last 100 Lines of Journal Entries
journalctl -n 100
This is the true "journalctl last 100 lines" command that many admins search for. It displays exactly 100 of the most recent log entries, regardless of time. Perfect for a quick overview of what's been happening on your system. You can adjust the number to show more or fewer lines:
journalctl -n 50 # Last 50 lines
journalctl -n 200 # Last 200 lines
You can also combine it with other filters:
journalctl -u nginx.service -n 100 # Last 100 lines from nginx only
journalctl -p err -n 100 # Last 100 error messages
This approach is much faster than time-based filtering when you just need a quick glimpse of recent activity.
5. Monitor Live Errors as They Happen
journalctl -f -p err
The -f flag follows the log like tail -f, but only shows errors and above. It's like having your ear to the ground as issues develop.
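You can also combine -f with -n to print a chunk of recent history first and then keep streaming, which gives you context plus live updates:
journalctl -n 50 -f              # Show the last 50 entries, then follow new ones
journalctl -u nginx.service -f   # Follow a single service live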
6. Find Out What a Specific Service Has Been Up To
journalctl -u nginx.service --since today
Replace nginx.service with whatever's giving you trouble. This shows you everything that service has logged since midnight.
7. See Who's Been Logging In Recently
journalctl _COMM=sshd --since="24 hours ago"
Great for checking unusual login activity when you suspect something fishy.
8. See Which Services Started or Stopped Recently
journalctl --since="1 hour ago" -g "starting|stopping|Started|Stopped"
This shows service starts and stops in the last hour, perfect for seeing what changed during a recent deployment or update.
If your app is throwing java.lang.OutOfMemoryError, our guide on Java OutOfMemoryError covers why it happens and how to resolve it.
journalctl last Troubleshooting Scenarios
Let's step away from the theory and see how journalctl last commands save your bacon in real situations.
Scenario 1: The Mysterious Service Crash
Your monitoring system alerts you that a critical service died 5 minutes ago. Here's your play-by-play response:
See if anything else failed at the same time:
journalctl --since="15 minutes ago" -p err
Look for resource issues around the same time:
journalctl --since="15 minutes ago" | grep -E 'memory|cpu|disk|full'
Check what happened right before the crash:
journalctl -u critical-service.service --since="15 minutes ago"
This three-command combo often reveals the culprit in under a minute.
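If you find yourself typing these three commands during every incident, you can wrap them in a small triage script. This is a minimal sketch; the script name and the unit you pass in are placeholders for whatever you actually monitor:
#!/bin/bash
# crash_triage.sh -- hypothetical first-pass triage for a failed unit
# Usage: ./crash_triage.sh critical-service.service
UNIT="${1:?usage: $0 <unit-name>}"

echo "== Errors across the system (last 15 minutes) =="
journalctl --since="15 minutes ago" -p err --no-pager

echo "== Possible resource pressure =="
journalctl --since="15 minutes ago" --no-pager | grep -Ei 'memory|cpu|disk|full'

echo "== What the unit itself logged =="
journalctl -u "$UNIT" --since="15 minutes ago" --no-pager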
Scenario 2: The "It Was Working Yesterday" Problem
A developer swears their app was fine yesterday, but now it's throwing 503 errors. Time to find out what changed:
Check for disk space issues:
journalctl --since yesterday | grep -i "space\|disk\|full\|quota"
Look for service restarts:
journalctl --since yesterday -g "restart"
Check for overnight system changes:
journalctl --since yesterday --until today | grep -i "upgrade\|install\|removed"
Scenario 3: The Security Incident Investigation
Security team says there might have been an intrusion attempt. You need answers ASAP:
Check for unusual service starts:
journalctl --since="24 hours ago" -g "started" | grep -v "regular\|scheduled\|expected"
Look for privilege escalation:
journalctl --since="24 hours ago" | grep -i "sudo\|su\|permission"
Check for suspicious logins:
journalctl _COMM=sshd --since="24 hours ago" | grep -i "fail\|invalid\|error"
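To see where failed attempts are coming from, a quick summary by source address helps. This is a small sketch that assumes OpenSSH's usual "Failed password ... from <IP> ..." message format:
# Count failed SSH password attempts per source IP over the last 24 hours
journalctl _COMM=sshd --since="24 hours ago" --no-pager \
  | grep -i "failed password" \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' \
  | sort | uniq -c | sort -rn | head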
5 Journalctl Last Output Format Tricks That Make Logs Readable
The default journalctl output is... functional, but not exactly winning beauty contests. These formatting tricks make the output more useful:
1. Get Colorful Output
journalctl --since="1 hour ago" -o json-pretty
This formats the output as color-coded JSON, making it easier to spot fields that matter.
2. Just the Message, Please
journalctl --since="1 hour ago" -o cat
Strips away timestamps and metadata, giving you just the log messages themselves.
3. See Exact Timestamps
journalctl --since="1 hour ago" --output=short-precise
Shows microsecond-level timestamps, perfect for sequence-of-events analysis.
4. Export Logs for Analysis
journalctl --since="1 day ago" -u web-app.service -o json > webapp-logs.json
Dumps logs to a file you can analyze with other tools or send to the development team.
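Once the JSON is on disk, quick ad-hoc analysis with jq is often enough. A small sketch, assuming jq is installed and using the webapp-logs.json file from above:
# Most frequent messages in the export
jq -r '.MESSAGE' webapp-logs.json | sort | uniq -c | sort -rn | head -20
# Priority and message for each entry, tab-separated
jq -r '"\(.PRIORITY // "-")\t\(.MESSAGE)"' webapp-logs.json | head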
5. Multi-Line Magic
journalctl --since="1 hour ago" -o verbose
Shows all fields for each log entry in a multi-line format, revealing hidden metadata.
How to Filter journalctl last Results
Getting all the logs from the last hour is a good start, but the real skill is filtering out the noise. Here's how to zero in on what matters:
| Filter Type | Example Command | What It Does |
|---|---|---|
| By Service | journalctl -u apache2 --since="30 minutes ago" | Only Apache logs from the last 30 minutes |
| By Priority | journalctl -p err --since today | Only error-level and above from today |
| By User | journalctl _UID=1000 --since yesterday | Only logs from UID 1000 since yesterday |
| By Process ID | journalctl _PID=1234 --since="1 hour ago" | Logs from PID 1234 in the last hour |
| Kernel Logs | journalctl -k --since="20 minutes ago" | Only kernel logs from the last 20 minutes |
| By Host | journalctl -D /var/log/journal/remote/ | Logs from remote hosts |
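These filters also stack. Flags and field matches can be combined in a single query when one dimension isn't enough:
# Error-level Apache messages from the last 30 minutes
journalctl -u apache2 -p err --since="30 minutes ago"
# Everything a specific user's processes logged today, newest entries first
journalctl _UID=1000 --since today -r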
How to Create journalctl last Aliases for Faster Log Analysis
Stop typing out complex journalctl commands. Add these to your .bashrc or .zshrc and thank me later:
# Show errors from the last hour
alias jcerr='journalctl -p err --since="1 hour ago"'
# Show logs from current boot only
alias jcboot='journalctl -b -0'
# Show recent logs for a service (usage: jcserv nginx)
jcserv() { journalctl -u "$1.service" --since="30 minutes ago"; }
# Show all logs from the last 10 minutes
alias jclast='journalctl --since="10 minutes ago"'
# Follow new errors live
alias jctail='journalctl -f -p err'
With these aliases, you can go from alert to investigation in seconds.
journalctl last Performance Considerations
When your journal grows to gigabytes, some queries get slow. These tips keep things snappy:
1. Vacuum Old Journals Regularly
journalctl --vacuum-time=2weeks
This removes journals older than two weeks, keeping your queries fast.
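If you'd rather cap the journal by size instead of cleaning it up by hand, you can set limits in /etc/systemd/journald.conf. A sketch with example values; pick numbers that fit your disk budget:
# /etc/systemd/journald.conf (excerpt)
[Journal]
# Cap the total disk space persistent journals may use
SystemMaxUse=1G
# Always leave at least this much space free on the filesystem
SystemKeepFree=2G
# Drop entries older than two weeks regardless of size
MaxRetentionSec=2week
Restart journald (sudo systemctl restart systemd-journald) for the new limits to take effect.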
2. Use the Journal Directory Parameter
Instead of scanning everything, point directly to where the relevant logs are:
journalctl -D /var/log/journal/specific-machine/
3. Limit Output Size
journalctl --since="1 day ago" -n 1000
This shows only the 1000 most recent matching entries, preventing terminal overload.
4. Verify Journal Integrity
journalctl --verify --file=/var/log/journal/*/system.journal
This checks your journal files for corruption. It won't rebuild anything, but it flags damaged files that can make queries unreliable, so you know when it's time to rotate them out.
Integrating Journalctl Last with Monitoring Tools
When an issue arises, manually checking logs can slow down your response time. Instead, your monitoring system can trigger journalctl commands automatically to capture relevant log entries.
This helps correlate logs with alerts, making troubleshooting more efficient. Below are examples of how to integrate journalctl last with monitoring tools like Last9, Prometheus, and Nagios/Icinga.
For Last9:
For Last9, you can use a script that gathers relevant logs when an alert is triggered.
Script: last9_journalctl_context.sh
#!/bin/bash
# Filename: last9_journalctl_context.sh
# Usage: ./last9_journalctl_context.sh <service-name> <time-window-in-minutes>
SERVICE_NAME="${1:?usage: $0 <service-name> <minutes>}"
TIME_WINDOW="${2:?usage: $0 <service-name> <minutes>}"
LOG_PATH="/var/log/last9/context-logs"
mkdir -p "$LOG_PATH"
# Get relevant logs (warning and above) for the service that triggered the alert
journalctl -u "$SERVICE_NAME" --since="$TIME_WINDOW minutes ago" -p warning > "$LOG_PATH/${SERVICE_NAME}_context.log"
# Add system-wide errors that might be related
journalctl -p err --since="$TIME_WINDOW minutes ago" >> "$LOG_PATH/${SERVICE_NAME}_context.log"
# Tag the log file for Last9 to collect with other metrics
touch "$LOG_PATH/${SERVICE_NAME}_context.log.last9"
echo "Contextual logs collected for Last9 at $LOG_PATH/${SERVICE_NAME}_context.log"
How This Script Works
- Accepts Two Inputs: The script takes in SERVICE_NAME (the service being monitored) and TIME_WINDOW (how far back to look for logs).
- Creates a Log Directory: It ensures the /var/log/last9/context-logs/ directory exists for storing logs.
- Captures Service-Specific Logs: Uses journalctl -u $SERVICE_NAME to fetch logs related to the affected service within the specified time window, filtering them by warning level and above.
- Includes System-Wide Errors: Any critical system-wide errors (-p err) from the same time frame are added to the log for additional context.
- Prepares Logs for Last9 Collection: A .last9 tag is added to the log file to signal Last9's agent to collect and correlate it with other metrics.
This ensures that when an alert is raised, relevant logs are automatically gathered and stored, making them available for analysis within Last9's observability platform.
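To put the script to work, make it executable and have your alert hook call it with the affected service and a lookback window (the values below are just examples):
chmod +x last9_journalctl_context.sh
# Collect the last 15 minutes of context for nginx when an alert fires
./last9_journalctl_context.sh nginx 15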
For Prometheus Alerting:
For Prometheus, you can modify your alert response script to capture recent logs whenever a specific alert is triggered.
Script Snippet for Prometheus Alerts
# Add to your alert response script
if [ "$ALERT_NAME" = "ServiceDown" ]; then
    journalctl -u "$SERVICE_NAME" --since="10 minutes ago" | \
        grep -i error > /tmp/alert-context.log
# Send the context log with your alert
fi
How This Works
- Triggers on "ServiceDown" Alerts: When Prometheus detects that a service is down, the script is executed.
- Fetches Recent Logs: Uses journalctl -u $SERVICE_NAME --since="10 minutes ago" to get logs from the last 10 minutes.
- Filters for Errors: The grep -i error command ensures only error messages are included in the log file.
- Stores Logs for Alerting: The output is saved in /tmp/alert-context.log, which can be attached to the alert notification for better visibility.
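In practice, ALERT_NAME and SERVICE_NAME have to come from the alert itself. If you route alerts through Alertmanager's webhook receiver, a minimal sketch for pulling them out of the JSON payload with jq could look like this; the service label name is an assumption, so substitute whatever label your alerts actually carry:
#!/bin/bash
# Hypothetical webhook handler: Alertmanager POSTs its JSON payload, read here from stdin
PAYLOAD=$(cat)
ALERT_NAME=$(echo "$PAYLOAD" | jq -r '.alerts[0].labels.alertname')
SERVICE_NAME=$(echo "$PAYLOAD" | jq -r '.alerts[0].labels.service')   # assumed label name

if [ "$ALERT_NAME" = "ServiceDown" ]; then
    journalctl -u "$SERVICE_NAME" --since="10 minutes ago" | \
        grep -i error > /tmp/alert-context.log
fi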
For Nagios/Icinga Check Scripts:
For Nagios or Icinga, you can create a check script that monitors if a service has encountered errors in the last few minutes.
Script: check_service_errors.sh
#!/bin/bash
# Check if a service had recent errors
# Usage: ./check_service_errors.sh <unit-name>
SERVICE="${1:?usage: $0 <unit-name>}"
# -q suppresses journalctl's informational output (like the "-- No entries --" marker),
# so an empty result really counts as zero lines
ERRORS=$(journalctl -u "$SERVICE" -p err --since="5 minutes ago" -q | wc -l)
if [ "$ERRORS" -gt 0 ]; then
    echo "CRITICAL: $ERRORS recent errors found"
    exit 2
else
    echo "OK: No recent errors"
    exit 0
fi
How This Works
- Takes the Service Name as Input: The script expects a service name ($1) to check for errors.
- Counts Recent Errors: It uses journalctl -u $1 -p err --since="5 minutes ago" to count how many errors have occurred in the last 5 minutes.
- Returns Status to Nagios/Icinga:
  - If errors are found (> 0), it returns CRITICAL with exit code 2, triggering an alert.
  - If no errors are found, it returns OK with exit code 0.
This allows Nagios/Icinga to continuously monitor service health and trigger alerts if recent errors are detected.
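To wire the check into Nagios- or Icinga-style configs, register the script as a command and point a service definition at it. A minimal sketch; the plugin path, host name, and template are placeholders for your own setup:
# commands.cfg
define command {
    command_name    check_service_errors
    command_line    /usr/local/lib/nagios/plugins/check_service_errors.sh $ARG1$
}

# services.cfg
define service {
    use                     generic-service
    host_name               web01
    service_description     Recent journal errors: nginx
    check_command           check_service_errors!nginx.service
}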
Conclusion
journalctl last techniques aren't just commands; they're your secret weapon for:
- Finding the root cause faster than your colleagues
- Providing evidence when the blame game starts
- Building a timeline during post-mortems
- Spotting patterns before they become outages
FAQs
How far back does journalctl keep logs by default?
It varies by distribution, but most systems retain logs based on size rather than time: typically around 10% of available disk space in /var/log/journal/.
Can I use journalctl last to check logs from previous boots?
Yes! Use journalctl -b -1 for the previous boot, -2 for the one before that, and so on.
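To see which boots the journal still has on record, along with their boot IDs and time ranges, list them first:
journalctl --list-boots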
Is there a way to save journalctl output for later analysis?
Absolutely. Use journalctl --since yesterday > yesterday-logs.txt to save to a text file, or use the --output=json option for structured data.
Why aren't my journalctl last commands showing any results?
Check if journald is actually storing persistent logs with ls -la /var/log/journal/. Some systems need persistent logging explicitly enabled in /etc/systemd/journald.conf.
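If /var/log/journal is missing, a minimal way to turn on persistent storage (assuming the default Storage=auto setting) looks like this:
# Create the persistent journal directory; with Storage=auto, journald will use it
sudo mkdir -p /var/log/journal
# Or set Storage=persistent explicitly in /etc/systemd/journald.conf, then restart journald
sudo systemctl restart systemd-journald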
How can I see logs from a specific container or pod in Kubernetes?
Use the container ID: journalctl CONTAINER_ID=3bfa9290f75f --since="10 minutes ago".
Does journalctl last work with custom log files?
No. journalctl only reads from the systemd journal. For custom logs, you'll need to use traditional tools like grep, tail, and awk.
Can I filter journalctl output by multiple services at once?
Yes! Use multiple -u flags: journalctl -u nginx.service -u php-fpm.service --since today.
Does using journalctl last put a heavy load on my system?
For most queries, the impact is minimal. However, very broad date ranges on large journals can cause significant disk I/O.