
May 12th, ‘25 / 14 min read

Ubuntu Cron Logs: A Complete Guide for Engineers

A practical guide to Ubuntu cron logs—where to find them, how to read them, and how to set up logging that actually helps during failures.

Troubleshooting failed cron jobs without proper logging can be frustrating. Ubuntu cron logs record the execution of scheduled tasks, helping you identify what's working and what isn't.

This guide covers what engineers need to know about Ubuntu cron logs – from finding them to analyzing their contents and setting up effective monitoring solutions.

Where Ubuntu Stores Cron Execution Records

Let's start with the basics – finding your cron logs. Ubuntu doesn't make this immediately obvious, which is why many DevOps engineers spend precious time hunting them down.

By default, Ubuntu cron logs are written to:

  • /var/log/syslog – The main system log that contains cron entries mixed with other system messages
  • /var/log/cron.log – A dedicated cron log file (if enabled)

To check if you have a dedicated cron log file:

ls -l /var/log/cron.log

Don't see it? That's normal – many Ubuntu installations don't create this file by default. Your cron activities are likely being recorded in syslog instead.

💡
For more on how Ubuntu handles system-wide logging, our guide to /var/log/messages breaks down what lives in that file and how it's used.

How to Extract Cron-Specific Entries From System Logs

Since cron entries are typically mixed with other system messages, you'll need to filter for them. Here's how to quickly pull cron-related entries from syslog:

grep CRON /var/log/syslog

For a continuous view of incoming cron log entries:

tail -f /var/log/syslog | grep CRON

Want to see today's cron activity only?

grep CRON /var/log/syslog | grep "$(date '+%b %d')"
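If you want to test the filtering logic itself, here's a self-contained sketch that runs the same pipeline against a fabricated syslog excerpt (the log lines below are made up for illustration):

```shell
#!/usr/bin/env bash
# Fabricated syslog excerpt, for demonstration only
sample_log=$(mktemp)
cat > "$sample_log" <<'EOF'
Jan 15 12:00:01 server CRON[1234]: (root) CMD (/usr/local/bin/backup.sh)
Jan 15 12:05:22 server sshd[999]: Accepted publickey for deploy
Jan 16 12:00:01 server CRON[1300]: (root) CMD (/usr/local/bin/backup.sh)
EOF

# Same grep pipeline as above, with a fixed date instead of $(date '+%b %d')
matches=$(grep CRON "$sample_log" | grep "Jan 15")
echo "$matches"
rm -f "$sample_log"
```

On systemd-based Ubuntu releases, `journalctl -u cron --since today` gives a similar view without grepping syslog at all.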

How to Create a Separate Cron Log File

Having a dedicated cron log file makes life easier. Here's how to set it up:

Edit the rsyslog configuration:

sudo nano /etc/rsyslog.d/50-default.conf

Add or uncomment this line:

cron.*                          /var/log/cron.log

Restart the rsyslog service:

sudo systemctl restart rsyslog

Now your cron jobs will write to a dedicated log file, making them much easier to track.

Decoding the Structure of Cron Log Messages

Cron logs might look cryptic at first, but they follow a consistent pattern. Here's a breakdown of a typical entry:

Jan 15 12:00:01 server CRON[1234]: (username) CMD (/path/to/script.sh)

This includes:

  • Date and time (Jan 15 12:00:01)
  • Hostname (server)
  • Process name and ID (CRON[1234])
  • User who ran the job (username)
  • Command that was executed (/path/to/script.sh)

A successful job usually shows just the command execution, while a failed job often includes error messages.
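If you need those fields programmatically, a line in this format splits cleanly with awk. A sketch against the sample entry above:

```shell
#!/usr/bin/env bash
line='Jan 15 12:00:01 server CRON[1234]: (username) CMD (/path/to/script.sh)'

# Whitespace-separated fields: 1-3 timestamp, 4 hostname, 5 process[pid]:, 6 (user), 7+ command
timestamp=$(echo "$line" | awk '{print $1, $2, $3}')
host=$(echo "$line" | awk '{print $4}')
user=$(echo "$line" | awk '{gsub(/[()]/, "", $6); print $6}')

echo "$timestamp | $host | $user"
# Prints: Jan 15 12:00:01 | server | username
```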

💡
Now, fix cron job log issues instantly—right from your IDE, with AI and Last9 MCP. Bring real-time production context—logs, metrics, and traces—into your local environment to debug and resolve job failures faster.

Troubleshooting Guide: Interpreting Common Cron Error Messages

Let's decode some frequent error messages you'll spot in Ubuntu cron logs:

Mail Transfer Agent Missing: Resolving "No MTA installed" Errors

Jan 15 12:00:01 server CRON[1234]: (username) MAIL (mailed 1 byte of output but got status 0x004b...)

This error occurs because cron is designed to email the output of jobs by default. The job itself may have executed correctly, but cron couldn't deliver the output because no mail transfer agent (MTA) was available. You have two main options to resolve this:

Suppress the output: If you don't need email notifications, redirect the output to a file or discard it entirely:

0 12 * * * /path/to/script.sh > /dev/null 2>&1

This redirects both standard output (stdout) and error output (stderr) to /dev/null, effectively discarding all output and preventing cron from trying to email it.
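One subtlety worth knowing: the order of the redirections matters. `2>&1` must come after `> /dev/null`, otherwise stderr is duplicated onto the old stdout before stdout is redirected, and error output still escapes (and still gets mailed). A quick demonstration:

```shell
#!/usr/bin/env bash
# Correct order: stdout goes to /dev/null first, then stderr follows it
captured_ok=$( { echo "out"; echo "err" >&2; } > /dev/null 2>&1 )

# Wrong order: stderr is pointed at the *current* stdout before stdout
# is redirected, so the error message still escapes
captured_bad=$( { echo "err" >&2; } 2>&1 > /dev/null )

echo "ok='$captured_ok' bad='$captured_bad'"
# Prints: ok='' bad='err'
```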

Install an MTA: If you want to receive email notifications, install a mail transfer agent like Postfix:

sudo apt update
sudo apt install postfix

During installation, select "Internet Site" for a full mail server or "Satellite system" if you want to relay through another mail server.

The first option is often preferred in automated environments where you're already capturing output through other logging mechanisms.

💡
If you're also tracking overall system health, our Ubuntu performance monitoring guide covers tools and techniques to monitor CPU, memory, disk, and more.

Access Control Issues: Fixing "Permission denied" Errors

Jan 15 12:00:01 server CRON[1234]: (username) CMD (/path/to/script.sh)
Jan 15 12:00:01 server CRON[1234]: (CRON) error (Permission denied)

This error indicates that the cron daemon doesn't have sufficient permissions to execute your script. This typically happens because the script file lacks the execute bit, but it can also occur if the cron user doesn't have permission to access directories in the path.

To diagnose and fix permission issues:

Inspect the script for internal permission issues: The script itself might be trying to access resources without proper permissions. Try running it manually as the same user to identify these issues:

sudo -u username /path/to/script.sh

Check script ownership: If the script is running as a specific user, ensure that the user owns the file or has appropriate permissions:

chown username:username /path/to/script.sh

Verify directory permissions: Make sure all directories in the path are readable and executable by the user running the cron job:

chmod +rx /path /path/to
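To inspect every directory level at once, `namei -l /path/to/script.sh` (from util-linux) prints the owner and mode of each path component. A rough, portable sketch of the same idea:

```shell
#!/usr/bin/env bash
# Print permissions for each component of a path, similar to `namei -l`
walk_path() {
    local dir="$1" components=""
    while [ "$dir" != "/" ] && [ -n "$dir" ]; do
        components="$dir $components"   # prepend so output goes root-first
        dir=$(dirname "$dir")
    done
    for c in $components; do
        ls -ld "$c"
    done
}
walk_path /usr/bin
```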

Check script execution permissions: Ensure your script has the execute permission bit set:

chmod +x /path/to/script.sh

Permission issues are among the most common problems with cron jobs, especially in environments with strict security policies.

Environment Configuration: Solving "Command not found" Problems

Jan 15 12:00:01 server CRON[1234]: (username) CMD (python3 /path/to/script.py)
Jan 15 12:00:01 server CRON[1234]: (CRON) error (sh: 1: python3: not found)

This error occurs because cron runs with a minimal environment, including a very limited PATH variable. Commands that work perfectly when run from your shell might fail when run from cron because the system can't find them.

Here's how to resolve "command not found" errors:

Use the user's profile: Load the user's environment by sourcing their profile:

0 12 * * * . $HOME/.profile; python3 /path/to/script.py

Create a wrapper script: For complex environment setups, create a wrapper shell script that sets up the environment properly:

#!/bin/bash
# set-env-and-run.sh
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PYTHONPATH=/path/to/python/modules
# Other environment variables as needed

/usr/bin/python3 /path/to/script.py

Then call this wrapper from cron:

0 12 * * * /path/to/set-env-and-run.sh

Define PATH in the crontab: You can set an expanded PATH at the top of your crontab:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 12 * * * python3 /path/to/script.py

Use absolute paths for all commands: The most reliable solution is to specify the full path to every command in your crontab:

0 12 * * * /usr/bin/python3 /path/to/script.py

To find the absolute path of a command, use which:

which python3  # Outputs something like /usr/bin/python3
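In scripts, the POSIX `command -v` builtin is a slightly more portable way to resolve a command's location than `which`:

```shell
#!/usr/bin/env bash
# Resolve the interpreter path once, then fail fast if it's missing
PYTHON_BIN=$(command -v python3) || { echo "python3 not found" >&2; exit 1; }
echo "$PYTHON_BIN"
```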

Understanding how cron's environment differs from your normal shell environment is crucial for reliable job execution.

💡
To understand where Ubuntu logs crashes and how to read them, our Ubuntu crash logs guide offers a clear breakdown of the key files and troubleshooting steps.

Output Capture Strategies: Recording Script Results in Log Files

By default, cron only logs that a job ran – not what it did. It captures any output and attempts to email it to the crontab's owner. Capturing that output yourself is essential for troubleshooting and monitoring, and there are a few strategies:

Basic output redirection: Send all output (both standard output and standard error) to a log file:

0 12 * * * /path/to/script.sh >> /var/log/myscript.log 2>&1

The >> appends to the log file rather than overwriting it, and 2>&1 redirects stderr to the same location as stdout.

Create separate logs for stdout and stderr:

0 12 * * * /path/to/script.sh >> /var/log/myscript.out 2>> /var/log/myscript.err

This approach keeps normal output and errors in separate files for easier analysis.

Add timestamps to entries: Wrap the command with a date command to timestamp each run:

0 12 * * * (date; /path/to/script.sh) >> /var/log/myscript.log 2>&1

This inserts the current date and time before the script output.

Use structured logging: For more advanced logging, have your script output in a structured format like JSON:

echo "{\"timestamp\":\"$(date -Iseconds)\",\"event\":\"backup_started\"}" >> /var/log/script.json
# Script logic
echo "{\"timestamp\":\"$(date -Iseconds)\",\"event\":\"backup_completed\",\"status\":\"success\"}" >> /var/log/script.json

This makes it easier to parse logs programmatically and integrate with monitoring systems.

Log rotation awareness: When implementing logging, consider how logs will be rotated. Using the system's designated log directories (like /var/log/) usually means your logs will be included in the system's standard log rotation.

Proper output capture prevents the dreaded situation of having cron jobs fail silently with no trace of what went wrong.
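Once logs are JSON, individual fields can be pulled out mechanically. A quick-and-dirty sketch with grep and sed (for real pipelines a JSON-aware tool like jq is a better fit; the log contents here are fabricated):

```shell
#!/usr/bin/env bash
json_log=$(mktemp)
cat > "$json_log" <<'EOF'
{"timestamp":"2025-01-15T12:00:01+00:00","event":"backup_started"}
{"timestamp":"2025-01-15T12:04:11+00:00","event":"backup_completed","status":"success"}
EOF

# Pull the status field from the completion event
status=$(grep '"event":"backup_completed"' "$json_log" \
    | sed 's/.*"status":"\([^"]*\)".*/\1/')
echo "$status"
# Prints: success
rm -f "$json_log"
```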

Implement Rotation to Prevent Disk Space Issues

Unmanaged logs can grow until they fill your disk. Set up log rotation to keep them in check:

Create a log rotation configuration:

sudo nano /etc/logrotate.d/custom-cron

Add these lines:

/var/log/cron.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}

This rotates your cron log weekly, keeps 4 weeks of archives, and compresses old logs.

Log files that grow indefinitely can cause serious problems, including system crashes when disks fill up. Implementing proper log rotation is a crucial practice for any production environment. Here's a detailed explanation of how to set up and customize log rotation for your cron logs:

  1. Understand logrotate: Ubuntu uses the logrotate utility to manage log file rotation. It's typically run daily via a system cron job and processes configuration files in /etc/logrotate.d/.
  2. Configuration options explained:
/var/log/cron.log {
    weekly             # Rotate logs once per week
    rotate 4           # Keep 4 rotated log files before deleting
    compress           # Compress rotated logs with gzip
    missingok          # Don't error if the log file is missing
    notifempty         # Don't rotate empty log files
    create 0640 root adm # Create new log files with these permissions
    dateext            # Add date extension to rotated logs
    postrotate         # Commands to run after rotation
        systemctl reload rsyslog >/dev/null 2>&1 || true
    endscript
}
  3. Additional useful options:
    • size 10M: Rotate when the log reaches 10MB, regardless of time
    • daily: Rotate logs daily instead of weekly
    • delaycompress: Compress logs on the next rotation cycle
    • maxage 60: Delete rotated logs older than 60 days
  4. Test your configuration: Verify your logrotate configuration without actually rotating files:
sudo logrotate -d /etc/logrotate.d/custom-cron

This dry run shows what would happen without making changes.

  5. Force immediate rotation: To test the actual rotation process:
sudo logrotate -f /etc/logrotate.d/custom-cron
  6. Multiple log paths: You can specify multiple log files in one configuration:
/var/log/cron.log /var/log/custom_jobs/*.log {
    weekly
    rotate 4
    compress
}
Properly configured log rotation balances the need for historical log data with system resource constraints, ensuring your logs remain useful without becoming a liability.

💡
Last9 offers full monitoring support, including alerts and notifications. But no matter what tool you use, alerting often runs into the same issues: gaps in coverage, alert fatigue, and stale rules. These aren’t problems with quick fixes—but ones worth solving thoughtfully.

Configuring Real-Time Alerts for Failed Jobs

For important tasks, prompt notification when something goes wrong can save valuable troubleshooting time.

Email Notification Configuration: Setting Up Job Status Emails

Add this to your crontab for job alerts:

MAILTO=your@email.com
0 12 * * * /path/to/script.sh

To disable email notifications completely:

MAILTO=""
0 12 * * * /path/to/script.sh

Cron has a built-in email notification system that can alert you when jobs produce output or errors. Here's how to configure and customize these notifications:

Understanding cron's email behavior: By default, cron collects any output (both stdout and stderr) from your jobs and emails it to the user who owns the crontab. This happens only if there is output – silent jobs don't trigger emails.

Configuring the mail system: For email notifications to work, your system needs a properly configured Mail Transfer Agent (MTA) like Postfix or Sendmail. On minimal server installations, you may need to install and configure one:

sudo apt update
sudo apt install postfix
sudo dpkg-reconfigure postfix  # For interactive configuration

Specifying a custom email recipient: Set the MAILTO variable at the top of your crontab to direct notifications to a specific address:

MAILTO=your@email.com
0 12 * * * /path/to/script.sh

You can use multiple email addresses separated by commas.

Scope of MAILTO: The MAILTO setting applies to all jobs that follow it in the crontab until another MAILTO is defined. To use different email addresses for different jobs:

MAILTO=admin@example.com
0 12 * * * /path/to/critical-script.sh

MAILTO=developer@example.com
0 14 * * * /path/to/development-script.sh

Disabling email notifications: If you're using another notification method or logging system, set MAILTO to an empty string to disable emails entirely:

MAILTO=""
0 12 * * * /path/to/script.sh

Custom content types and subjects: Some cron implementations pass extra variables such as CONTENT_TYPE through to the generated mail headers:

MAILTO=your@email.com
CONTENT_TYPE=text/plain; charset=utf-8
0 12 * * * /path/to/backup.sh

Note that a SUBJECT variable is not part of standard Vixie cron – treat custom subjects as implementation-specific and test before relying on them.

Testing email notifications: To verify your email setup, create a simple test job:

* * * * * echo "Cron email test at $(date)"

This sends an email every minute; remove the entry once you've confirmed delivery.

Email notifications provide a simple way to stay informed about job failures without having to actively check logs, making them ideal for critical tasks that require immediate attention.

Job Tracking Implementation: Creating a Status-Reporting Wrapper Script

Create a wrapper script that reports job status:

#!/bin/bash
# cron_wrapper.sh

LOG_FILE="/var/log/cron_jobs.log"
JOB_NAME="$1"
shift
COMMAND=("$@")   # remaining arguments form the command, preserving quoting

echo "[$(date)] Starting job: $JOB_NAME" >> "$LOG_FILE"

start_time=$(date +%s)
"${COMMAND[@]}"
exit_code=$?
end_time=$(date +%s)
duration=$((end_time - start_time))

if [ $exit_code -eq 0 ]; then
    echo "[$(date)] Job $JOB_NAME completed successfully in ${duration}s" >> "$LOG_FILE"
else
    echo "[$(date)] Job $JOB_NAME FAILED with exit code $exit_code after ${duration}s" >> "$LOG_FILE"
    # Add notification logic here
fi

exit $exit_code

Then use it in your crontab:

0 12 * * * /path/to/cron_wrapper.sh "Daily Backup" /path/to/backup.sh

For more sophisticated monitoring, creating a wrapper script gives you precise control over how job execution is tracked and reported. This approach allows you to implement custom logging, timing, alerting, and even retry logic for your cron jobs.

Let's break down the benefits and implementation details of a status-reporting wrapper script:

  1. Enhanced logging: The wrapper creates structured log entries that include:
    • Job name for easy identification
    • Timestamps for start and completion
    • Duration of execution
    • Exit status (success or failure)
    • Error codes for debugging
  2. Execution time tracking: By recording the start and end times, you can monitor job performance trends over time and identify jobs that are taking longer than expected.
  3. Consistent error handling: The wrapper provides a standard way to detect and report failures across all your cron jobs, regardless of how the underlying scripts are written.
  4. Resource usage tracking: Extend the wrapper to monitor resource utilization:
# Record peak memory usage (GNU time)
/usr/bin/time -f "Memory: %M KB" -o "/tmp/resource_$$" "${COMMAND[@]}"
peak_memory=$(grep "Memory" "/tmp/resource_$$" | cut -d' ' -f2)
echo "[$(date)] Job $JOB_NAME used $peak_memory KB of memory" >> "$LOG_FILE"
  5. Integrating with monitoring systems: Add metrics submission to monitoring platforms:
# Send metrics to Prometheus Pushgateway
echo "cron_job_duration_seconds{job=\"$JOB_NAME\"} $duration" | \
curl --data-binary @- http://pushgateway:9091/metrics/job/cron

# Record job status (0 for success, non-zero for failure)
echo "cron_job_status{job=\"$JOB_NAME\"} $exit_code" | \
curl --data-binary @- http://pushgateway:9091/metrics/job/cron
  6. Advanced usage - retry logic: Enhance the wrapper to automatically retry failed jobs:
max_retries=3
retry_count=0

while [ $retry_count -lt $max_retries ]; do
    "${COMMAND[@]}"
    exit_code=$?

    if [ $exit_code -eq 0 ]; then
        break
    fi

    retry_count=$((retry_count + 1))
    echo "[$(date)] Job $JOB_NAME failed, retry $retry_count of $max_retries" >> "$LOG_FILE"
    sleep 60  # Wait before retrying
done
  7. Customizable notifications: You can add custom notification logic to the wrapper script:
if [ $exit_code -ne 0 ]; then
    # Send Slack notification
    curl -X POST -H 'Content-type: application/json' \
    --data "{\"text\":\"❌ Job $JOB_NAME FAILED with exit code $exit_code\"}" \
    https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK

    # Or send email
    echo "Job $JOB_NAME failed with exit code $exit_code" | \
    mail -s "Cron Job Failure Alert" admin@example.com
fi

A well-designed wrapper script transforms basic cron scheduling into a robust job execution framework with proper observability, making it much easier to maintain reliable scheduled tasks in production environments.

💡
ZFS can be a powerful addition to your Ubuntu setup—our in-depth Ubuntu ZFS guide covers installation, configuration, and best practices in detail.

Syslog Integration: Using the logger Command for Centralized Logging

The logger command writes directly to syslog, making it ideal for cron jobs:

0 12 * * * /path/to/script.sh 2>&1 | logger -t mycronjob

This tags all output with "mycronjob" for easy filtering.

Integrating cron job output with your system's central syslog provides several advantages, especially in environments where you're already aggregating and monitoring syslog data. The logger command provides a simple way to inject custom messages and script output into the syslog stream.

Here's how to use logger effectively with cron jobs:

Basic usage with cron: Pipe all script output to logger:

0 12 * * * /path/to/script.sh 2>&1 | logger -t mycronjob

The -t option adds a tag to your syslog entries, making them easier to identify and filter.

Customizing priority levels: logger supports different severity levels that can help with filtering and alerting:

# For normal operations
0 12 * * * /path/to/script.sh 2>&1 | logger -t mycronjob -p user.info

# For critical scripts where failures should trigger alerts
0 0 * * * /path/to/backup.sh || logger -t backup -p user.crit "Backup failed!"

Common priority levels include:

    • user.info: Normal informational messages
    • user.notice: Important but normal events
    • user.warning: Warning conditions
    • user.err: Error conditions
    • user.crit: Critical conditions that require immediate attention

Using logger within scripts: For more granular logging, use logger throughout your script:

#!/bin/bash

logger -t mybackup "Starting database backup process"

if pg_dump -U postgres mydb > /backup/mydb.sql; then
    logger -t mybackup -p user.notice "Database backup successful"
else
    logger -t mybackup -p user.err "Database backup failed with code $?"
fi

Structured logging with logger: With util-linux logger, add RFC 5424 structured data for better parsing:

logger -t backup --id=$$ --rfc5424 --sd-id 'job@example' --sd-param 'name="database-backup"' --sd-param 'type="full"' "Backup completed successfully"

Structured data makes automated processing of logs much easier.

Network logging: logger can send directly to a remote syslog server:

logger -t backup --tcp -n logserver.example.com -P 1514 "Remote backup completed"

Remote syslog integration: When your local syslog daemon is configured to forward to a remote server, logger messages are forwarded automatically, creating a centralized record of all job executions across your infrastructure.

Viewing logger output: To see your logged messages:

# View all logs with your tag
grep mycronjob /var/log/syslog

# For systemd-based systems, use journalctl
journalctl -t mycronjob

# Filter by priority level
journalctl -t mycronjob -p err

Using logger with cron jobs integrates your scheduled tasks into your existing log management infrastructure, providing consistency with how other system events are logged and monitored.

How to Connect Cron Logs to Observability Platforms

For production environments, manual log checking doesn't scale well. Connecting your cron logs to an observability platform gives you better visibility into job runs, failures, and performance issues.

Last9 is one option that integrates with OpenTelemetry and Prometheus, helping you monitor metrics, logs, and traces from your scheduled jobs alongside other systems. Other popular tools include:

  • Grafana Loki – for aggregating and querying logs efficiently.
  • Fluent Bit + Elasticsearch – lightweight log shipping combined with powerful search.
  • Datadog – provides log collection, analysis, and alerting with built-in cron monitoring features.

To send your cron logs to any observability platform:

  1. Install the appropriate agent or collector (e.g., OTLP Collector, Fluent Bit).
  2. Configure it to monitor your cron logs (e.g., /var/log/syslog, /var/log/cron, or custom log files).
  3. Set up alerts for failed jobs, missed runs, or unusual patterns.

This approach helps centralize and automate monitoring for cron jobs across all your environments—no more guesswork when scheduled tasks fail silently.
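As a concrete illustration of step 2, a minimal Fluent Bit configuration that tails syslog and keeps only cron entries might look like this (a sketch; the tag name and the stdout output are assumptions for demonstration, not a canonical config):

```ini
[INPUT]
    Name    tail
    Path    /var/log/syslog
    Tag     cron.logs

[FILTER]
    Name    grep
    Match   cron.logs
    Regex   log CRON

[OUTPUT]
    Name    stdout
    Match   cron.logs
```

In a real deployment you'd replace the stdout output with your platform's destination (e.g. an OTLP or Elasticsearch output).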

Conclusion

Cron jobs are easy to set and forget—until something fails and you’re left digging through unclear logs. Setting up proper logging isn’t about perfection; it’s about making problems easier to spot when they happen.

💡
And if you’d like to talk through anything further, our Discord community is open—there’s a dedicated channel where you can discuss your specific use case with other developers.

FAQs

How long are cron logs kept by default?
On Ubuntu, log rotation usually keeps syslog files for 7–14 days. You can adjust this using logrotate.

Can I format cron logs in JSON or other structured formats?
Yes. Use a wrapper script to log in JSON. Example:

echo "{ \"timestamp\": \"$(date -Iseconds)\", \"job\": \"$JOB_NAME\", \"status\": \"started\" }" >> $LOG_FILE

Why aren’t my cron job variables working as expected?
Cron jobs run in a limited environment. To debug, add this to your crontab:

* * * * * env > /tmp/cron_env.txt

Compare it with your normal shell to spot missing variables.
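The comparison itself can be scripted. A self-contained sketch (the "cron" environment below is fabricated with typical defaults; in practice it comes from the crontab entry above):

```shell
#!/usr/bin/env bash
# Fabricated stand-in for a real cron environment dump
printf 'HOME=/root\nLOGNAME=root\nPATH=/usr/bin:/bin\nSHELL=/bin/sh\n' > /tmp/cron_env.txt

# Dump the current shell environment, sorted for comparison
env | sort > /tmp/shell_env.txt
sort /tmp/cron_env.txt > /tmp/cron_env_sorted.txt

# Variables present in your shell but missing under cron
missing=$(comm -13 /tmp/cron_env_sorted.txt /tmp/shell_env.txt)
echo "$missing" | head
```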

Why do some cron jobs not show up in logs?
Make sure the cron service is running and check your crontab syntax. Invalid entries are often ignored without warnings.

How can I correlate cron logs with application logs?
Use a shared job ID across both logs. Example:

JOB_ID=$(date +%s)
echo "[JOB:$JOB_ID] Starting job" >> $CRON_LOG
/path/to/script.sh --job-id=$JOB_ID

How can I tell which user ran a cron job?
Cron logs include the username:

CRON[1234]: (root) CMD (/path/to/script.sh)
CRON[1235]: (appuser) CMD (/path/to/other.sh)

Ensure syslog captures entries from all users for full visibility.
