
Mar 26th, ‘25 / 10 min read

Ubuntu Crash Logs: Find, Fix, and Prevent System Failures

Learn how to find and use Ubuntu crash logs to troubleshoot issues, prevent future failures, and keep your system running smoothly.

If your system keeps crashing and you have no clue why, Ubuntu’s crash logs might have the answers. Whether you’re running a production server or just trying to keep your personal setup stable, these logs tell you exactly what went wrong.

Instead of sifting through endless system logs, Ubuntu gives you focused crash reports—kind of like a security camera that only records when something breaks. Let’s break down where to find these logs and how to make sense of them.

What Are Ubuntu Crash Logs?

Ubuntu crash logs are detailed records generated when an application or system component fails unexpectedly. Think of them as your system's black box flight recorder – they capture the moment things went south, including what processes were running, memory usage, and the specific error that triggered the crash.

These logs typically land in /var/crash and come packaged as .crash files. Each one contains the forensic evidence you need to solve your system mysteries.

Ubuntu uses a service called Apport to handle crash detection and reporting. When a program crashes, Apport jumps into action by:

  1. Capturing the program state at the moment of failure
  2. Collecting system information relevant to the crash
  3. Creating a structured .crash file with all this data
  4. Notifying you about the crash (those pop-ups you might see)

The resulting crash files aren't just simple text logs – they're structured reports containing binary data, stack traces, memory maps, and more. They're designed to give both humans and automated systems the info needed to diagnose what went wrong.
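Under the hood, a .crash file is mostly plain key: value text, with the binary fields (like the core dump) base64-encoded. One way to get a feel for the layout is to build a synthetic report and query it with standard tools – the values below are illustrative, not from a real crash:

```shell
# Create a synthetic report using the same key/value layout as a
# real Apport .crash file (values here are illustrative)
cat > /tmp/example.crash << 'EOF'
ProblemType: Crash
ExecutablePath: /usr/bin/example
Signal: 11
Package: example 1.0-1
EOF

# Text fields can be pulled out with ordinary tools
grep '^Signal:' /tmp/example.crash   # prints: Signal: 11
```

Real reports add many more fields, but the same grep-style querying works on any of the text ones.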

💡
If you're working with RabbitMQ, knowing how to monitor and troubleshoot its logs can save you from unexpected message queue issues. Read more.

Why You Should Care About Crash Logs

You're busy. We get it. But here's why these logs deserve your attention:

  • Fix problems faster – Stop guessing what went wrong and start knowing
  • Prevent future crashes – Spot patterns before they become persistent headaches
  • Improve your apps – Developers can pinpoint exact failure points
  • Keep your systems running – Proactive maintenance beats reactive firefighting

For DevOps teams, these logs are your early warning system. For everyday Ubuntu users, they're your ticket to a smoother computing experience.

Where to Find Ubuntu Crash Logs

Your crash data isn't hiding – you just need to know where to look:

| Location | Description | When to Check |
|----------|-------------|---------------|
| /var/crash/ | Primary crash file storage | First stop for recent crashes |
| /var/log/apport.log | Crash reporting service logs | To verify if crashes were detected |
| ~/.crash/ | User-specific crash reports | For user application crashes |
| /var/log/syslog | System-wide log file | For context around crashes |
| /var/log/kern.log | Kernel log file | For kernel-related crashes |
| ~/.xsession-errors | X session errors | For GUI application crashes |

The most common crash files you'll encounter include:

  • _usr_bin_program-name.1000.crash – Application crashes
  • _usr_lib_program.1000.upload – Crash reports ready to be uploaded
  • linux-image-[version].[timestamp].crash – Kernel crashes (these are gold for debugging)

The naming convention for these files follows a pattern:

  • Underscores replace the forward slashes in the path to the executable
  • The number (like 1000) is the user ID that ran the program
  • The extension indicates the processing stage (.crash, .upload, etc.)
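That mapping is easy to reproduce in the shell. A sketch (the UID of 1000 is just the usual first desktop user):

```shell
# Derive the crash-file name Apport would produce for an executable:
# slashes become underscores, then the crashing user's UID and the
# .crash suffix are appended
exe=/usr/bin/firefox
uid=1000
crash_name="$(printf '%s' "$exe" | tr '/' '_').${uid}.crash"
echo "$crash_name"   # prints: _usr_bin_firefox.1000.crash
```

Knowing the mapping is handy in reverse, too: a glance at a file name in /var/crash/ tells you which binary crashed and who was running it.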

When Apport is enabled (it's on by default in desktop installations), these files are automatically generated. In server environments, you might need to explicitly enable crash reporting with:

sudo systemctl enable apport.service
sudo systemctl start apport.service

To check if Apport is running:

sudo systemctl status apport.service

For Docker containers or other specialized environments, you might need custom configuration to capture crash logs properly.

💡
If you're digging through logs to figure out what went wrong, understanding system logs is a good place to start. They capture everything from errors to performance issues. Read more.

How to Read Ubuntu Crash Logs

Crash logs can look intimidating at first glance. Here's your decoder ring:

# View a crash report summary
apport-cli -c /var/crash/_usr_bin_firefox.1000.crash

# Extract the full crash report into a directory of per-field files
apport-unpack /var/crash/_usr_bin_firefox.1000.crash /tmp/crash-report
ls /tmp/crash-report        # CoreDump here is binary; cat only the text fields
cat /tmp/crash-report/ProcStatus

For a more detailed analysis:

# Install the crash analysis tools if you haven't already
sudo apt install apport-retrace

# Get a detailed stack trace with debugging symbols
sudo apport-retrace --stdout /var/crash/_usr_bin_firefox.1000.crash

The key sections to focus on:

  1. ExecutablePath – What program crashed
  2. Signal – The error signal (SIGSEGV, SIGABRT, etc.)
  3. Stacktrace – The sequence of function calls leading to the crash
  4. ProcMaps – Memory layout at crash time
  5. ProcStatus – Process state information
  6. Package – The package version that crashed
  7. Dependencies – Other packages that might be involved
  8. ProcEnviron – Environment variables at the time of the crash
  9. UserGroups – User permissions that might affect the application
  10. CoreDump – The actual memory dump (binary data) from the crash

A typical crash file contains several dozen fields, but these are your starting points for investigation. The real gold is often in the stack trace, which shows exactly what the program was doing when it crashed.
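Since apport-unpack (shown above) writes each field to its own file inside the target directory, those starting points are easy to loop over. A sketch against a synthetic unpacked report, so it runs anywhere:

```shell
# Simulate the per-field layout apport-unpack produces, with
# illustrative values standing in for a real report
mkdir -p /tmp/crash-report
echo /usr/bin/example  > /tmp/crash-report/ExecutablePath
echo 11                > /tmp/crash-report/Signal
echo 'example 1.0-1'   > /tmp/crash-report/Package

# Print the fields that usually matter first
for field in ExecutablePath Signal Package; do
  printf '%s: %s\n' "$field" "$(cat "/tmp/crash-report/$field")"
done
```

Point the same loop at a real unpacked report directory and you get a quick triage summary before diving into the stack trace.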

Let's look at a sample stack trace snippet and break it down:

#0  0x00007f9d5b5c1428 in ?? () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#1  0x00007f9d5b5c16c9 in gtk_main_do_event () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#2  0x00007f9d5ae9477c in ?? () from /usr/lib/x86_64-linux-gnu/libgdk-3.so.0
#3  0x00007f9d58091097 in g_main_context_dispatch () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#4  0x00007f9d580912f0 in ?? () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0

Reading from the bottom up:

  • The event originated in GLib's main loop (#4 and #3, g_main_context_dispatch)
  • It was dispatched through GDK and GTK event handling (#2 and #1)
  • The crash itself happened at frame #0, inside libgtk-3
  • The "??" entries mean debug symbols aren't available for those functions

For a more readable stack trace, install debug symbols:

sudo apt install ubuntu-dbgsym-keyring
sudo tee /etc/apt/sources.list.d/ddebs.list << EOF
deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe multiverse
deb http://ddebs.ubuntu.com $(lsb_release -cs)-updates main restricted universe multiverse
EOF
sudo apt update
sudo apt install firefox-dbgsym  # Example for Firefox

Common Crash Signals and What They Mean

When your system throws these signals, here's what it's trying to tell you:

| Signal | Name | Common Cause |
|--------|------|--------------|
| SIGSEGV (11) | Segmentation Fault | Program tried accessing invalid memory |
| SIGABRT (6) | Abort | Program detected an issue and terminated itself |
| SIGILL (4) | Illegal Instruction | Program tried to run an invalid CPU instruction |
| SIGBUS (7) | Bus Error | Memory alignment problem |
| SIGFPE (8) | Floating Point Exception | Division by zero or similar math error |
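You don't need a crash file to see these signals in action: a shell reports a child killed by a signal as exit status 128 plus the signal number, so the table above maps directly onto exit codes you've probably seen before:

```shell
# A process killed by a signal exits with status 128 + the signal
# number, so SIGSEGV (11) shows up as 139 and SIGABRT (6) as 134
sh -c 'kill -11 $$' || echo "exit status: $?"   # prints: exit status: 139
sh -c 'kill -6 $$'  || echo "exit status: $?"   # prints: exit status: 134
```

This is why a mysterious "exit code 139" in a CI log or container status almost always means a segfault.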
💡
If you're managing logs across multiple systems, a syslog server can make life easier by centralizing everything in one place. Read more.

Practical Troubleshooting with Crash Logs

Let's get practical. Here's your playbook for turning those crash logs into solutions:

Step 1: Check If You Have Recent Crashes

ls -la /var/crash/

Step 2: Examine the Crash Details

# For a human-readable summary
apport-cli -c /var/crash/your-crash-file.crash

# For developers who want the raw details
gdb /path/to/executable /tmp/unpacked-crash/CoreDump

Step 3: Look for Patterns

Are you seeing the same program crash repeatedly? Is it happening at specific times? Check your system journal for context:

journalctl --since="1 hour ago" | grep crashed
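Repeat offenders also stand out directly from the file names in /var/crash/, since everything before the first dot identifies the executable. A sketch using a synthetic directory so the pipeline is runnable anywhere:

```shell
# Synthetic crash files standing in for /var/crash/
mkdir -p /tmp/crash-demo
touch /tmp/crash-demo/_usr_bin_firefox.1000.crash \
      /tmp/crash-demo/_usr_bin_firefox.1000.crash.old \
      /tmp/crash-demo/_usr_sbin_apache2.0.crash

# Count crash files per executable; repeat offenders sort to the top
# (here: firefox appears twice, apache2 once)
ls /tmp/crash-demo | cut -d. -f1 | sort | uniq -c | sort -rn
```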

Step 4: Find Known Issues

Once you identify the package causing trouble, check if others have faced the same problem:

ubuntu-bug /var/crash/your-crash-file.crash

This command doesn't just report the bug – it shows you similar issues others have reported.

Advanced Crash Log Analysis

Ready to level up? These techniques will put you in the power-user category:

Using GDB for Deep Dives

The GNU Debugger is your best friend for serious crash analysis:

# Install GDB if you haven't already
sudo apt install gdb

# Analyze a core dump
gdb /usr/bin/program-name /tmp/crash-report/CoreDump

# At the GDB prompt, get a backtrace
(gdb) bt full

# Examine variables at the crash point
(gdb) frame 0
(gdb) info locals

# Check the actual assembly at the crash point
(gdb) disassemble

GDB commands that are particularly useful for crash analysis:

  • info registers – See CPU register values at crash time
  • x/20x $sp – Examine memory at the stack pointer ($sp works on any architecture; $esp/$rsp are 32- and 64-bit x86 specific)
  • thread apply all bt – Show backtraces for all threads
  • print variable_name – View the value of a specific variable
  • set pagination off – Avoid the "—Type <return> to continue" prompts

Core Dump Analysis with ABRT

ABRT (Automatic Bug Reporting Tool) is the crash-collection stack on Fedora and RHEL rather than Ubuntu – it isn't in Ubuntu's standard repositories – but if you manage a mixed fleet, the workflow on those systems looks like this:

# List crash reports
abrt-cli list

# Get detailed info on a specific crash
abrt-cli info -d /var/spool/abrt/ccpp-2023-04-15-12:34:56.123456

Enabling More Detailed Crash Information

Make your crash logs even more useful:

# Edit the Apport configuration
sudo nano /etc/apport/crashdb.conf

# Make sure the report types you want collected are listed here
'problem_types': ['Bug', 'Package', 'Crash', 'KernelCrash', 'KernelOops'],

For production server environments, you might want to adjust kernel core dump behavior:

# Edit sysctl configuration
sudo nano /etc/sysctl.conf

# Add or modify these lines for more detailed core dumps
kernel.core_pattern = /var/crash/core.%e.%p.%t
kernel.core_uses_pid = 1
fs.suid_dumpable = 2

Then apply the changes:

sudo sysctl -p
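Before trusting that setup, it's worth confirming what the kernel will actually do. Both checks below are read-only and need no sudo:

```shell
# Where the kernel writes core dumps (reflects kernel.core_pattern)
cat /proc/sys/kernel/core_pattern

# Whether the current shell allows core files at all
# ("0" means cores are disabled for processes it starts)
ulimit -c
```

If `ulimit -c` prints 0, no core file is written no matter what the pattern says – see the security limits section below for raising it persistently.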

Automated Analysis with the Ubuntu Error Tracker

For large-scale environments, Ubuntu's whoopsie daemon can forward Apport reports to the Ubuntu Error Tracker:

# whoopsie ships with desktop installs; add it on servers if you
# want crash reports forwarded automatically
sudo apt install whoopsie

The tracker groups identical crash signatures, so a problem recurring across many machines stands out instead of looking like isolated incidents.

Memory Leak Detection with Valgrind

Many crashes stem from memory issues that happen long before the actual crash:

# Install Valgrind
sudo apt install valgrind

# Run your program under Valgrind to detect memory issues
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose program-name

While this doesn't analyze crash logs directly, it can help you understand why they're happening in the first place.

💡
If you're dealing with logs daily, understanding what log data actually is can help you make sense of all that information. Read more.

Preventing Future Crashes

Now you're a crash log detective, but prevention is better than cure:

Set up proactive crash monitoring

# Create a simple crash monitoring script
sudo nano /usr/local/bin/crash-monitor.sh

Add this content:

#!/bin/bash
CRASH_COUNT=$(ls -1 /var/crash/ | wc -l)
if [ "$CRASH_COUNT" -gt 0 ]; then
  echo "Warning: $CRASH_COUNT crash files found in /var/crash/"
  ls -la /var/crash/
fi

Make it executable and add to cron:

sudo chmod +x /usr/local/bin/crash-monitor.sh
echo "0 * * * * /usr/local/bin/crash-monitor.sh | mail -s 'Crash Report' admin@example.com" | sudo tee -a /etc/crontab

Run stress tests on critical systems

# Install stress-testing tools
sudo apt install stress-ng

# Test CPU stability
stress-ng --cpu 8 --timeout 60s

# Test memory stability
stress-ng --vm 2 --vm-bytes 2G --timeout 60s

Use systemd's coredump collection – More reliable than traditional methods

# Installing the package registers it as the kernel's core handler
# automatically – there's no separate service to enable
sudo apt install systemd-coredump

# View collected dumps
coredumpctl list

# Examine a specific dump
coredumpctl info PID
coredumpctl debug PID

Enable core dump limits in security settings

# Edit security limits
sudo nano /etc/security/limits.conf

# Add these lines
* soft core unlimited
* hard core unlimited

Check your hardware – Run a memory test if crashes persist

sudo apt install memtest86+
# Reboot and select memtest from the GRUB menu

Monitor system resources – Crashes often happen when you're low on memory or disk space

# Install better monitoring tools
sudo apt install htop iotop sysstat

# Check memory usage
htop

# Check disk space
df -h

# Monitor I/O operations
sudo iotop

# Set up ongoing performance monitoring
sudo systemctl enable sysstat
sudo systemctl start sysstat

Keep your system updated – Many crashes come from outdated packages

sudo apt update && sudo apt upgrade

When to Call for Backup

Some crashes need more than just your attention:

  • Security-related crashes – Report these to the Ubuntu security team through their security portal at ubuntu.com/security
  • Hardware driver issues – Check with the hardware manufacturer or consult the Ubuntu Hardware Certification database
  • Mission-critical systems – Consider Canonical support for enterprise environments
  • Kernel panics – These severe crashes often need specialized knowledge; the Ubuntu kernel team maintainers may need to get involved
  • Database corruption crashes – Data recovery specialists might be needed alongside software troubleshooting

For enterprise users, Canonical offers several support tiers:

  • Ubuntu Advantage for Infrastructure
  • Ubuntu Pro
  • Extended Security Maintenance (ESM)

Each provides different levels of access to Canonical engineers who can help with severe crash debugging.

💡
If your logs are piling up faster than you can manage, having a solid log retention strategy is key to keeping things efficient. Read more.

Practical Troubleshooting Examples

Here are some crash scenarios you can solve using these techniques:

Case 1: The Mysterious Apache Crash

Symptoms: Apache crashes every few days with no pattern
Crash logs showed: SIGSEGV in a third-party module
Solution: Crash logs revealed an outdated PHP module trying to access freed memory. Updating the module fixed the issue.

Case 2: The Resource-Hungry Container

Symptoms: Docker containers crashing randomly

Crash logs showed: Kernel OOM (Out of Memory) killer terminating processes

Solution: The logs pointed to memory limits being too restrictive. Adjusting cgroup resources solved it.

Case 3: The Corrupted Dependency

Symptoms: Multiple applications crash with the same error

Crash logs showed: Crashes in shared library functions

Solution: A system library had become corrupted during an interrupted update. Crash logs identified the specific package that needed reinstalling.

Crash Logs in CI/CD Pipelines

For DevOps teams, integrating crash log analysis into your pipelines can catch issues before they hit production:

# Example Jenkins pipeline step
stage('Crash Analysis') {
  steps {
    sh '''
      # Run the test suite
      ./run_tests.sh
      
      # Check for crash files
      if [ $(ls -1 /var/crash/ | wc -l) -gt 0 ]; then
        echo "Tests generated crash files:"
        ls -la /var/crash/
        exit 1
      fi
    '''
  }
}
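If you're not on Jenkins, the same gate is easy to express as a plain shell function that any CI system can call (fail_on_crashes is a name made up for this sketch, not a standard tool):

```shell
#!/bin/sh
# fail_on_crashes DIR: return nonzero if DIR contains crash files
# (fail_on_crashes is a hypothetical helper, not a packaged command)
fail_on_crashes() {
  dir=${1:-/var/crash}
  count=$(find "$dir" -maxdepth 1 -name '*.crash' 2>/dev/null | wc -l)
  if [ "$count" -gt 0 ]; then
    echo "Found $count crash file(s) in $dir:"
    ls -la "$dir"
    return 1
  fi
  return 0
}

# Example: an empty scratch directory passes the check
mkdir -p /tmp/ci-crash-demo
fail_on_crashes /tmp/ci-crash-demo && echo "no crashes detected"
```

Call it after your test suite and let the nonzero return status fail the build.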

Wrap Up

The path from crash detection to solution isn't always straight, but with these tools in your toolkit, you'll rarely be left guessing what went wrong.

💡
Crash logs still giving you headaches? Join our Discord community, share your toughest crash puzzles, and let's crack them together.

FAQs

How do I disable crash reporting if I don't want it?

You can disable Apport with these commands:

sudo systemctl stop apport.service
sudo systemctl disable apport.service

Or make the change permanent by editing the configuration file:

sudo nano /etc/default/apport

Change enabled=1 to enabled=0 and save the file.
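If you'd rather not open an editor, the same change can be scripted. A sketch that practices on a copy first (the real file is /etc/default/apport and needs sudo):

```shell
# Practice on a copy of the config; point sed at
# /etc/default/apport (with sudo) to make the real change
printf 'enabled=1\n' > /tmp/apport.default
sed -i 's/^enabled=1/enabled=0/' /tmp/apport.default
cat /tmp/apport.default   # prints: enabled=0
```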

Are my crash logs sent to Canonical automatically?

No. While Ubuntu has a crash reporting system, it always asks for your permission before sending any data. You'll see a dialog asking if you want to report the problem when a crash is detected.

How long are crash logs kept on my system?

By default, Ubuntu keeps crash logs until you manually remove them or until they're reported. Some configurations automatically clean up crash logs after 7 days.

To manually remove all crash logs:

sudo rm /var/crash/*

Can I use Ubuntu crash logs to get a refund for buggy software?

That's not typically how software works, especially in the open-source world. However, crash logs can be extremely valuable when submitting bug reports to developers, which helps improve the software for everyone.

Why does my system slow down right after a crash?

Apport, Ubuntu's crash detection system, consumes resources to collect information about the crash. This can temporarily slow your system. The slowdown usually resolves once the crash report is generated.

Can I analyze Ubuntu crash logs on Windows or Mac?

Yes, but it's more challenging. You'd need to transfer the crash files to a Linux system or set up a Linux virtual machine. Some specialized tools like Ubuntu's apport-retrace only run on Linux systems.

How do I report crashes for proprietary software?

The process varies by vendor. Some proprietary software uses Ubuntu's crash reporting system, while others have their own. Check the vendor's support documentation for specific instructions.

Do container crashes generate Ubuntu crash logs?

Not by default. Containers are isolated environments, so crashes inside containers don't typically generate host-level crash logs. You'll need to configure your container runtime to map crash reporting from the container to the host system.

Authors

Preeti Dewani
Technical Product Manager at Last9