Debugging is rarely anyone's idea of a good time. You're cruising along, building something cool, when suddenly your code breaks and you're stuck digging through console outputs that look like they were written by a robot having an existential crisis.
Enter Loguru – the Python logging library that feels like it was built for humans, not machines.
## What Makes Loguru Different?
Right off the bat, Loguru takes a fresh approach to logging. Unlike Python's standard logging module with its clunky configuration and formatter classes, Loguru works straight out of the box:
```python
from loguru import logger

logger.debug("That's how simple it is!")
```
That's it. No handlers, no formatters, no stress.
## The Setup is Pretty Simple
With the standard library, you'd be writing something like:
```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[logging.FileHandler("debug.log"), logging.StreamHandler()]
)
logger = logging.getLogger(__name__)
logger.debug("This took way too much code")
```
But with Loguru? Check this out:
```python
from loguru import logger

logger.add("debug.log")
logger.debug("Done. That's it.")
```
Less code. Fewer headaches. More time to grab coffee.
## Installation? That's Easy Too
Getting started with Loguru takes just one command:
```bash
pip install loguru
```
No extra dependencies, and no complex setup. You'll be up and running in seconds.
## Color-Coded Clarity That Makes Sense
Your terminal doesn't have to look like the Matrix. Loguru color-codes its console output out of the box, and the colors aren't just for show – they help you scan through logs and catch errors without losing your mind.
Each log level gets its own color scheme:
- DEBUG: Blue – for when you're tracking variables
- INFO: Green – for normal application flow
- WARNING: Yellow – for things that look fishy
- ERROR: Red – for when things go wrong
- CRITICAL: Bold red – for when everything's on fire
You can even customize these colors to match your personal preferences or terminal theme.
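For example, here's a minimal sketch that recolors an existing level with `logger.level()` (the markup tags are real Loguru syntax; the specific color choice is just an example):

```python
import sys
from loguru import logger

# Update the color of an existing level (its severity number can't be changed this way)
logger.level("INFO", color="<blue><bold>")

logger.remove()  # drop the default sink so messages aren't duplicated
logger.add(sys.stderr, colorize=True, format="<level>{level}: {message}</level>")
logger.info("Rendered in bold blue")
```

Colors only render on sinks added with `colorize=True` and a format that uses `<level>` markup.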
## Log Levels That Actually Make Sense
Loguru gives you familiar log levels, but makes them much more intuitive to use:
```python
logger.trace("For uber-detailed info")       # Even more detailed than debug (hidden by the default DEBUG-level sink)
logger.debug("For development detective work")
logger.info("For confirming things are working")
logger.success("For celebrating small wins") # This one's unique to Loguru!
logger.warning("For potential issues")
logger.error("For actual problems")
logger.critical("For 'wake me up at 3 AM' problems")
```
That extra `success` level is a nice touch – sometimes you want to highlight when things go right, not just when they go wrong.
## Exception Tracking That Doesn't Suck
Here's where Loguru really shines. When exceptions happen (and let's be honest, they always do), Loguru gives you the full picture:
```python
from loguru import logger

@logger.catch
def divide(a, b):
    return a / b

divide(1, 0)  # This would normally crash your program
```
Instead of a cryptic error message, you get:
- The exact function where things went wrong
- A clean traceback with syntax highlighting
- Variable values at the time of the crash
- The full stack trace with pretty formatting
It's like having a detective show up at the scene of the crime with all the evidence already bagged and tagged.
## The `@logger.catch` Decorator is Magic
That `@logger.catch` decorator is worth its weight in gold. It:
- Catches any exception in the decorated function
- Logs it with beautiful formatting
- Optionally re-raises it (`reraise=True`); otherwise the wrapped function returns `None` (or a fallback you set with `default=`)
- Can be configured to catch only specific exception types
```python
@logger.catch(ValueError, message="Something went wrong with those values")
def parse_config(config_string):
    # Your code here
    pass
```
This means you can wrap critical functions with minimal code and get incredible debugging information when things break.
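Here's a hedged sketch of those options in action – `reraise`, `default`, and `message` are real `logger.catch()` parameters, while the functions themselves are just placeholders:

```python
from loguru import logger

# Log the exception, then re-raise so callers still see it
@logger.catch(reraise=True)
def charge_card(amount):
    raise RuntimeError("payment gateway timeout")

# Log the exception and hand back a fallback value instead of None
@logger.catch(default=[], message="Falling back to an empty item list")
def load_items():
    raise IOError("items.json not found")

items = load_items()  # the error is logged, and items == []
```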
## Sink Configuration That's Actually Flexible
Want to send your logs to multiple places? Easy:
```python
import sys
from loguru import logger

# Console output
logger.add(sys.stdout, level="INFO")

# JSON file for machine processing
logger.add("app.json", serialize=True)

# Rotating file that won't fill your disk
logger.add("app.log", rotation="500 MB", compression="zip")

# Only errors and above to a special file
logger.add("errors.log", level="ERROR")

# Send critical errors to Slack (webhook_sink is any callable – see the sketch below)
logger.add(
    webhook_sink,
    level="CRITICAL",
    format="{message}"
)
```
Each destination is called a "sink" in Loguru terminology, and you can have as many as you need, each with its own configuration.
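That `webhook_sink` above isn't something Loguru ships – a sink can be any callable that accepts the formatted message. A rough sketch (the Slack URL is a placeholder, and `requests` is assumed to be installed):

```python
import requests

def webhook_sink(message):
    # Loguru passes the fully formatted log line; message.record holds the raw fields
    requests.post(
        "https://hooks.slack.com/services/XXX/YYY/ZZZ",  # placeholder URL
        json={"text": str(message)},
    )
```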
## Structured Logging for When You Need to Get Serious
Sometimes you need your logs machine-readable for analysis. Loguru has you covered:
logger.add("app.json", serialize=True)
logger.info("This will be JSON structured")
Now you've got JSON logs that tools like ELK Stack or Datadog can parse without breaking a sweat.
The serialized output includes:
- Timestamp (with timezone)
- Log level
- Message
- Module, function, and line number
- Process and thread IDs
- Exception info if present
- Custom attributes you add
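Each line of the file is a self-contained JSON object with a `text` key (the formatted line) and a `record` key (the structured fields), so reading them back is straightforward – a quick sketch:

```python
import json

with open("app.json") as f:
    for line in f:
        record = json.loads(line)["record"]
        print(record["level"]["name"], record["message"], record["extra"])
```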
Speaking of custom attributes...
## Adding Extra Context is a Breeze
Want to include extra info in every log message? Try this:
```python
# Add context to all subsequent logs
logger.configure(extra={"app_name": "MyAwesomeApp", "environment": "production"})

# These will now carry your extra fields (add {extra} to a sink's format to display them)
logger.info("Application starting up")
logger.warning("Running low on memory")
```
Or add context for just a specific section of code:
```python
with logger.contextualize(user_id=123, request_id="abc-456"):
    logger.info("Processing user request")  # Will include the user_id and request_id
```
This is incredibly useful for tracing requests through complex systems.
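There's also `logger.bind()`, which returns a new logger carrying the extra fields – handy when you'd rather pass context around than wrap code in a `with` block:

```python
# bind() attaches fields to a new logger object you can pass around
request_logger = logger.bind(user_id=123, request_id="abc-456")
request_logger.info("Processing user request")  # carries both fields

logger.info("No extra context here")  # the global logger is unaffected
```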
## Performance That Won't Slow You Down
| Library | Calls per second | Relative Speed | Memory Usage |
|---|---|---|---|
| Loguru | ~170,000 | 1x (baseline) | Low |
| Standard logging | ~80,000 | 0.47x | Medium |
| Print statements | ~210,000 | 1.23x | Minimal |
| Rich (another logging lib) | ~60,000 | 0.35x | High |
Loguru sits right in that sweet spot – faster than standard logging and not much slower than raw print statements, but with way more features.
And for the performance-obsessed, Loguru lets you switch logging off entirely for a given module with `logger.disable()`, and messages below a sink's minimum level are discarded cheaply – so debug logs cost you next to nothing in production.
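For example:

```python
from loguru import logger

# Switch off everything logged from a chatty module (hypothetical name)
logger.disable("my_package.hot_path")

logger.info("Logs from other modules still flow")

# Flip it back on when you need the detail again
logger.enable("my_package.hot_path")
```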
## Async Support for Modern Applications
Working with async/await? Loguru plays nice:
logger.add("async.log", enqueue=True) # Non-blocking logs
async def main():
logger.info("Starting async operation")
await some_coroutine()
logger.success("Async operation complete")
The `enqueue=True` parameter makes logging non-blocking, which is crucial in high-performance async applications.
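One companion call worth knowing about: `logger.complete()` waits until everything sitting in the queue has been processed, so you don't lose messages at shutdown:

```python
import asyncio
from loguru import logger

logger.add("async.log", enqueue=True)

async def main():
    logger.info("Doing async work")
    # Flush queued messages (and any coroutine sinks) before the loop closes
    await logger.complete()

asyncio.run(main())
```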
## Format String Customization That's Actually Readable
Want to change how your logs look? The format string is human-readable:
```python
logger.add(sys.stderr, format="{time} | {level} | {message} | {extra}")
```
Available format fields include:
- `{time}` – Timestamp (with optional format, e.g. `{time:YYYY-MM-DD}`)
- `{level}` – Log level
- `{message}` – The log message
- `{name}`, `{function}`, `{line}` – Where the log was called
- `{thread}`, `{process}` – Thread and process info
- `{exception}` – Exception info if present
- `{extra}` – Any extra contextual data
- Custom attributes via `{extra[attribute_name]}`
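Putting a few of those fields together – the `{level: <8}` padding is ordinary Python format-spec alignment:

```python
import sys
from loguru import logger

logger.remove()
logger.add(
    sys.stderr,
    format="{time:YYYY-MM-DD HH:mm:ss.SSS} | {level: <8} | {name}:{line} - {message}",
)
logger.info("Neatly aligned, timestamped output")
```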
## Context Managers for Temporary Tweaks
Need to ramp up logging just for a specific section of code? Loguru makes it clean:
```python
import sys

# Loguru has no built-in "temporary level" context manager; to capture more
# detail for a section, add a DEBUG sink and remove it by id afterwards
debug_sink_id = logger.add(sys.stderr, level="DEBUG")
logger.debug("This will be shown")
logger.remove(debug_sink_id)

# Add context to logs (contextualize *is* a context manager)
with logger.contextualize(user_id=123):
    logger.info("This log will include the user_id")

logger.info("This log won't have the context")
```
This beats adding and removing extra variables in your log calls.
## Practical Examples for Real-World Use
### Web Application Logging
Here's how you might set up Loguru in a web app:
```python
import random
import sys
import time

from flask import Flask, g, request
from loguru import logger

app = Flask(__name__)

# Setup logging
logger.remove()
logger.add(sys.stdout, colorize=True, level="INFO")
logger.add("app.log", rotation="10 MB", retention="1 week", level="DEBUG")
logger.add("errors.log", rotation="10 MB", level="ERROR")

@app.before_request
def before_request():
    g.start_time = time.time()
    request_id = request.headers.get('X-Request-ID', str(random.randint(1000000, 9999999)))
    # bind() is safe per-request, unlike reconfiguring the global logger on every call
    g.log = logger.bind(request_id=request_id)
    g.log.info(f"Request started: {request.method} {request.path}")

@app.after_request
def after_request(response):
    diff = time.time() - g.start_time
    g.log.info(f"Request completed in {diff:.2f}s with status {response.status_code}")
    return response

@app.errorhandler(Exception)
def handle_exception(e):
    # logger.exception() records the active traceback at ERROR level
    logger.exception("Unhandled exception")
    return "Server error", 500

@app.route('/')
def index():
    logger.debug("Rendering index page")
    return "Hello World"
```
### Data Processing Pipeline
For a data processing job:
```python
import pandas as pd
from loguru import logger

logger.add("data_pipeline.log", rotation="1 day")
logger.add(
    "data_errors.log",
    level="ERROR",
    filter=lambda record: "data error" in record["message"].lower(),
)

@logger.catch
def process_file(filename):
    logger.info(f"Processing file: {filename}")
    try:
        df = pd.read_csv(filename)
        logger.debug(f"Loaded {len(df)} rows")

        # Data validation
        if df.isnull().any().sum() > 0:
            logger.warning(f"Found {df.isnull().sum().sum()} missing values")

        # Processing steps (transform_data is your own function)
        df = transform_data(df)

        # Save results
        output_file = filename.replace('.csv', '_processed.csv')
        df.to_csv(output_file, index=False)
        logger.success(f"File processed successfully, saved to {output_file}")
        return df
    except pd.errors.ParserError as e:
        logger.error(f"Data error: Failed to parse {filename}: {str(e)}")
        raise
```
## How to Make Loguru Work for Your Workflow
Here's a setup that works great for most projects:
```python
import os
import sys

from loguru import logger

# Remove default handler
logger.remove()

# Environment-based config
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
ENV = os.getenv("ENVIRONMENT", "development")

# Add stderr handler with custom format
logger.add(
    sys.stderr,
    format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
    colorize=True,
    level="DEBUG" if DEBUG else "INFO"
)

# Add file handler for debug level
logger.add(
    f"{ENV}_debug_{{time}}.log",  # {{time}} survives the f-string as {time} for Loguru to expand
    rotation="500 MB",
    retention="10 days",
    level="DEBUG",
    compression="zip"
)

# Add file handler for errors only
logger.add(
    f"{ENV}_error_{{time}}.log",
    rotation="100 MB",
    retention="1 month",
    level="ERROR",
    backtrace=True,
    diagnose=True
)

# Configure app-wide context (the keyword is `extra`, not `extras`)
logger.configure(extra={"environment": ENV, "app_version": "1.2.3"})
```
This setup gives you:
- Clean console output with appropriate level based on environment
- Rotating debug logs so you don't fill your disk
- Separate error logs for quick problem identification
- Compressed old logs to save space
- Environment-specific filenames
- Global context for all log messages
## Integration with Other Libraries
Loguru plays well with other libraries too:
### Replacing the Standard Logging
Many libraries use the standard logging module. You can intercept these logs:
```python
import logging

from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Get the corresponding Loguru level if it exists
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find the caller from which the logged message originated
        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )

# Replace all handlers with the intercept
logging.basicConfig(handlers=[InterceptHandler()], level=0)

# Now libraries using standard logging will go through Loguru
import requests  # Uses standard logging internally
requests.get("https://example.com")  # These logs will go through Loguru
```
### Working with Django
For Django projects:
```python
# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
}
```

```python
# Then in your main app's __init__.py or elsewhere
import logging
from loguru import logger

# (Add the InterceptHandler as shown above)

# Intercept Django's logs
logging.basicConfig(handlers=[InterceptHandler()], level=0)
for name in logging.root.manager.loggerDict.keys():
    if name.startswith('django'):
        logging.getLogger(name).handlers = []
```
## When to Skip Loguru
Loguru isn't always the right choice. Skip it when:
- You're working on tiny scripts where print() does the job
- Your team has heavily invested in a different logging ecosystem
- You need ultra-specialized logging features that only exist in another library
- You're in an environment with strict dependencies where adding a new library isn't an option
- You need absolute maximum performance and every nanosecond counts
## Common Gotchas to Watch Out For
Even a great library has its quirks:
### The Default Logger is a Singleton
There's only one `logger` instance, unlike the standard library where you create separate loggers. This is by design, but it can surprise you if you're used to the standard approach.
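A quick illustration: because it's one shared object, a sink added in one module applies everywhere:

```python
# module_a.py
from loguru import logger
logger.add("shared.log")

# module_b.py
from loguru import logger  # the very same object as in module_a
logger.info("This lands in shared.log too")
```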
### File Permissions Matter
When adding file sinks, make sure your application has permission to write to the specified directories.
### Thread Safety Considerations
The `logger.configure()` method is not thread-safe. If you need to configure the logger in a multi-threaded environment, do it before spawning threads.
## The Bottom Line
Logging shouldn't be the hardest part of your day. Loguru removes the friction and lets you focus on building stuff that matters.
It gives you better info when things break, looks good doing it, and doesn't require a PhD in configuration management to set up.