Aug 30th, '24 / 12 min read

Python Logging Best Practices: The Ultimate Guide

This guide covers setting up logging, avoiding common mistakes, and applying advanced techniques to improve your debugging process, whether youโ€™re working with small scripts or large applications.

Logging is an essential part of Python development. It helps developers track application behavior and troubleshoot issues. This guide covers key logging practices to improve your code's observability and make debugging easier.

We'll explore setting up logging, common pitfalls to avoid, and advanced techniques for handling logs in larger projects. Whether building a small script or a complex application, you'll find practical tips to enhance your logging approach.

What is Python logging?

Python logging is like the Swiss Army knife of software development. It's a powerful feature in the Python standard library that allows you to track events, debug issues, and monitor the health of your applications.

Think of it as your application's diary, but instead of teenage angst, it's full of valuable insights that can save you countless hours of head-scratching and keyboard-smashing.

But why is logging so crucial?

Imagine trying to solve a murder mystery without any clues - that's what debugging or troubleshooting without proper logging feels like.

Good logging practices can:

  1. Help you grasp your application's flow
  2. Provide crucial information for debugging
  3. Notify you of potential issues before they escalate
  4. Offer insights into user behavior and application performance

Python logging module

The logging module is the heart of Python's logging system. It's like the command center for all your logging operations, providing a flexible framework for generating log messages from your Python programs.

Let's understand how to use it effectively.

import logging

# Basic configuration
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Creating a logger
logger = logging.getLogger(__name__)

# Logging messages
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.warning("This is a warning message")
logger.error("This is an error message")
logger.critical("This is a critical message")

In this example, we use basicConfig() to set up a basic configuration: the level is set to INFO and the format string controls how each log line looks. We then create a logger and log messages at different levels. Note that the debug() call produces no output here, because DEBUG sits below the configured INFO threshold.

Python loggers

Loggers are the storytellers of your application. They're responsible for capturing events and routing them to the appropriate handlers. Think of them as the journalists of your code, always ready to report on what's happening.

Creating a new logger

Creating a logger is simple:

logger = logging.getLogger(__name__)

Using __name__ as the logger name is a common practice. It gives each module in your application its own logger, which is incredibly helpful when you're trying to track down issues in a large codebase.
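
For instance, in a hypothetical payments package, every module gets its own distinctly named logger just by repeating that one line (the module and function names below are made up purely for illustration):

# payments/stripe_client.py (hypothetical module, for illustration only)
import logging

logger = logging.getLogger(__name__)  # named "payments.stripe_client" when imported as part of the package

def charge(amount):
    logger.info("Charging %s", amount)

Because logger names are dot-separated and hierarchical, a single call like logging.getLogger('payments').setLevel(logging.WARNING) can quiet the whole package at once.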

Threshold logging level

The threshold level determines which messages get recorded. It's like a bouncer for your logs, deciding which messages are important enough to make it into the VIP section (your log files or console).

logger.setLevel(logging.DEBUG)

This sets the logger to capture all messages at the DEBUG level and above. Since DEBUG is the lowest standard level, nothing gets filtered out here; raising the threshold to, say, WARNING would silently drop DEBUG and INFO messages.
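
Here's a small sketch of that filtering in action (the logger name is an arbitrary example):

import logging

logger = logging.getLogger("threshold_demo")  # arbitrary example name
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.WARNING)

logger.debug("Dropped: below the WARNING threshold")
logger.info("Dropped: below the WARNING threshold")
logger.warning("Emitted: meets the threshold")
logger.error("Emitted: above the threshold")

Keep in mind that handlers have levels of their own; a record has to clear both the logger's threshold and the handler's before it reaches the output.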

Log levels

Python provides several log levels, each serving a different purpose:

  • DEBUG: Detailed information useful primarily for diagnosing issues.
  • INFO: Confirmation that the application is functioning as expected.
  • WARNING: A sign that something unexpected occurred or a potential problem might arise soon.
  • ERROR: Indicates a significant issue that has prevented certain functions from executing.
  • CRITICAL: A severe error suggesting that the program may be unable to continue running.

Here's how you might use these in practice:

def divide(x, y):
    logger.debug(f"Dividing {x} by {y}")
    if y == 0:
        logger.error("Attempted to divide by zero!")
        return None
    return x / y

result = divide(10, 2)
logger.info(f"Division result: {result}")

result = divide(10, 0)
if result is None:
    logger.warning("Division operation failed")

In the above example, we're using different log levels to provide context about what's happening in our function.

The DEBUG message records the details of each call, the INFO message confirms that the first division succeeded, the ERROR message flags the attempt to divide by zero, and the WARNING message notes that the second operation failed without being a fatal error.

📝
Check out our blog for a deep dive into log aggregation tools!

Printing vs logging

While print() statements might seem tempting for quick debugging, logging offers several advantages:

  • Granular control over output: Easily adjust the verbosity of your output without altering your code.
  • Easy to disable or redirect output: Disable logging or redirect it to a file with ease; no code changes required.
  • Includes contextual information: Automatically includes details like timestamps, line numbers, and function names in your logs.
  • Thread-safe: Unlike print statements, logging is thread-safe, making it essential for multi-threaded applications.

Let's compare:

# Using print
def some_function(x, y):
    print(f"some_function called with args: {x}, {y}")
    result = x + y
    print(f"Result: {result}")
    return result
    
# Using logging
import logging
logger = logging.getLogger(__name__)
def some_function(x, y):
    logger.debug(f"some_function called with args: {x}, {y}")
    result = x + y
    logger.info(f"Result: {result}")
    return result

The logging version gives us more flexibility and provides more context out of the box.
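
To make the "no code changes" advantage concrete, here's a rough sketch: the calls inside some_function stay exactly as they are, and a single configuration line at startup decides how verbose the output is and where it goes (the filename app.log is just an example):

import logging

# During development: chatty output on the console
# logging.basicConfig(level=logging.DEBUG)

# In production: only warnings and above, written to a file --
# some_function itself doesn't change at all
logging.basicConfig(level=logging.WARNING, filename='app.log')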

Python logging examples

Let's look at some more advanced examples to really get our logging muscles flexing.

Snippet 1: Creating a logger with a handler and a formatter

import logging

def setup_logger():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)

    # Create console handler and set level to debug
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    # Create formatter and add it to the handler
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)

    # Add the handler to the logger
    logger.addHandler(ch)
    return logger

logger = setup_logger()
logger.debug("This is a debug message")

This setup gives us a logger that outputs to the console with a specific format.

Snippet 2: Logging to a file

import logging

def setup_file_logger():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)

    # Create a file handler that logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)

    # Create formatter
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)

    # Add the handler to the logger
    logger.addHandler(fh)
    return logger

logger = setup_file_logger()
logger.debug("This debug message will be written to 'spam.log'")

This setup will write all our log messages to a file named 'spam.log'.

Snippet 3: Using logging in a class

import logging

class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        self.logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)

    def do_something(self):
        self.logger.debug("Doing something...")
        # Do something here
        self.logger.info("Something done!")

obj = MyClass()
obj.do_something()

This example shows how you can set up logging within a class, which can be very useful for object-oriented programming.

📖
Here's a guide to using kubectl logs for viewing Kubernetes pod logs.

Types of Python logging methods

Python's logging module offers various methods to suit different needs:

  • Basic logging: Use logging.basicConfig() for simple setups.
  • Logger objects: Create and configure Logger instances for more control.
  • Handler objects: Utilize different handlers (e.g., StreamHandler, FileHandler) to direct log output.
  • Formatter objects: Customize the format of log messages with Formatter objects.

Let's look at an example that combines these:

import logging
from logging.handlers import RotatingFileHandler

def setup_logger():
    logger = logging.getLogger('my_app')
    logger.setLevel(logging.DEBUG)

    # Create a rotating file handler
    file_handler = RotatingFileHandler('my_app.log', maxBytes=100000, backupCount=5)
    file_handler.setLevel(logging.DEBUG)

    # Create a console handler
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)

    # Create a formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    file_handler.setFormatter(formatter)
    console_handler.setFormatter(formatter)

    # Add the handlers to the logger
    logger.addHandler(file_handler)
    logger.addHandler(console_handler)

    return logger

logger = setup_logger()

# Now we can use the logger
logger.debug("This will go to the log file")
logger.info("This will go to both the log file and console")
logger.warning("This is a warning!")

This setup uses a RotatingFileHandler to manage log files (creating new files when the current one gets too large) and a StreamHandler for console output. It also uses different log levels for file and console output.

How to get started with Python logging

  • Import the logging module: import logging
  • Create a logger: logger = logging.getLogger(__name__)
  • Set the logging level: logger.setLevel(logging.DEBUG)
  • Configure handlers and formatters (as shown in previous examples)
  • Start logging!
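
Putting those steps together, a minimal starting point could look something like this (the format string is just one common choice):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

logger = logging.getLogger(__name__)
logger.info("Application started")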

Remember, the key to effective logging is consistency. Establish logging conventions for your project and stick to them.

Advantages:

  • Flexible and Customizable: Tailor logging to fit your needs, from log levels to formats.
  • Built-in to Python: No need for external libraries or dependencies.
  • Supports Multiple Output Destinations: Log events can be directed to consoles, files, networks, and more.
  • Thread-Safe: Suitable for applications with multiple threads.
  • Hierarchical: Organize loggers in a hierarchy for precise control over log events.

Disadvantages:

  • Complex Setup for Advanced Use Cases: The extensive configuration options can lead to complexity.
  • Potential Performance Impact: Improper configuration or excessive logging can slow down your application.
  • Overwhelming Volume: Logging too much information can make it challenging to pinpoint relevant log events.

Python logging platforms

While Python's built-in logging is powerful, sometimes you need more. Here are some popular logging platforms:

  • ELK Stack (Elasticsearch, Logstash, Kibana): A powerful suite for collecting, processing, and visualizing logs.
  • Splunk: An advanced platform for searching, monitoring, and analyzing machine-generated data.
  • Graylog: Open-source log management and analysis platform.
  • Loggly: Cloud-based log management and analytics service.

These platforms can help you aggregate logs from multiple sources, perform advanced searches, and create visualizations.
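
How you ship logs to these platforms varies by product, but many of them accept syslog or HTTP input. As a rough illustration, the standard library's SysLogHandler can forward records to a central collector (the hostname and port below are placeholders, not a real endpoint):

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger('my_app')
logger.setLevel(logging.INFO)

# Placeholder address -- point this at whatever collector or agent your platform provides
handler = SysLogHandler(address=('logs.example.com', 514))
handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s %(message)s'))
logger.addHandler(handler)

logger.info("Forwarded to the central log collector")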

📃
Check out our updated insights on the best cloud monitoring tools for 2024.

Basic Python logging concepts

  • Loggers: The entry point for logging operations. They expose the interface that the application code directly uses.
  • Handlers: Determine where log messages go (console, file, email, etc.).
  • Formatters: Define the structure and content of log messages.
  • Filters: Provide fine-grained control over which log records to output.

Here's how these concepts work together in a custom filter example:

import logging

# Create a custom filter
class MyFilter(logging.Filter):
    def filter(self, record):
        return 'important' in record.msg.lower()

# Set up the logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Create a handler
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)

# Create a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# Add the filter to the handler
handler.addFilter(MyFilter())

# Add the handler to the logger
logger.addHandler(handler)

# Now let's log some messages
logger.debug("This is a debug message")  # This won't be logged
logger.info("This is an important message")  # This will be logged
logger.warning("Another message")  # This won't be logged

In this example, we've created a custom filter that only allows messages containing the word 'important'. This demonstrates how you can use filters to have fine-grained control over your logs.

Python Logging Configuration

Another common pattern is to configure the root logger with basicConfig() and then attach an additional console handler to it. In the example below, everything from DEBUG up is written to a file, while the console shows only INFO and above:

import logging

def configure_logging():
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        datefmt='%m-%d %H:%M',
        filename='myapp.log',
        filemode='w'
    )

    # Define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)

    # Set a format that is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    console.setFormatter(formatter)

    # Add the handler to the root logger
    logging.getLogger('').addHandler(console)

configure_logging()

# Now, we can log to the root logger, or any other logger. The root logger is the parent of all loggers.
logging.info('Jackdaws love my big sphinx of quartz.')

# Log messages to child loggers
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

This configuration sets up logging to both a file and the console, with different formats and levels for each.

Custom Logger Factory Function

import logging

def create_logger(name, log_file, level=logging.INFO):
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)

    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

# Usage
logger = create_logger('my_module', 'my_module.log', logging.DEBUG)
logger.debug('This is a debug message')

This approach allows you to easily create consistent loggers throughout your application.

Configuring Logging Using ConfigParser-Format Files

First, define the configuration in a file named logging.conf:

[loggers]
keys=root,simpleExample

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
qualname=simpleExample
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s

Python Code to Load Configuration

import logging
import logging.config
import sys

logging.config.fileConfig('logging.conf')

# Create logger
logger = logging.getLogger('simpleExample')

# Use the logger
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')

This approach allows you to keep your logging configuration separate from your code, making it easier to modify logging behavior without changing your application code.

๐Ÿ†
Probo Cuts Monitoring Costs by 90% with Last9! We've written everything from their growing pains to how they achieved their goals in our case study!

Python logging performance 

While logging is crucial for understanding and debugging your application, it's important to implement it in a way that doesn't significantly impact your application's performance. Here are some tips to keep your logging efficient:

Configuration-Based Considerations

  • Use appropriate log levels: In production, you probably don't need DEBUG level logs. Set the log level appropriately to reduce the amount of log data generated.

if app.config['ENV'] == 'production':
    logging.getLogger().setLevel(logging.ERROR)
else:
    logging.getLogger().setLevel(logging.DEBUG)

  • Implement log rotation: Use RotatingFileHandler or TimedRotatingFileHandler to manage log file sizes and prevent them from consuming too much disk space (a time-based variant is sketched after this list).

from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler('app.log', maxBytes=10000, backupCount=3)
logger.addHandler(handler)

  • Consider using asynchronous logging for high-volume scenarios: Libraries like concurrent-log-handler can help manage logging in high-throughput applications.

from concurrent_log_handler import ConcurrentRotatingFileHandler
handler = ConcurrentRotatingFileHandler('app.log', 'a', 512*1024, 5)
logger.addHandler(handler)
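
As promised above, here's a rough sketch of the time-based variant: TimedRotatingFileHandler rotates on a schedule rather than by size (the midnight schedule and seven-file retention are just example values):

from logging.handlers import TimedRotatingFileHandler

# Rotate the file at midnight and keep the last 7 rotated logs
handler = TimedRotatingFileHandler('app.log', when='midnight', backupCount=7)
logger.addHandler(handler)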

Code-Based Considerations

  • Do not execute inactive logging statements: Use lazy logging to avoid unnecessary performance hits.

if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"User {get_user_name(user_id)} logged in from {get_ip_address()}")

This way, expensive function calls are only made if the debug level is actually enabled.

  • Let the logger format the message instead of using f-strings: f-strings (and direct str.format() calls) are evaluated immediately, even when the log level is disabled; passing the values as arguments defers formatting until the message is actually emitted.

# Instead of this:
logger.debug(f"Processing item {item} with options {options}")

# Do this:
logger.debug("Processing item %s with options %s", item, options)
  • Batch logging calls in critical paths: If you have a loop that's logging on each iteration, consider batching the log calls.

# Instead of this:
for item in items:
    process(item)
    logger.debug(f"Processed {item}")

# Do this:
processed = []
for item in items:
    process(item)
    processed.append(item)
logger.debug("Processed items: %s", processed)
  • Use sampling for high-volume logs: In some cases, you might want to log only a sample of events to reduce volume.

import random

def log_sample(message, sample_rate=0.1):
    if random.random() < sample_rate:
        logger.debug(message)

for item in large_list_of_items:
    process(item)
    log_sample(f"Processed {item}")

This will log approximately 10% of the items processed.

Advanced Logging Techniques

  • Contextual logging with the extra parameter: Add extra contextual information to your logs.

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(user)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

def process_request(user_id, data):
    logger.info("Processing request", extra={'user': user_id})
    # Process the data
    logger.debug("Request processed", extra={'user': user_id})

process_request('12345', {'key': 'value'})

This includes the user ID in each log message, making it easier to trace actions related to specific users.

  • Using LoggerAdapter for adding context: For more complex scenarios, use LoggerAdapter to consistently add context to log messages.

import logging

class CustomAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['ip'], msg), kwargs

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
logger.addHandler(handler)

adapter = CustomAdapter(logger, {'ip': '123.45.67.89'})
adapter.debug('This is a debug message')
adapter.info('This is an info message')

This prefixes each log message with the IP address.

  • Logging exceptions: Include the traceback when logging exceptions.

import logging

logger = logging.getLogger(__name__)

try:
    1 / 0
except Exception:
    logger.exception("An error occurred")

The exception() method automatically includes the stack trace in the log message.

📑
Check out our checklist for selecting a monitoring system.

Best practices for Python logging

  • Be consistent: Use the same logging format and conventions throughout your application.
  • Log at the appropriate level: Use DEBUG for detailed information, INFO for general information, WARNING for unexpected events, ERROR for serious problems, and CRITICAL for fatal errors.
  • Include context: Log relevant details that will help you understand the state of your application when the log was created.
  • Use structured logging: Consider using JSON or another structured format for your logs to make them easier to parse and analyze (see the sketch after this list).
  • Don't log sensitive information: Be careful not to log passwords, API keys, or other sensitive data.
  • Configure logging as early as possible: Set up your logging configuration at the start of your application to ensure all modules use the same configuration.
  • Use logger hierarchies: Take advantage of Python's logger hierarchy to control logging behavior across your application.
  • Review and rotate logs: Regularly review your logs and implement log rotation to manage log file sizes.
  • Use unique identifiers: Include unique identifiers (like request IDs) in your logs to help trace requests across multiple services.
  • Test your logging: Make sure your logging works as expected, especially in error scenarios.
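
For the structured logging point above, here's a minimal sketch using only the standard library: a custom Formatter that emits one JSON object per line (dedicated packages such as python-json-logger or structlog offer richer features):

import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Collect the most useful fields from the LogRecord into a dict
        payload = {
            'time': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

logger = logging.getLogger('json_demo')  # arbitrary example name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("User logged in")
# Emits something like: {"time": "...", "level": "INFO", "logger": "json_demo", "message": "User logged in"}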

Conclusion

Logging is an essential part of any robust Python application. When done right, it can be your best friend in understanding your application's behavior, debugging issues, and maintaining your sanity as a developer.

Remember, the goal of logging is to provide visibility into your application's operations. Good logging practices can save you hours of debugging time and help you catch issues before they become critical problems.

As you continue your Python journey, keep refining your logging practices. Experiment with different configurations, try out various logging platforms, and always be on the lookout for ways to make your logs more informative and efficient.

Happy logging! May your logs be informative, your debug sessions be short, and your production deploys be uneventful. Now go forth and log like a pro!


Authors

Anjali Udasi

Helping to make tech a little less intimidating. I love breaking down complex concepts into easy-to-understand terms.
