Sep 13th, '24 / 10 min read

Golang Logging: A Comprehensive Guide for Developers

Our blog covers practical insights into Golang logging, including how to use the log package, popular third-party libraries, and tips for structured logging


(Update: Included slog and added comparison table)

Listen up, Gophers! Today we're going to talk about logging in Go, and why most of you are probably doing it wrong. Don't worry, it's not (entirely) your fault. Logging seems simple on the surface, but it's a dark art that can make or break your application faster than you can say "panic: runtime error".

I've seen things you wouldn't believe. Log files on fire off the shoulder of /var/log. I watched 10GB log lines glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in the rain... unless you learn how to log properly.

So grab your favorite caffeinated beverage, settle into that ergonomic chair you splurged on during the pandemic, and let's dive into the world of Golang logging. By the time we're done, you'll be logging in like a pro, and your future self (the one debugging production issues at 3 AM) will thank you.

Table of Contents

  • Introduction to Golang Logging
  • The Standard Library: log Package
    How I Learned to Stop Worrying and Love fmt.Println()
  • Popular Third-Party Logging Libraries
    Because reinventing the wheel is so 2000s
  • Structured Logging in Go
    JSON: It's what's for dinner
  • Configuring Log Levels and Output Formats
    Choosing your adventure
  • Integrating with Observability Platforms
    Because logs are lonely without metrics and traces
  • Best Practices and Performance Considerations
    How to not shoot yourself in the foot
  • Real-World Examples
    I really have used this stuff myself
  • Conclusion
    Log everything, but log it right with a schema (OTel)
  Introduction to Golang Logging

    Now, you might be thinking, "Do I really need to read an entire article about logging? Can't I just sprinkle some fmt.Println() calls and call it a day?"

    Well, if you want to spend your nights combing through gigabytes of unstructured log files, be my guest. But if you want to sleep soundly knowing your logs are informative, performant, and won't make you want to gouge your eyes out when you need them most, keep reading.

    In this guide, we'll cover everything from the basics of Go's standard log package (spoiler alert: it's about as exciting as watching paint dry) to the fancy world of structured logging and integration with observability platforms. 

    We'll look at popular logging libraries, because let's face it, someone else has probably already solved your logging problems better than you can.

    By the end of this guide, you'll know your zerolog from your zap, your INFO from your DEBUG, and you'll never again be tempted to log sensitive information (I'm looking at you, developer who logged a password in plaintext. You know who you are).

    So, without further ado, let's journey through the land of Golang logging. Remember: with great power comes great responsibility, and with great logging comes... well, slightly less painful debugging sessions.

    📖
    Explore our PromQL Cheat Sheet for Essential Queries!

    The Standard Library: log Package

    How I Learned to Stop Worrying and Love fmt.Println()

    Go's standard library provides a simple logging package called log. While it's basic, it's a good starting point for understanding logging in Go.

    Here's a simple example:

    package main
    
    import (
        "log"
    )
    
    func main() {
        log.Println("This is a log message")
        log.Printf("Hello, %s!", "Gopher")

        // log.Fatal logs the message and then calls os.Exit(1),
        // so nothing after this line will run.
        log.Fatal("This is a fatal error")
    }
    

    The log package is straightforward but lacks features like log levels and structured logging. For more complex applications, you'll likely want to use a third-party library.
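
    If you do stay on the standard library, you can still squeeze a bit more context out of it by creating your own *log.Logger with a prefix and flags. Here's a minimal sketch; the prefix, address, and database name are just placeholders:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // A dedicated logger with a prefix and extra flags: timestamps with
        // microseconds plus the file:line of the calling site.
        appLog := log.New(os.Stdout, "[auth-service] ", log.LstdFlags|log.Lmicroseconds|log.Lshortfile)

        appLog.Println("server starting on :8080")
        appLog.Printf("connected to database %q", "users")
    }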

    slog Package

    Go 1.21 introduced the slog package, bringing structured logging capabilities to the standard library. This addition is significant for developers who want to implement structured logging without relying on third-party libraries.

    Here's a basic example of using slog:

    package main

    import (
        "log/slog"
        "os"
    )
    
    func main() {
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
        logger.Info("User logged in",
            "username", "gopher",
            "user_id", 123,
            "login_attempt", 1)
    }

    This will output JSON-formatted logs:

    {"time":"2023-09-16T10:30:00Z","level":"INFO","msg":"User logged in","username":"gopher","user_id":123,"login_attempt":1}

    Key features of slog:

    • Structured logging out of the box
    • Support for different output formats (JSON, text)
    • Customizable through Handlers
    • Integrates well with existing Go programs

    While slog may not have all the features of some third-party libraries, its inclusion in the standard library makes it an attractive option for projects that want to minimize external dependencies.
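
    Because handlers are pluggable, switching formats or levels is a small configuration change. Here's a minimal sketch using slog.HandlerOptions to enable debug logging with a text handler; the messages and field names are illustrative only:

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // Text output with the Debug level enabled and source locations attached.
        opts := &slog.HandlerOptions{
            Level:     slog.LevelDebug,
            AddSource: true,
        }
        logger := slog.New(slog.NewTextHandler(os.Stderr, opts))

        // Optionally make it the process-wide default so slog.Info etc. use it.
        slog.SetDefault(logger)

        slog.Debug("cache warmed", "entries", 128)
        slog.Info("listening", "addr", ":8080")
    }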

    Popular Third-Party Logging Libraries

    Because reinventing the wheel is so 2000s

    There are several popular logging libraries in the Go ecosystem. Let's look at three of the most widely used:

    1. Zerolog: Known for its zero-allocation JSON logging
    2. Zap: Uber's blazing-fast, structured logger
    3. Logrus: A structured logger with hooks

    Here's a quick comparison:

    // Zerolog
    logger := zerolog.New(os.Stdout).With().Timestamp().Logger()
    logger.Info().Str("library", "zerolog").Msg("This is a log message")
    
    // Zap
    logger, _ := zap.NewProduction()
    defer logger.Sync()
    logger.Info("This is a log message", zap.String("library", "zap"))
    
    // Logrus
    logrus.WithFields(logrus.Fields{
        "library": "logrus",
    }).Info("This is a log message")

    In my experience, Zerolog and Zap offer the best performance, while Logrus provides a more familiar interface for developers coming from other languages.

    📝
    Learn more about OpenTelemetry Protocol (OTLP) and its role in observability in our blog!

    Structured Logging in Go

    JSON: It's what's for dinner

    Structured logging is a game-changer when it comes to log analysis. Instead of parsing unstructured text, you can work with JSON objects that are easily searchable and filterable.

    Here's an example using Zerolog:

    package main
    
    import (
        "os"
        "github.com/rs/zerolog"
        "github.com/rs/zerolog/log"
    )
    
    func main() {
        zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
        log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr})
    
        log.Info().
            Str("foo", "bar").
            Int("n", 123).
            Msg("hello world")
    }
    

    This will output:

    <timestamp> INF hello world foo=bar n=123
    

    Structured logging has saved me countless hours when debugging production issues. Being able to quickly filter and analyze logs based on specific fields is invaluable.
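
    The habit that pays off most is attaching the same context fields to every line emitted for a given request, so a single filter pulls up the whole story. Here's a minimal sketch using Zerolog's With(); the header and field names are assumptions for illustration, not a required schema:

    package main

    import (
        "net/http"

        "github.com/rs/zerolog/log"
    )

    func handleRequest(w http.ResponseWriter, r *http.Request) {
        // Build a request-scoped logger once; every event below inherits
        // request_id and path automatically.
        reqLog := log.With().
            Str("request_id", r.Header.Get("X-Request-ID")).
            Str("path", r.URL.Path).
            Logger()

        reqLog.Info().Msg("request received")

        // Later lines carry the same request_id, so filtering on that one
        // field reconstructs the full request history.
        reqLog.Warn().Int("retry", 2).Msg("upstream slow, retrying")
    }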

    Comparison of Logging Options

    When choosing a logging solution for your Go project, here's a quick feature comparison:

    | Feature            | log  | slog    | Zap           | Zerolog       | Logrus   |
    |--------------------|------|---------|---------------|---------------|----------|
    | Structured Logging | No   | Yes     | Yes           | Yes           | Yes      |
    | Performance        | Good | Good    | Excellent     | Excellent     | Good     |
    | Type-safe API      | No   | Partial | Yes           | Yes           | Partial  |
    | Dependency-free    | Yes  | Yes     | No            | No            | No       |
    | Advanced Features  | No   | Limited | Yes           | Yes           | Yes      |
    | Log Rotation       | No   | No      | Via extension | Via extension | Built-in |
    | Hooks              | No   | No      | Yes           | Yes           | Yes      |
    | Widely Adopted     | Yes  | New     | Yes           | Yes           | Yes      |

    Choosing the Right Logger

    • If you're working on a small project or want to avoid external dependencies, the standard log package or slog might be sufficient.
    • For high-performance applications where every nanosecond counts, consider Zap or Zerolog.
    • If you need a feature-rich logger with wide community adoption, Logrus could be a good choice.
    • For new projects starting with Go 1.21 or later, slog offers a good balance of features and zero external dependencies.

    Choose the logging solution that best fits your project's needs, considering factors like performance requirements, dependency management, and team familiarity.

    Configuring Log Levels and Output Formats

    Choosing your adventure

    Most logging libraries support different log levels (e.g., DEBUG, INFO, WARN, ERROR) and output formats (e.g., JSON, console-friendly).

    Here's how you might configure Zerolog:

    zerolog.SetGlobalLevel(zerolog.InfoLevel)
    if os.Getenv("DEBUG") != "" {
        zerolog.SetGlobalLevel(zerolog.DebugLevel)
    }
    
    log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr})

    This setup allows you to control the log level via an environment variable, which is handy for toggling debug logs in different environments.
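
    If you're on slog instead, the same pattern works with a LevelVar, which also lets you flip the level at runtime without rebuilding the handler. A minimal sketch, reusing the DEBUG environment variable purely for symmetry with the Zerolog example above:

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // The zero value of LevelVar is Info; Set can be called later too,
        // e.g. from a signal handler or an admin endpoint.
        var level slog.LevelVar
        if os.Getenv("DEBUG") != "" {
            level.Set(slog.LevelDebug)
        }

        handler := slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{Level: &level})
        slog.SetDefault(slog.New(handler))

        slog.Debug("only visible when DEBUG is set")
        slog.Info("service configured")
    }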

    📃
    Explore our tutorial on instrumenting Golang apps with OpenTelemetry, including best practices.

    Integrating with Observability Platforms

    Because logs are lonely without metrics and traces

    In production environments, you'll want to integrate your logs with observability platforms like ELK (Elasticsearch, Logstash, Kibana) or cloud-based solutions like Google Cloud Logging.

    Here's a simple example of how you might send logs to Elasticsearch using the olivere/elastic package:

    package main

    import (
        "context"
        "os"

        "github.com/olivere/elastic/v7"
        "github.com/rs/zerolog"
        "github.com/rs/zerolog/log"
    )

    func main() {
        client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
        if err != nil {
            log.Fatal().Err(err).Msg("Failed to create Elasticsearch client")
        }
    
        hook := zerolog.HookFunc(func(e *zerolog.Event, level zerolog.Level, message string) {
            _, err := client.Index().
                Index("app-logs").
                BodyJson(e).
                Do(context.Background())
            if err != nil {
                log.Error().Err(err).Msg("Failed to send log to Elasticsearch")
            }
        })
    
        log.Logger = zerolog.New(os.Stdout).Hook(hook).With().Timestamp().Logger()
    
        log.Info().Str("foo", "bar").Msg("This log will be sent to Elasticsearch")
    }
    

    This setup sends every log message to Elasticsearch, allowing you to use Kibana for powerful log analysis and visualization.

    Real-World Examples

    I really have used this stuff myself

    Let me share a genuine problem I encountered in a production Go service, and how improved logging helped us solve it.

    The Problem

    We had a Go-based API service that handled user authentication for a suite of web applications. This service was experiencing intermittent 503 errors (Service Unavailable) that we couldn't easily reproduce or debug. Here's what we knew initially:

    1. The 503 errors were occurring seemingly at random, affecting about 2% of authentication attempts.
    2. The errors didn't correlate with any specific time of day or traffic patterns.
    3. Our existing logs only showed that a 503 error was returned, without any additional context.

    Our initial logging was basic and unhelpful:

    func handleAuthentication(w http.ResponseWriter, r *http.Request) {
        user, err := authenticateUser(r)
        if err != nil {
            log.Printf("Authentication failed: %v", err)
            http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
            return
        }
    
        // ... rest of the handler
    }

    func authenticateUser(r *http.Request) (*User, error) {
        // ... authentication logic
    }
    
    

    These logs didn't provide enough context to understand why authentication was failing or why we were returning 503 errors instead of 401 (Unauthorized) for failed authentications.

    📖
    Check out our guide on real-time canary deployment tracking with Argo CD and Levitate change events.

    The Solution

    We decided to implement a more comprehensive logging strategy using Zap:

    1. We added structured logging with request IDs, user IDs (when available), and the authentication method used.
    2. We included timing information for each step of the authentication process.
    3. We added more granular error logging, including specific error types.

    Here's how we improved our logging:

    package main
    
    import (
        "net/http"
        "time"
    
        "go.uber.org/zap"
        "github.com/google/uuid"
    )
    
    var logger *zap.Logger
    
    func init() {
        var err error
        logger, err = zap.NewProduction()
        if err != nil {
            panic(err)
        }
    }
    
    func handleAuthentication(w http.ResponseWriter, r *http.Request) {
        requestID := uuid.New().String()
        startTime := time.Now()
    
        logger := logger.With(
            zap.String("request_id", requestID),
            zap.String("method", r.Method),
            zap.String("path", r.URL.Path),
        )
    
        logger.Info("Starting authentication process")
    
        user, err := authenticateUser(r, logger)
        if err != nil {
            logger.Error("Authentication failed",
                zap.Error(err),
                zap.Duration("duration", time.Since(startTime)),
            )
    
            if err == ErrDatabaseTimeout {
                http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
            } else {
                http.Error(w, "Unauthorized", http.StatusUnauthorized)
            }
            return
        }
    
        logger.Info("Authentication successful",
            zap.String("user_id", user.ID),
            zap.Duration("duration", time.Since(startTime)),
        )
    
        // ... rest of the handler
    }
    
    func authenticateUser(r *http.Request, logger *zap.Logger) (*User, error) {
        authStartTime := time.Now()
    
        // Extract credentials
        username, password, ok := r.BasicAuth()
        if !ok {
            logger.Warn("No authentication credentials provided")
            return nil, ErrNoCredentials
        }
    
        // Check user in database
        user, err := getUserFromDB(username)
        if err != nil {
            if err == ErrDatabaseTimeout {
                logger.Error("Database timeout during authentication",
                    zap.Error(err),
                    zap.Duration("db_query_time", time.Since(authStartTime)),
                )
                return nil, ErrDatabaseTimeout
            }
            logger.Warn("User not found", zap.String("username", username))
            return nil, ErrUserNotFound
        }
    
        // Verify password
        if !verifyPassword(user, password) {
            logger.Warn("Invalid password", zap.String("username", username))
            return nil, ErrInvalidPassword
        }
    
        logger.Debug("User authenticated successfully",
            zap.String("username", username),
            zap.Duration("auth_duration", time.Since(authStartTime)),
        )
    
        return user, nil
    }
    
    

    The Outcome

    With these enhanced logs, we were able to identify the root cause of the problem:

    1. The 503 errors were occurring due to database timeouts, not actual authentication failures.
    2. These timeouts were happening when the database connection pool was exhausted.
    3. The connection pool exhaustion was caused by a separate batch job that was holding onto connections for too long.

    Armed with this information, we were able to:

    1. Increase the size of our database connection pool.
    2. Optimize the batch job to release connections more quickly.
    3. Implement a circuit breaker for database operations to fail fast when the database is overloaded.

    The result? Our 503 error rate dropped from 2% to 0.01%, and we were able to properly distinguish between service unavailability and actual authentication failures.

    This example demonstrates the power of effective logging. By including crucial context (request IDs, error types, timing information) and using structured logging with Zap, we were able to quickly identify and resolve a significant issue that was impacting our users.

    Some key takeaways from this experience:

    1. Log at the appropriate level: Use Error for exceptional circumstances, Warn for important but expected issues, Info for general operational events, and Debug for detailed troubleshooting information.
    2. Include timing information: Logging durations for key operations can help identify performance bottlenecks (see the small helper sketch after this list).
    3. Use structured logging: It makes it much easier to filter and analyze logs, especially when aggregating them in a centralized logging system.
    4. Log context, not just errors: Including relevant context (like request IDs or user IDs) in your logs makes it much easier to trace issues across different parts of your system.
    5. Be specific about errors: Instead of generic error messages, log specific error types. This makes it easier to distinguish between different failure modes.
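
    To make the timing takeaway concrete, here's a minimal sketch of a duration-logging helper. logDuration is a hypothetical name, not part of Zap; it simply pairs a start time with a deferred log call:

    package main

    import (
        "time"

        "go.uber.org/zap"
    )

    // logDuration records the start time and returns a function that, when
    // deferred, logs how long the operation took.
    func logDuration(logger *zap.Logger, op string) func() {
        start := time.Now()
        return func() {
            logger.Info("operation finished",
                zap.String("operation", op),
                zap.Duration("duration", time.Since(start)),
            )
        }
    }

    func main() {
        logger, _ := zap.NewProduction()
        defer logger.Sync()

        defer logDuration(logger, "authenticate_user")()
        time.Sleep(50 * time.Millisecond) // stand-in for real work
    }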

    Remember, logs are not just for debugging errors – they're a powerful tool for understanding your application's behavior and performance in production. Invest time in setting up comprehensive logging, and you'll thank yourself later when troubleshooting complex issues.

    🎊
    I love logging so much that I ended up writing a similar Python Guide for Logging

    Best Practices and Performance Considerations

    How to not shoot yourself in the foot

    1. Use log levels appropriately: Reserve ERROR for exceptional circumstances and use INFO for routine operations.
    2. Include context: Always include relevant context in your logs, such as request IDs or user IDs.
    3. Be mindful of sensitive data: Never log sensitive information like passwords or API keys.
    4. Use sampling for high-volume logs: In high-traffic services, consider sampling your DEBUG logs to reduce overhead (see the sampling sketch after this list).
    5. Benchmark your logging: Use Go's benchmarking tools to measure the performance impact of your logging.
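
    For point 4, Zerolog ships a built-in sampler. Here's a minimal sketch that keeps roughly one in every ten events from a noisy logger; the 1-in-10 rate is arbitrary and should be tuned to your traffic:

    package main

    import (
        "os"

        "github.com/rs/zerolog"
    )

    func main() {
        base := zerolog.New(os.Stdout).With().Timestamp().Logger()

        // BasicSampler passes roughly one in every N events logged through
        // this logger, regardless of level.
        sampled := base.Sample(&zerolog.BasicSampler{N: 10})

        for i := 0; i < 100; i++ {
            sampled.Debug().Int("iteration", i).Msg("high-volume debug event")
        }
    }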

    Here's a simple benchmark comparing string concatenation vs using fields:

    package main

    import (
        "io"
        "strconv"
        "testing"

        "github.com/rs/zerolog"
    )

    func BenchmarkLoggingConcat(b *testing.B) {
        logger := zerolog.New(io.Discard)
        for i := 0; i < b.N; i++ {
            logger.Info().Msg("value is " + strconv.Itoa(i))
        }
    }

    func BenchmarkLoggingFields(b *testing.B) {
        logger := zerolog.New(io.Discard)
        for i := 0; i < b.N; i++ {
            logger.Info().Int("value", i).Msg("")
        }
    }
    
    

    In my tests, using fields consistently outperforms string concatenation, especially at high volumes.

    Conclusion

    Log everything, but log it right with a schema (OTel)

    Let's recap, shall we? 

    We've learned that logging is not just about sprinkling fmt.Println() calls like fairy dust throughout your code. No, proper logging is an art form, a science, and sometimes a dark ritual all rolled into one. We've explored the standard library (yawn), dived into the exciting world of third-party libraries (Zerolog and Zap, oh my!), and even tackled the beast known as structured logging.

    Remember, logs are like toilet paper. You don't think about them much, but boy oh boy, do you miss them when they're not there (especially at 3 AM when production is on fire).

    So go forth and log! Log like the wind! Log like your job depends on it (because, let's face it, it probably does). And the next time someone asks you about your logging strategy, you can smile smugly and say, "Oh, I use a structured, leveled logging system with context-rich messages and high-performance serialization." Then walk away slowly as their jaw hits the floor.

    Class dismissed. Now, if you'll excuse me, I have some logs to analyze. These errors won't debug themselves... or is some AI already doing that for us? (Spoiler: it isn't. That's why we log.)

    We're excited to hear about your experiences with SRE and your views on reliability, observability, and monitoring. Join our SRE Discord community to connect with others who share your interests!

    Authors

    Prathamesh Sonpatki

    Prathamesh works as an evangelist at Last9, runs SRE stories - where SRE and DevOps folks share their stories, and maintains o11y.wiki - a glossary of all terms related to observability.

    Preeti Dewani

    Technical Product Manager at Last9
