Golang Logging: A Comprehensive Guide for Developers
Our blog covers practical insights into Golang logging, including how to use the log package, popular third-party libraries, and tips for structured logging
Listen up, Gophers! Today we're going to talk about logging in Go, and why most of you are probably doing it wrong. Don't worry, it's not (entirely) your fault. Logging seems simple on the surface, but it's a dark art that can make or break your application faster than you can say "panic: runtime error".
I've seen things you wouldn't believe. Log files on fire off the shoulder of /var/log. I watched 10GB log lines glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in the rain... unless you learn how to log properly.
So grab your favorite caffeinated beverage, settle into that ergonomic chair you splurged on during the pandemic, and let's dive into the world of Golang logging. By the time we're done, you'll be logging like a pro, and your future self (the one debugging production issues at 3 AM) will thank you.
Introduction to Golang Logging
Now, you might be thinking, "Do I really need to read an entire article about logging? Can't I just sprinkle some fmt.Println() calls and call it a day?"
Well, if you want to spend your nights combing through gigabytes of unstructured log files, be my guest. But if you want to sleep soundly knowing your logs are informative, performant, and won't make you want to gouge your eyes out when you need them most, keep reading.
In this guide, we'll cover everything from the basics of Go's standard log package (spoiler alert: it's about as exciting as watching paint dry) to the fancy world of structured logging and integration with observability platforms.
We'll look at popular logging libraries, because let's face it, someone else has probably already solved your logging problems better than you can.
By the end of this guide, you'll know your zerolog from your zap, your INFO from your DEBUG, and you'll never again be tempted to log sensitive information (I'm looking at you, developer who logged a password in plaintext. You know who you are).
So, without further ado, let's venture into the land of Golang logging. Remember: with great power comes great responsibility, and with great logging comes... well, slightly less painful debugging sessions.
How I Learned to Stop Worrying and Love the log Package
Go's standard library provides a simple logging package called log. While it's basic, it's a good starting point for understanding logging in Go.
Here's a simple example:
package main

import (
	"log"
)

func main() {
	log.Println("This is a log message")
	log.Printf("Hello, %s!", "Gopher")
	// Note: log.Fatal prints the message, then calls os.Exit(1).
	log.Fatal("This is a fatal error")
}
The log package is straightforward but lacks features like log levels and structured logging. For more complex applications, you'll likely want to use a third-party library.
slog Package
Go 1.21 introduced the slog package, bringing structured logging capabilities to the standard library. This addition is significant for developers who want to implement structured logging without relying on third-party libraries.
While slog may not have all the features of some third-party libraries, its inclusion in the standard library makes it an attractive option for projects that want to minimize external dependencies.
Popular Third-Party Logging Libraries
Because reinventing the wheel is so 2000s
There are several popular logging libraries in the Go ecosystem. Let's look at three of the most widely used:
Zerolog: known for its zero-allocation JSON logging
Zap: Uber's high-performance structured logger
Logrus: a feature-rich logger with a familiar API and wide adoption
// Zerolog
logger := zerolog.New(os.Stdout).With().Timestamp().Logger()
logger.Info().Str("library", "zerolog").Msg("This is a log message")

// Zap
logger, _ := zap.NewProduction()
defer logger.Sync()
logger.Info("This is a log message", zap.String("library", "zap"))

// Logrus
logrus.WithFields(logrus.Fields{
	"library": "logrus",
}).Info("This is a log message")
In my experience, Zerolog and Zap offer the best performance, while Logrus provides a more familiar interface for developers coming from other languages.
Structured logging is a game-changer when it comes to log analysis. Instead of parsing unstructured text, you can work with JSON objects that are easily searchable and filterable.
Structured logging has saved me countless hours when debugging production issues. Being able to quickly filter and analyze logs based on specific fields is invaluable.
Comparison of Logging Options
When choosing a logging solution for your Go project, consider the following comparison:
Here's a quick feature comparison:

| Feature            | log  | slog    | Zap           | Zerolog       | Logrus   |
|--------------------|------|---------|---------------|---------------|----------|
| Structured Logging | No   | Yes     | Yes           | Yes           | Yes      |
| Performance        | Good | Good    | Excellent     | Excellent     | Good     |
| Type-safe API      | No   | Partial | Yes           | Yes           | Partial  |
| Dependency-free    | Yes  | Yes     | No            | No            | No       |
| Advanced Features  | No   | Limited | Yes           | Yes           | Yes      |
| Log Rotation       | No   | No      | Via Extension | Via Extension | Built-in |
| Hooks              | No   | No      | Yes           | Yes           | Yes      |
| Widely Adopted     | Yes  | New     | Yes           | Yes           | Yes      |
Choosing the Right Logger
If you're working on a small project or want to avoid external dependencies, the standard log package or slog might be sufficient.
For high-performance applications where every nanosecond counts, consider Zap or Zerolog.
If you need a feature-rich logger with wide community adoption, Logrus could be a good choice.
For new projects starting with Go 1.21 or later, slog offers a good balance of features and zero external dependencies.
Choose the logging solution that best fits your project's needs, considering factors like performance requirements, dependency management, and team familiarity.
Configuring Log Levels and Output Formats
Choosing your adventure
Most logging libraries support different log levels (e.g., DEBUG, INFO, WARN, ERROR) and output formats (e.g., JSON, console-friendly).
Integrating with Observability Platforms
Because logs are lonely without metrics and traces
In production environments, you'll want to integrate your logs with observability platforms like ELK (Elasticsearch, Logstash, Kibana) or cloud-based solutions like Google Cloud Logging.
Here's a simple example of how you might send logs to Elasticsearch using the olivere/elastic package:
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/olivere/elastic/v7"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal().Err(err).Msg("Failed to create Elasticsearch client")
	}

	hook := zerolog.HookFunc(func(e *zerolog.Event, level zerolog.Level, message string) {
		// The event's internal buffer isn't exported, so index the level
		// and message we're given here.
		_, err := client.Index().
			Index("app-logs").
			BodyJson(map[string]interface{}{
				"level":   level.String(),
				"message": message,
			}).
			Do(context.Background())
		if err != nil {
			// Don't log through the hooked logger here: if Elasticsearch is
			// down, that would recurse. Write to stderr instead.
			fmt.Fprintln(os.Stderr, "failed to send log to Elasticsearch:", err)
		}
	})

	log.Logger = zerolog.New(os.Stdout).Hook(hook).With().Timestamp().Logger()
	log.Info().Str("foo", "bar").Msg("This log will be sent to Elasticsearch")
}
This setup sends every log message to Elasticsearch, allowing you to use Kibana for powerful log analysis and visualization.
Real-World Examples
I really have used this stuff myself
Let me share a genuine problem I encountered in a production Go service, and how improved logging helped us solve it.
The Problem
We had a Go-based API service that handled user authentication for a suite of web applications. This service was experiencing intermittent 503 errors (Service Unavailable) that we couldn't easily reproduce or debug. Here's what we knew initially:
The 503 errors were occurring seemingly at random, affecting about 2% of authentication attempts.
The errors didn't correlate with any specific time of day or traffic patterns.
Our existing logs only showed that a 503 error was returned, without any additional context.
These logs didn't provide enough context to understand why authentication was failing or why we were returning 503 errors instead of 401 (Unauthorized) for failed authentications.
After enhancing our logs with context-rich structured fields (request IDs, error types, timing), we were able to identify the root cause of the problem:
The 503 errors were occurring due to database timeouts, not actual authentication failures.
These timeouts were happening when the database connection pool was exhausted.
The connection pool exhaustion was caused by a separate batch job that was holding onto connections for too long.
Armed with this information, we were able to:
Increase the size of our database connection pool.
Optimize the batch job to release connections more quickly.
Implement a circuit breaker for database operations to fail fast when the database is overloaded.
The result? Our 503 error rate dropped from 2% to 0.01%, and we were able to properly distinguish between service unavailability and actual authentication failures.
This example demonstrates the power of effective logging. By including crucial context (request IDs, error types, timing information) and using structured logging with Zap, we were able to quickly identify and resolve a significant issue that was impacting our users.
Some key takeaways from this experience:
Log at the appropriate level: Use Error for exceptional circumstances, Warn for important but expected issues, Info for general operational events, and Debug for detailed troubleshooting information.
Include timing information: Logging durations for key operations can help identify performance bottlenecks.
Use structured logging: It makes it much easier to filter and analyze logs, especially when aggregating them in a centralized logging system.
Log context, not just errors: Including relevant context (like request IDs or user IDs) in your logs makes it much easier to trace issues across different parts of your system.
Be specific about errors: Instead of generic error messages, log specific error types. This makes it easier to distinguish between different failure modes.
Remember, logs are not just for debugging errors; they're a powerful tool for understanding your application's behavior and performance in production. Invest time in setting up comprehensive logging, and you'll thank yourself later when troubleshooting complex issues.
Best Practices for Golang Logging
Use log levels appropriately: Reserve ERROR for exceptional circumstances and use INFO for routine operations.
Include context: Always include relevant context in your logs, such as request IDs or user IDs.
Be mindful of sensitive data: Never log sensitive information like passwords or API keys.
Use sampling for high-volume logs: In high-traffic services, consider sampling your DEBUG logs to reduce overhead.
Benchmark your logging: Use Go's benchmarking tools to measure the performance impact of your logging.
Here's a simple benchmark comparing string concatenation vs using fields:
func BenchmarkLoggingConcat(b *testing.B) {
	logger := zerolog.New(io.Discard)
	for i := 0; i < b.N; i++ {
		logger.Info().Msg("value is " + strconv.Itoa(i))
	}
}

func BenchmarkLoggingFields(b *testing.B) {
	logger := zerolog.New(io.Discard)
	for i := 0; i < b.N; i++ {
		logger.Info().Int("value", i).Msg("")
	}
}
In my tests, using fields consistently outperforms string concatenation, especially at high volumes.
Conclusion
Log everything; but log it right with a schema (Otel)
Let's recap, shall we?
We've learned that logging is not just about sprinkling fmt.Println() calls like fairy dust throughout your code. No, proper logging is an art form, a science, and sometimes a dark ritual all rolled into one. We've explored the standard library (yawn), dived into the exciting world of third-party libraries (Zerolog and Zap, oh my!), and even tackled the beast known as structured logging.
Remember, logs are like toilet paper. You don't think about them much, but boy oh boy, do you miss them when they're not there (especially at 3 AM when production is on fire).
So go forth and log! Log like the wind! Log like your job depends on it (because, let's face it, it probably does). And the next time someone asks you about your logging strategy, you can smile smugly and say, "Oh, I use a structured, leveled logging system with context-rich messages and high-performance serialization." Then walk away slowly as their jaw hits the floor.
Class dismissed. Now, if you'll excuse me, I have some logs to analyze. These errors won't debug themselves... or is there some AI helping with it already? (Spoiler: They won't. That's why we log.)
We're excited to hear about your experiences with SRE and your views on reliability, observability, and monitoring. Join our SRE Discord community to connect with others who share your interests!
Prathamesh works as an evangelist at Last9, runs SRE stories - where SRE and DevOps folks share their stories, and maintains o11y.wiki - a glossary of all terms related to observability.