As Node.js applications grow in complexity, basic console.log
statements become insufficient for effective debugging and monitoring. Proper logging libraries provide structured data, searchable records, and integration with monitoring tools—all critical for production applications.
This guide covers the most effective Node.js logging libraries available today, with practical implementation examples and selection criteria based on different project requirements.
How to Choose the Best Node.js Logging Library for Your Project
Your choice of logging library affects how you'll debug issues and monitor your application. Each library has different strengths that suit specific use cases.
Here's a comparison of popular Node.js logging libraries:
| Library | GitHub Stars | Key Strength | Best For |
|---|---|---|---|
| Winston | 20k+ | Highly configurable | Complex apps |
| Pino | 12k+ | Performance-focused | High-throughput systems |
| Bunyan | 7k+ | JSON logging | Microservices |
| Morgan | 9k+ | HTTP request logging | Express.js apps |
| debug | 10k+ | Lightweight debugging | Small projects |
| log4js | 5k+ | Familiar API for Java devs | Teams with Java background |
| loglevel | 2k+ | Minimal but flexible | Browser-compatible apps |
| roarr | 1k+ | Serialization performance | High-scale services |
| tracer | 1k+ | Colorized output | Development environments |
Here's what each library offers:
Winston
Winston offers many configuration options. You can send logs to multiple destinations at once (files, databases, external services), and community transports cover most common logging targets. Winston prioritizes flexibility over speed.
Pino
Pino focuses on performance. It keeps serialization minimal and moves formatting and transport work off the main thread (into worker threads or separate processes) to reduce impact on your application. Benchmarks show Pino handling 10,000+ logs per second with minimal overhead. Use Pino when speed matters most.
Bunyan
Bunyan uses a JSON-first approach. Each log entry is a JSON object with a consistent structure, making automated analysis easier. Its serializers convert common objects (like Error instances) to JSON-friendly formats. Bunyan includes tools for reading and formatting logs during development.
Morgan
Morgan specializes in HTTP request logging for Express applications. It comes with preset formats for access logs (common, combined, dev) and works as Express middleware. Morgan works well alongside general loggers like Winston or Pino.
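As a minimal sketch, wiring Morgan into an Express app looks like this ('dev' and 'combined' are two of Morgan's built-in formats):
const express = require('express');
const morgan = require('morgan');
const app = express();
// Concise colored output in development, Apache-style access logs in production
app.use(morgan(process.env.NODE_ENV === 'production' ? 'combined' : 'dev'));
app.get('/', (req, res) => res.send('ok'));
app.listen(3000);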
debug
The debug library offers a simple approach for temporary debugging that's easy to turn on or off. It uses environment variables to control which components produce logs, allowing selective debugging without code changes. Its small size makes it suitable for libraries and small applications.
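A quick sketch (the myapp:db namespace is just an example):
// db.js
const debug = require('debug')('myapp:db');
function connect(uri) {
  debug('connecting to %s', uri);
  // ...connection logic
}
module.exports = { connect };
// Enable output without touching code:
//   DEBUG=myapp:* node app.js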
log4js
Teams coming from Java often prefer log4js for its familiar interface, similar to log4j. It provides hierarchical loggers and configuration that Java developers will recognize. While not the fastest option, the familiar API helps teams with Java experience.
loglevel
For applications running in both Node.js and browsers, loglevel provides consistent logging across environments. Its API works the same on both server and client, adapting to each environment automatically. This consistency helps with universal JavaScript applications.
roarr
Roarr takes a different approach to performance: logging calls are always present in the code, but output is only produced when it's requested via the ROARR_LOG environment variable, which keeps the cost of disabled logging close to zero. Consider roarr for very high-throughput applications.
tracer
Tracer focuses on developer-friendly console output with colors, timestamps, and automatic stack traces. It's useful during development when you need clearly visible logs that are easy to read in the terminal.
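A small sketch of typical usage; colorConsole() is tracer's colorized console logger, and the level option shown here is optional:
const tracer = require('tracer');
// Colorized output with timestamps and file/line information
const logger = tracer.colorConsole({ level: 'debug' });
logger.debug('Cache warmed');
logger.info('Server ready on port 3000');
logger.error('Lookup failed');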
When choosing a library, consider:
- Performance needs: High-throughput applications work best with Pino or roarr
- Data structure: Bunyan and Winston provide consistent JSON formatting
- Team experience: Java teams may prefer log4js, while frontend developers might choose loglevel
- Integration requirements: Look for libraries with connections to your monitoring tools
- Application type: HTTP-heavy applications benefit from Morgan alongside a general logger
How to Install and Configure Winston for Powerful Structured Logging
Winston has been the go-to Node.js logging library for years—and for good reason. It's like the Swiss Army knife of logging.
const winston = require('winston');
// Create a logger
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
});
// Add console output in development
if (process.env.NODE_ENV !== 'production') {
logger.add(new winston.transports.Console({
format: winston.format.simple()
}));
}
// Use it in your code
logger.info('Server started on port 3000');
logger.error('Database connection failed', { reason: 'timeout', attempt: 3 });
What makes Winston shine:
- Multiple transport options: Send logs to files, console, HTTP services, or databases
- Log levels: Filter by severity (error, warn, info, verbose, debug, silly)
- Custom formats: Structure your logs exactly how you need them
Winston excels in complex applications where you need granular control over your logging pipeline.
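For example, custom formats are plain transform functions you can drop into the pipeline. Here's a sketch of one that tags every entry with the process ID (the field name is illustrative):
const winston = require('winston');
// A custom format is a transform over the info object
const addProcessInfo = winston.format((info) => {
  info.pid = process.pid;
  return info;
});
const logger = winston.createLogger({
  format: winston.format.combine(
    addProcessInfo(),
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});
logger.info('Cache warmed');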
Speed Up Your Node.js Apps with Pino's High-Performance Logging System
When every millisecond counts, Pino steps up. It's built for speed first, with minimal overhead.
const pino = require('pino');
// Create a logger - keeps it simple
const logger = pino({
level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
transport: {
target: 'pino-pretty',
options: {
colorize: true
}
}
});
// Log structured events
logger.info({ user: 'john', action: 'login' }, 'User logged in');
logger.error({ err: new Error('Boom!') }, 'Something went wrong');
// Child loggers for context
const orderLogger = logger.child({ module: 'orders' });
orderLogger.info({ orderId: '12345' }, 'New order received');
Pino's strengths:
- Blazing fast: Minimal impact on your app's performance
- Async logging: Writes happen off the main thread
- JSON by default: Ready for log aggregation services
- Low overhead: About 5x faster than Winston in benchmarks
Pino works best when performance is non-negotiable, like high-traffic APIs or resource-constrained environments.
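For HTTP-heavy services, the companion pino-http package (a separate install) attaches a per-request child logger so request logging stays on the same fast path. A rough sketch, with an example route:
const express = require('express');
const pino = require('pino');
const pinoHttp = require('pino-http');
const app = express();
const logger = pino({ level: 'info' });
// Logs every request/response pair and exposes req.log for handler-level logging
app.use(pinoHttp({ logger }));
app.get('/orders/:id', (req, res) => {
  req.log.info({ orderId: req.params.id }, 'Fetching order');
  res.json({ id: req.params.id });
});
app.listen(3000);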
Why Bunyan's JSON-First Approach Makes Log Analysis Much Easier
Bunyan takes a strong stance: logs should be JSON objects, not strings. This makes parsing and analyzing them much easier.
const bunyan = require('bunyan');
// Create a logger
const log = bunyan.createLogger({
name: 'myapp',
streams: [
{
level: 'info',
path: './logs/app.log'
},
{
level: 'error',
path: './logs/error.log'
},
{
level: 'debug',
stream: process.stdout
}
],
serializers: bunyan.stdSerializers
});
// Log with context
log.info({ req_id: '123abc' }, 'Starting request processing');
log.error({ err: new Error('Failed to fetch data') }, 'API request failed');
Why developers choose Bunyan:
- Built-in serializers: Special handlers for common objects like errors and HTTP requests
- Child loggers: Inherit settings while adding context to all logs
- CLI tool: Parse and pretty-print JSON logs for human readability
- Clean structure: Consistent log format makes analysis straightforward
Bunyan shines in microservice architectures where structured logging across services is crucial.
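Building on the logger above, child loggers and the CLI look like this (the component field is just an example):
// Child loggers inherit streams and serializers, adding fixed context to every entry
const dbLog = log.child({ component: 'database' });
dbLog.info({ host: 'db-primary' }, 'Connection established');
dbLog.warn({ retries: 2 }, 'Slow query detected');
// During development, pipe JSON output through the bunyan CLI for readable logs:
//   node app.js | ./node_modules/.bin/bunyan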
Building Your Own Custom Node.js Logging Solution from Scratch
Sometimes off-the-shelf libraries don't quite fit. Building your own logging wrapper gives you complete control.
// logger.js
const fs = require('fs');
const path = require('path');
class Logger {
constructor(options = {}) {
this.appName = options.appName || 'app';
this.logDir = options.logDir || './logs';
this.logLevel = options.logLevel || 'info';
// Create log directory if it doesn't exist
if (!fs.existsSync(this.logDir)) {
fs.mkdirSync(this.logDir, { recursive: true });
}
this.levels = {
error: 0,
warn: 1,
info: 2,
debug: 3
};
}
_log(level, message, meta = {}) {
if (this.levels[level] > this.levels[this.logLevel]) return;
const timestamp = new Date().toISOString();
const logEntry = {
timestamp,
level,
message,
appName: this.appName,
...meta
};
const logString = JSON.stringify(logEntry) + '\n';
// Write to console (logString already ends with a newline)
process.stdout.write(logString);
// Write to file (synchronous append keeps the example simple; use a write stream for high volume)
const logFile = path.join(this.logDir, `${this.appName}-${level}.log`);
fs.appendFileSync(logFile, logString);
}
error(message, meta) { this._log('error', message, meta); }
warn(message, meta) { this._log('warn', message, meta); }
info(message, meta) { this._log('info', message, meta); }
debug(message, meta) { this._log('debug', message, meta); }
}
module.exports = Logger;
Usage:
const Logger = require('./logger');
const logger = new Logger({
appName: 'payment-service',
logLevel: process.env.NODE_ENV === 'production' ? 'info' : 'debug'
});
logger.info('Application started', { port: 3000 });
Custom loggers work well when:
- You need specific formatting not supported by existing libraries
- Your team wants a simplified API tailored to your use cases
- You want to reduce dependencies in your project
How to Connect Your Node.js Logs to Modern Observability Platforms
Logs are most valuable when they're part of a broader observability strategy. Here's how to connect your Node.js logs to monitoring platforms:
// Winston + Last9 example
const winston = require('winston');
const { Last9Transport } = require('winston-last9');
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
defaultMeta: { service: 'user-service' },
transports: [
new winston.transports.Console(),
new Last9Transport({
apiKey: process.env.LAST9_API_KEY,
source: 'backend-api'
})
]
});
// Now logs flow to Last9 for correlation with metrics and traces
logger.info('User created', { userId: '123', plan: 'premium' });
Connecting your logs to an observability platform like Last9 brings several benefits:
- Context: Correlate logs with metrics and traces
- Search: Query across all your logs with sophisticated filters
- Alerts: Get notified when important log patterns emerge
- Visualization: Spot trends that raw logs might hide
For teams running in production, this connection is essential for quick troubleshooting and performance optimization.

6 Essential Node.js Logging Best Practices Every Developer Should Follow
The right library is just the start. These practices will level up your logging game:
1. Use log levels properly (see the example after this list):
   - ERROR: When something breaks
   - WARN: For concerning but non-fatal issues
   - INFO: Key application events
   - DEBUG: For development insights
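Assuming a logger configured like the ones above (the order fields here are made up), the levels map to situations like this:
logger.error('Payment capture failed', { orderId: 'A-1001' });  // something broke
logger.warn('Retrying payment capture', { attempt: 2 });        // concerning but non-fatal
logger.info('Order placed', { orderId: 'A-1001' });             // key application event
logger.debug('Gateway response received', { latencyMs: 112 });  // development insight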
2. Implement log rotation
const { createLogger, format, transports } = require('winston');
require('winston-daily-rotate-file');
const fileRotateTransport = new transports.DailyRotateFile({
filename: 'app-%DATE%.log',
datePattern: 'YYYY-MM-DD',
maxFiles: '14d',
maxSize: '20m'
});
const logger = createLogger({
format: format.combine(format.timestamp(), format.json()),
transports: [
fileRotateTransport,
new transports.Console()
]
});
3. Sanitize sensitive data
const sensitiveKeys = ['password', 'token', 'credit_card'];
function sanitize(obj) {
const result = { ...obj };
for (const key of Object.keys(result)) {
if (sensitiveKeys.includes(key.toLowerCase())) {
result[key] = '[REDACTED]';
} else if (typeof result[key] === 'object' && result[key] !== null) {
result[key] = sanitize(result[key]);
}
}
return result;
}
logger.info('User data', sanitize(userData));
4. Include timing information for performance tracking
function logTimings(name, fn) {
return async (...args) => {
const start = performance.now();
try {
return await fn(...args);
} finally {
const duration = performance.now() - start;
logger.info(`${name} completed`, { durationMs: duration });
}
};
}
const fetchUsers = logTimings('fetchUsers', async () => {
// database query here
});
5. Add request IDs for tracking flows
const crypto = require('crypto');
app.use((req, res, next) => {
req.id = crypto.randomUUID();
req.logger = logger.child({ requestId: req.id });
next();
});
app.get('/users', (req, res) => {
req.logger.info('Fetching users');
// ...
});
6. Structure your logs as JSON
// Instead of this:
logger.info('User with ID 123 purchased plan premium');
// Do this:
logger.info('User purchased plan', { userId: '123', plan: 'premium' });
Following these practices makes your logs more useful when you actually need them.
Node.js Logging Scenarios: Error Handling, API Requests, and Database Queries
Let's tackle some real-world logging situations:
Error Handling
process.on('uncaughtException', (error) => {
logger.error('Uncaught exception', {
error: {
message: error.message,
stack: error.stack,
name: error.name
}
});
// Give logger time to flush before exiting
setTimeout(() => process.exit(1), 1000);
});
process.on('unhandledRejection', (reason, promise) => {
logger.error('Unhandled rejection', {
reason: reason instanceof Error ? {
message: reason.message,
stack: reason.stack,
name: reason.name
} : reason,
promise
});
});
API Request Logging
// Express middleware for request logging
app.use((req, res, next) => {
const start = Date.now();
// Log when the request completes
res.on('finish', () => {
const duration = Date.now() - start;
logger.info('Request processed', {
method: req.method,
url: req.url,
statusCode: res.statusCode,
durationMs: duration,
userAgent: req.get('User-Agent'),
ip: req.ip
});
});
next();
});
Database Query Logging
// Mongoose middleware example
const mongoose = require('mongoose');
const logger = require('./logger');
mongoose.set('debug', (collectionName, method, query, doc) => {
logger.debug(`MongoDB ${method}`, {
collection: collectionName,
query: JSON.stringify(query),
doc: JSON.stringify(doc),
loggedAt: new Date().toISOString()
});
});
// Sequelize logging example (benchmark: true passes query duration to the callback)
const { Sequelize } = require('sequelize');
const sequelize = new Sequelize('database', 'username', 'password', {
  benchmark: true,
  logging: (sql, timing) => {
    logger.debug('Executed SQL', { sql, durationMs: timing });
  }
});
These patterns give you visibility into different parts of your application, making troubleshooting much easier.
How to Fix Common Node.js Logging Problems: Memory Leaks, Permissions, and Circular References
Even logging itself can have problems. Here's how to handle common issues:
Memory Leaks
// Problem: Creating new transports on each request
app.get('/api', (req, res) => {
// DON'T DO THIS
const logger = winston.createLogger({
transports: [new winston.transports.File({ filename: 'app.log' })]
});
});
// Solution: Create logger once and reuse
const logger = winston.createLogger({
transports: [new winston.transports.File({ filename: 'app.log' })]
});
app.get('/api', (req, res) => {
// Use the existing logger
logger.info('API request received');
});
Log File Permission Issues
// File transports report permission errors asynchronously, so a plain try/catch
// around createLogger won't catch them; check writability up front instead
const fs = require('fs');
const path = require('path');
const logPath = '/var/log/app.log';
let canWriteLogFile = true;
try {
  fs.accessSync(path.dirname(logPath), fs.constants.W_OK);
} catch (error) {
  console.error('Log directory not writable, falling back to console only', error);
  canWriteLogFile = false;
}
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    ...(canWriteLogFile ? [new winston.transports.File({ filename: logPath })] : [])
  ]
});
Handling Circular References
// Problem: Circular references cause JSON.stringify to fail
const obj1 = { name: 'Object 1' };
const obj2 = { name: 'Object 2', ref: obj1 };
obj1.ref = obj2;
// With a plain JSON.stringify-based format, this throws a TypeError
logger.info('Processing objects', { obj1, obj2 });
// Solution: use a circular-safe stringify in the logger's format
const safeStringify = require('json-stringify-safe');
const safeLogger = winston.createLogger({
  format: winston.format.printf(info => safeStringify(info)),
  transports: [new winston.transports.Console()]
});
safeLogger.info('Processing objects', { obj1, obj2 }); // logs without throwing
Being aware of these issues helps you set up more reliable logging systems.
Advanced Node.js Logging Techniques for Distributed Systems
When your application grows beyond a single service, logging becomes more complex. Here's how to handle logging in distributed environments:
Implement Correlation IDs for Request Tracing
Correlation IDs help you follow a request across multiple services:
const express = require('express');
const { v4: uuidv4 } = require('uuid');
const winston = require('winston');
const app = express();
// Create a logger
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [new winston.transports.Console()]
});
// Middleware to add correlation IDs
app.use((req, res, next) => {
// Use existing correlation ID from upstream service or create new one
req.correlationId = req.headers['x-correlation-id'] || uuidv4();
// Add correlation ID to response headers
res.setHeader('x-correlation-id', req.correlationId);
// Create request-specific logger with correlation ID
req.logger = logger.child({ correlationId: req.correlationId });
req.logger.info('Request received', {
method: req.method,
path: req.path,
ip: req.ip
});
// Track response
const start = Date.now();
res.on('finish', () => {
req.logger.info('Response sent', {
statusCode: res.statusCode,
durationMs: Date.now() - start
});
});
next();
});
// Usage in route handlers
app.get('/api/users', async (req, res) => {
req.logger.info('Fetching users from database');
try {
// When calling another service, pass the correlation ID
const response = await fetch('http://auth-service/validate', {
headers: {
'x-correlation-id': req.correlationId
}
});
// Log the result
req.logger.info('Auth service responded', {
status: response.status
});
// Rest of handler...
res.json({ users: [] });
} catch (error) {
req.logger.error('Error fetching users', {
error: error.message
});
res.status(500).json({ error: 'Failed to fetch users' });
}
});
With correlation IDs, you can trace a request's journey through your entire system, making debugging much easier.
Configure Environment-Specific Logging Formats
Different environments need different log formats:
const winston = require('winston');
// Determine environment
const env = process.env.NODE_ENV || 'development';
// Configure format based on environment
let logFormat;
if (env === 'development') {
// Human-readable, colorized logs for development
logFormat = winston.format.combine(
winston.format.timestamp(),
winston.format.colorize(),
winston.format.printf(({ level, message, timestamp, ...meta }) => {
return `${timestamp} ${level}: ${message} ${Object.keys(meta).length ? JSON.stringify(meta, null, 2) : ''}`;
})
);
} else {
// JSON logs for production (better for log aggregation)
logFormat = winston.format.combine(
winston.format.timestamp(),
winston.format.json()
);
}
// Create logger with environment-specific format
const logger = winston.createLogger({
level: env === 'production' ? 'info' : 'debug',
format: logFormat,
transports: [
new winston.transports.Console(),
// Add file transports in production
...(env === 'production' ? [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
] : [])
]
});
Set Up Logging for Serverless Functions
Serverless environments require special handling:
// AWS Lambda example
const winston = require('winston');
// Create logger optimized for Lambda
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
defaultMeta: {
service: 'user-api',
runtime: 'aws-lambda'
},
transports: [
new winston.transports.Console()
]
});
// Lambda handler
exports.handler = async (event, context) => {
// Add request context to all logs
const contextLogger = logger.child({
awsRequestId: context.awsRequestId,
functionName: context.functionName,
functionVersion: context.functionVersion,
// Add correlation ID if available
correlationId: event.headers?.['x-correlation-id'] || context.awsRequestId
});
contextLogger.info('Lambda invocation', { event });
try {
// Function logic...
const result = { message: 'Success' };
contextLogger.info('Lambda completed successfully');
return result;
} catch (error) {
contextLogger.error('Lambda execution failed', {
error: error.message,
stack: error.stack
});
throw error;
}
};
Explore More Specialized Logging Libraries
Beyond the major libraries we've covered, these specialized options serve specific needs:
log4js: For Developers Coming from Java
If your team has experience with log4j in Java, log4js offers a familiar API:
const log4js = require('log4js');
log4js.configure({
appenders: {
console: { type: 'console' },
file: { type: 'file', filename: 'app.log' },
errors: { type: 'file', filename: 'errors.log' }
},
categories: {
default: { appenders: ['console', 'file'], level: 'info' },
errors: { appenders: ['errors'], level: 'error' }
}
});
const logger = log4js.getLogger();
const errorLogger = log4js.getLogger('errors');
logger.info('Application started');
errorLogger.error('Connection failed', new Error('Database timeout'));
loglevel: Lightweight and Browser-Compatible
For applications that run in both Node.js and browsers:
const log = require('loglevel');
// Set the log level (trace/debug/info/warn/error)
log.setLevel(process.env.NODE_ENV === 'production' ? 'warn' : 'info');
// Use like console
log.info('User logged in', { userId: '123' });
log.error('Failed to process payment', { orderId: '456' });
// Create named loggers for different modules
const dbLogger = log.getLogger('database');
dbLogger.setLevel('debug');
dbLogger.debug('Connection established');
This approach is perfect for isomorphic JavaScript applications that need consistent logging between client and server.
Wrapping Up
Effective logging is a key part of monitoring and troubleshooting any application—but the right setup depends on your project and how your team works. At Last9, we bring observability data together in one place, so your team can focus on fixing issues, not switching between tools.
By integrating with OpenTelemetry and Prometheus, we optimize performance, cost, and real-time insights—making it easier to correlate metrics, logs, and traces for smarter monitoring and alerting.
Talk to us to learn more about the platform's capabilities, or get started for free today!
FAQs
Which Node.js logging library is the fastest?
Pino consistently ranks as the fastest Node.js logging library in benchmarks. It minimizes CPU overhead by doing less work on the main thread and focusing on async operations, making it 5-10 times faster than alternatives like Winston when handling high volumes of logs.
Should I log to files or use a logging service?
For production environments, use both. Log to local files as a fallback, but stream logs to a service like Last9 for better search, analysis, and alerting capabilities. For development, console logs with proper formatting are usually sufficient.
How do I handle logs in a microservices architecture?
Add correlation IDs to trace requests across services. Use a consistent logging format across all services. Centralize log collection with a tool like Last9 that supports high-cardinality data, which is critical when dealing with many microservices generating logs simultaneously.
What's the difference between logging and application monitoring?
Logging captures discrete events as they happen, while monitoring tracks the health and performance metrics of your application over time. They complement each other—logs help you understand what happened at a specific moment, while monitoring helps you spot trends and anomalies.
How do I log sensitive information securely?
Never log passwords, tokens, or personal identifiable information. Create sanitization functions that automatically redact sensitive fields. If you must log related information, use reference IDs instead of the actual sensitive data.
Can logging impact my application's performance?
Yes, especially with synchronous logging or verbose log levels in production. To minimize impact: use asynchronous logging, implement log batching, choose performant libraries like Pino, and adjust log levels in production to only capture what's necessary.