
Mar 13th, ‘25 / 13 min read

How to Set Up Logging in Node.js (Without Overthinking It)

Set up logging in Node.js without the headache—learn the essentials, pick the right tools, and keep it simple yet effective.


Logging in Node.js might not be the most exciting part of development, but it’s one of the most important. Whether you're troubleshooting bugs or keeping track of how your app is running, good logs make life easier. Let’s break down how to set up logging the right way.

Why Robust Node Logging is Critical for Production Applications

Let's cut to the chase – proper node log implementation isn't just nice to have, it's your lifeline when things go sideways at 3 AM.

Node.js applications can be chatty beasts, throwing out information constantly. Without a proper logging strategy, you're basically trying to have a conversation in a packed nightclub – good luck making sense of anything when it matters.

Here's what solid node log practices bring to the table:

  • Faster debugging – find and fix issues before your users even notice
  • Performance insights – spot bottlenecks before they turn into firestorms
  • Security awareness – catch suspicious activity early
  • Better visibility – understand what's actually happening in your app
  • Historical context – track how your system behaves over time
  • Operational intelligence – make data-driven decisions about your infrastructure

Beyond just catching errors, a well-implemented node log strategy transforms raw data into actionable insights that help you proactively manage your applications.

💡
If you're looking for a fast and efficient logging library for Node.js, Pino is a great choice. Here’s how to get started: Read more.

Setting Up Your First Professional Node.js Logger

Starting with node logging is like cooking a decent meal – you don't need to be a Michelin-star chef to make something satisfying.

Choosing the Right Logging Library for Your Environment

First things first, you need to choose a logging library. While console.log might seem tempting (we've all been there), it's like bringing a knife to a gunfight when running in production.

Some solid options include:

| Logger  | Best For        | What Makes It Cool                        | Performance Impact | Learning Curve         |
| ------- | --------------- | ----------------------------------------- | ------------------ | ---------------------- |
| Winston | Flexibility     | Custom transports, multiple destinations  | Moderate           | Gentle                 |
| Pino    | Performance     | Lightning fast, low overhead              | Very Low           | Moderate               |
| Bunyan  | Structured Data | JSON logging, CLI viewer included         | Low                | Gentle                 |
| Morgan  | HTTP Logging    | Express-friendly, request tracking        | Low                | Very Gentle            |
| Log4js  | Java Developers | Familiar API for Java devs                | Moderate           | Depends on background  |

Winston tends to be the Swiss Army knife of the bunch, but Pino is gaining serious traction for high-performance needs.

Here's how to set up Winston with multiple transports:

const winston = require('winston');

// Create a logger that writes timestamped JSON logs
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  defaultMeta: { service: 'user-service' },
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

In a larger system, wrap this configuration in a shared package (for example, @your-org/logger) so every service logs the same way:

// In the shared package: StandardLogger applies the Winston
// configuration above, plus correlation IDs and child loggers

// Export a factory function
module.exports = function createStandardLogger(options) {
  return new StandardLogger(options);
};

This shared library offers several advantages:

  1. Consistent logging format across all services
  2. Built-in support for correlation IDs to track requests
  3. Environment-specific configuration
  4. Context-aware child loggers
  5. Unified transport configuration
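Advantages 2 and 4 work together: a child logger binds request context (like a correlation ID) once, and every later log line carries it automatically. Here's a dependency-free sketch of the idea (the names here are illustrative, not the shared library's actual API):

```javascript
// Sketch of context-aware child loggers: each child merges its
// bound context (e.g. a correlation ID) into every entry it emits.
function createLogger(baseContext = {}) {
  return {
    log(level, message, meta = {}) {
      const entry = {
        level,
        message,
        timestamp: new Date().toISOString(),
        ...baseContext,
        ...meta
      };
      console.log(JSON.stringify(entry));
      return entry; // returned so callers can inspect it
    },
    child(extraContext) {
      // Child loggers inherit the parent's context and add their own
      return createLogger({ ...baseContext, ...extraContext });
    }
  };
}

// Bind request context once; every later line carries it automatically
const root = createLogger({ service: 'payment-service' });
const requestLogger = root.child({ correlationId: 'req-123' });
requestLogger.log('info', 'Request received');
```

Real libraries like Winston and Pino expose the same pattern through their own `child()` methods.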

To use it in your microservices:

// In your service
const createLogger = require('@your-org/logger');

const logger = createLogger({
  service: 'payment-service',
  version: require('./package.json').version,
  logToFile: true,
  elasticsearchConfig: {
    node: process.env.ELASTICSEARCH_URL,
    auth: {
      username: process.env.ELASTICSEARCH_USER,
      password: process.env.ELASTICSEARCH_PASSWORD
    }
  }
});

// Use in Express middleware
app.use((req, res, next) => {
  req.logger = logger.requestLogger(req);
  req.logger.info('Request received');
  next();
});

// Use in route handlers
app.post('/api/payments', (req, res) => {
  req.logger.info('Processing payment', { amount: req.body.amount });
  
  // Business logic...
  
  req.logger.info('Payment successful', { transactionId: result.id });
  res.json(result);
});

By standardizing your logging across services, you can:

  1. Easily trace requests as they flow through your system
  2. Ensure consistent data format for analysis tools
  3. Simplify onboarding for developers moving between teams
  4. Create unified dashboards that work with all services
💡
If you need to run scheduled tasks in your Node.js app, cron jobs can help. Here’s how to set them up and manage them: Read more.

Implement Robust Error Handling for Your Logging System

What happens when your logger itself throws an error? Make sure your logging code has its own error handling:

// Create a fail-safe wrapper for your logger
function createFailSafeLogger(logger) {
  // Create a simple console fallback logger
  const fallbackLogger = {
    error: console.error,
    warn: console.warn,
    info: console.info,
    debug: console.debug,
    verbose: console.log
  };
  
  // Count of consecutive failures
  let failureCount = 0;
  // Is the main logger currently in failed state?
  let usingFallback = false;
  // When did we last try to recover?
  let lastRecoveryAttempt = 0;
  
  // The recovery interval increases with failures
  const getRecoveryInterval = () => Math.min(30000, 1000 * Math.pow(2, failureCount));
  
  // Wrapper function that catches errors
  function safeLog(level, originalMethod) {
    return function(...args) {
      try {
        // If we're in fallback mode, check if we should try recovery
        if (usingFallback) {
          const now = Date.now();
          if (now - lastRecoveryAttempt > getRecoveryInterval()) {
            lastRecoveryAttempt = now;
            // Try the original method to see if logging is working again
            originalMethod.apply(logger, args);
            // If we get here, recovery succeeded
            console.info('Logging system recovered after failure');
            usingFallback = false;
            return;
          }
          // Still in recovery period, use fallback
          fallbackLogger[level](...args);
          return;
        }
        
        // Normal operation - use the original logger
        originalMethod.apply(logger, args);
        // Reset failure count after successful logs
        if (failureCount > 0) failureCount = 0;
      } catch (error) {
        // Logger failed - increment failure count
        failureCount++;
        // Switch to fallback mode
        usingFallback = true;
        lastRecoveryAttempt = Date.now();
        
        // Log the failure and original message with fallback
        fallbackLogger.error(`Logging system failure (${failureCount}): ${error.message}`);
        fallbackLogger[level]('Original log (via fallback):', ...args);
        
        // If this keeps happening, emit a metric or alert
        if (failureCount >= 5) {
          // This would be a call to your monitoring system
          try {
            require('./monitoring').emitAlert('LOGGING_SYSTEM_FAILURE', {
              failureCount,
              lastError: error.message
            });
          } catch (monitoringError) {
            // Last resort
            console.error('Both logging and monitoring systems failed');
          }
        }
      }
    };
  }
  
  // Create the wrapped logger
  const safeLogger = {};
  for (const level of ['error', 'warn', 'info', 'debug', 'verbose']) {
    safeLogger[level] = safeLog(level, logger[level]);
  }
  
  return safeLogger;
}

// Usage
const robustLogger = createFailSafeLogger(logger);

// Use it normally - it will handle failures gracefully
robustLogger.info('Application starting up');

This fail-safe logging wrapper:

  1. Creates a simple console-based fallback logger
  2. Catches all errors from the main logging system
  3. Implements exponential backoff for recovery attempts
  4. Automatically tries to recover after failures
  5. Alerts your monitoring system if logging repeatedly fails
  6. Ensures your application continues running even if logging breaks

By implementing this pattern, you protect your application from being taken down by logging issues while still maintaining visibility into what's happening.

💡
If you're using Winston for logging in Node.js, here’s a guide to help you set it up and make the most of it: Read more.

Logging Tools and Integrations for Node.js Applications

A craftsman is only as good as their tools, right? Here are some that'll make you look like a logging wizard:

Advanced Visualization and Analysis Tools for Log Data

Last9

If you’re looking for a budget-friendly managed observability solution without sacrificing performance, Last9 is worth a shot. Trusted by industry giants like Disney+ Hotstar, CleverTap, and Replit, Last9 delivers high-cardinality observability at scale.

As a telemetry data platform, we have monitored 11 of the 20 largest live-streaming events in history. With seamless OpenTelemetry and Prometheus integration, Last9 unifies logs, metrics, and traces—giving you correlated monitoring, real-time insights, and cost-optimized performance without the usual complexity.

Kibana

Kibana is a visualization tool that works with Elasticsearch, allowing you to search, analyze, and visualize your logs through an intuitive UI. It enables teams to create real-time dashboards that track error rates, response times, and user behavior. With features like alerting and anomaly detection, Kibana is a popular choice for monitoring large-scale applications.

Grafana

Grafana excels at connecting multiple data sources, including logs, metrics, and traces, into a single dashboard. It allows for advanced visualization, alerting, and correlation of logs with performance data, providing deeper insights into system health. Teams can build interactive panels that help identify patterns, detect bottlenecks, and streamline troubleshooting.


Real-time Log Monitoring and Alerting Systems

Setting up real-time monitoring involves more than just collecting logs - you need to analyze them for problems:

// Manual approach with Winston and webhooks
const winston = require('winston');
const axios = require('axios');
// In Winston 3+, custom transports extend the base class from 'winston-transport'
const Transport = require('winston-transport');

// Custom transport that sends alerts for high-severity issues
class AlertTransport extends Transport {
  constructor(options) {
    super(options);
    this.name = 'alert';
    this.level = options.level || 'error';
    this.webhookUrl = options.webhookUrl;
    this.minInterval = options.minInterval || 60000; // 1 minute
    this.lastAlerts = new Map(); // Track recent alerts
  }
  
  async log(info, callback) {
    try {
      // Extract alert key (we don't want to flood with the same alert)
      const alertKey = info.code || info.message;
      
      // Check if we've sent this alert recently
      const now = Date.now();
      const lastSent = this.lastAlerts.get(alertKey) || 0;
      
      if (now - lastSent < this.minInterval) {
        // Skip this alert - too soon after the last one
        return callback(null, true);
      }
      
      // Update the last alert time
      this.lastAlerts.set(alertKey, now);
      
      // Clean up old alerts from the map
      for (const [key, time] of this.lastAlerts.entries()) {
        if (now - time > this.minInterval * 10) {
          this.lastAlerts.delete(key);
        }
      }
      
      // Send the alert
      await axios.post(this.webhookUrl, {
        text: `🚨 ALERT: ${info.message}`,
        severity: info.level,
        service: info.service || 'unknown',
        timestamp: info.timestamp || new Date().toISOString(),
        details: info
      });
      
      callback(null, true);
    } catch (error) {
      console.error('Failed to send alert:', error);
      callback(error);
    }
  }
}

// Add to Winston
winston.add(new AlertTransport({
  level: 'error',
  webhookUrl: process.env.SLACK_WEBHOOK_URL,
  minInterval: 5 * 60000 // 5 minutes
}));

For more sophisticated monitoring, integrate with dedicated services. Here's how you can adapt your Winston logger to send structured logs to Last9:

const winston = require('winston');

// Custom formatter for structured logging
const last9Format = winston.format((info) => {
  if (info.level === 'error' || info.level === 'warn') {
    // Add structured metadata for Last9
    info.last9 = {
      level: info.level,
      message: info.message,
      timestamp: new Date().toISOString(),
      ...(info.error ? { stack: info.error.stack } : {}),
    };
  }
  return info;
});

// Configure Winston logger
const logger = winston.createLogger({
  format: winston.format.combine(
    last9Format(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(), // Add other transports as needed
  ],
});

// Example usage
logger.error('Something went wrong!', { error: new Error('Test error') });
logger.warn('This is a warning');

module.exports = logger;

This prepares logs in a structured format that can be sent to Last9 through OpenTelemetry, Fluent Bit, or another pipeline.
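As one example of that pipeline, a minimal Fluent Bit configuration could tail the JSON log files and forward each record over HTTP (the paths, host, and endpoint below are placeholders for your environment):

```
[INPUT]
    Name    tail
    Path    /var/log/app/*.log
    Parser  json

[OUTPUT]
    Name    http
    Match   *
    Host    logs.example.com
    Port    443
    URI     /v1/logs
    Format  json
    tls     On
```

Because the logs are already structured JSON, the shipper only moves bytes; no parsing logic lives in your application.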

The node log landscape keeps evolving. Some trends to watch:

OpenTelemetry and the Future of Unified Observability

OpenTelemetry is transforming how we think about application monitoring by unifying logs, metrics, and traces:

// OpenTelemetry integration
const opentelemetry = require('@opentelemetry/api'); // used below for tracer and status codes
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { WinstonInstrumentation } = require('@opentelemetry/instrumentation-winston');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const winston = require('winston');

// Set up the tracer provider
const provider = new NodeTracerProvider();
provider.register();

// Register Winston instrumentation
registerInstrumentations({
  instrumentations: [
    new WinstonInstrumentation({
      // Extract trace context from log records
      logHook: (span, record) => {
        record['trace_id'] = span.spanContext().traceId;
        record['span_id'] = span.spanContext().spanId;
      }
    })
  ]
});

// Create your logger
const logger = winston.createLogger({
  // Regular config...
});

// Example of logging with trace context
app.get('/api/products/:id', async (req, res) => {
  const tracer = opentelemetry.trace.getTracer('product-service');
  
  await tracer.startActiveSpan('fetch-product', async (span) => {
    try {
      logger.info('Fetching product', { productId: req.params.id });
      
      const product = await db.findProductById(req.params.id);
      
      if (!product) {
        logger.warn('Product not found', { productId: req.params.id });
        span.setStatus({ code: opentelemetry.SpanStatusCode.ERROR });
        return res.status(404).json({ error: 'Product not found' });
      }
      
      logger.info('Product retrieved successfully', { productId: req.params.id });
      res.json(product);
    } catch (err) {
      logger.error('Error fetching product', { 
        productId: req.params.id,
        error: err.message
      });
      span.setStatus({ code: opentelemetry.SpanStatusCode.ERROR });
      span.recordException(err);
      res.status(500).json({ error: 'Server error' });
    } finally {
      span.end();
    }
  });
});

With OpenTelemetry integration, you can:

  1. Correlate logs with traces and metrics
  2. See the entire journey of a request across services
  3. Understand the performance impact of issues
  4. Create unified dashboards showing the complete picture
  5. Switch between observability tools without changing your code
💡
Understanding log levels helps you separate routine messages from critical issues. Here’s a breakdown of what they mean: Read more.

AI-Powered Log Analysis and Anomaly Detection

Modern logging systems are leveraging AI to find patterns humans might miss:

// Integrate with an AI-powered analysis service
// ('log-intelligence-sdk' is a hypothetical package, shown for illustration)
const { LogIntelligence } = require('log-intelligence-sdk');

const logAnalyzer = new LogIntelligence({
  apiKey: process.env.LOG_INTELLIGENCE_API_KEY,
  service: 'payment-service',
  environment: process.env.NODE_ENV
});

// Create a custom transport that sends logs for AI analysis
// (in Winston 3+, extend the base class from 'winston-transport')
const Transport = require('winston-transport');

class AIAnalysisTransport extends Transport {
  constructor(options) {
    super(options);
    this.analyzer = options.analyzer;
    this.sampleRate = options.sampleRate || 0.1; // Analyze 10% of logs
  }
  
  log(info, callback) {
    // Random sampling to reduce volume
    if (Math.random() > this.sampleRate) {
      return callback(null, true);
    }
    
    // Send to AI service
    this.analyzer.analyzeLog(info)
      .then(result => {
        // If the AI detected an anomaly, log it
        if (result.anomalyDetected) {
          console.warn('AI detected anomaly:', result.anomalyDetails);
          
          // Maybe trigger an alert (notificationService is your alerting integration)
          if (result.severity > 0.7) {
            notificationService.sendAlert({
              title: 'AI detected unusual log pattern',
              message: result.explanation,
              severity: result.severity,
              source: info
            });
          }
        }
        callback(null, true);
      })
      .catch(err => {
        console.error('AI analysis failed:', err);
        callback(null, true);
      });
  }
}

// Add the AI transport
winston.add(new AIAnalysisTransport({
  analyzer: logAnalyzer,
  sampleRate: 0.2 // Analyze 20% of logs
}));

AI-powered log analysis can:

  1. Detect unusual patterns that might indicate problems
  2. Identify correlations between seemingly unrelated events
  3. Predict potential issues before they become critical
  4. Reduce alert noise by focusing on truly anomalous events
  5. Provide natural language explanations of complex issues

Edge Logging and Client-side Error Tracking

Modern applications need visibility beyond the server:

// Server-side setup for receiving client logs
app.post('/api/client-logs', (req, res) => {
  const { level, message, context } = req.body;
  
  // Add client IP and user agent
  const clientMeta = {
    ip: req.ip,
    userAgent: req.headers['user-agent'],
    source: 'client',
    ...context
  };
  
  // Log with the appropriate level
  if (logger[level]) {
    logger[level](message, clientMeta);
  } else {
    logger.info(message, { ...clientMeta, originalLevel: level });
  }
  
  res.status(200).end();
});

// Client-side logging (browser code)
const clientLogger = {
  _send(level, message, context = {}) {
    // Add some browser context
    const fullContext = {
      url: window.location.href,
      viewport: {
        width: window.innerWidth,
        height: window.innerHeight
      },
      userLanguage: navigator.language,
      ...context
    };
    
    // Send to backend
    fetch('/api/client-logs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ level, message, context: fullContext }),
      // Use keepalive to ensure logs are sent even during page navigation
      keepalive: true
    }).catch(err => console.error('Failed to send log:', err));
  },
  
  error(message, context) {
    console.error(message);
    this._send('error', message, context);
  },
  
  warn(message, context) {
    console.warn(message);
    this._send('warn', message, context);
  },
  
  info(message, context) {
    console.info(message);
    this._send('info', message, context);
  }
};

// Capture unhandled errors
window.addEventListener('error', (event) => {
  clientLogger.error('Unhandled error', {
    message: event.message,
    source: event.filename,
    lineno: event.lineno,
    colno: event.colno,
    stack: event.error?.stack
  });
});

// Capture promise rejections
window.addEventListener('unhandledrejection', (event) => {
  clientLogger.error('Unhandled promise rejection', {
    reason: event.reason?.message || String(event.reason),
    stack: event.reason?.stack
  });
});

// Example usage
clientLogger.info('Page loaded', { 
  pageLoadTime: performance.now(),
  referrer: document.referrer
});

By combining client and server logging, you get:

  1. End-to-end visibility into user experiences
  2. Real-time detection of client-side issues
  3. Correlation between client behavior and server problems
  4. Better context for debugging user-reported issues
  5. Insights into performance across different browsers and devices

Contextual and Semantic Logging for Deeper Insights

Modern logging is moving beyond raw data to capture the meaning behind events:

// Define semantic log categories
const LogCategory = {
  SECURITY: 'security',
  PERFORMANCE: 'performance',
  USER_JOURNEY: 'user-journey',
  SYSTEM: 'system',
  BUSINESS: 'business'
};

// Define log entities
const LogEntity = {
  USER: 'user',
  ORDER: 'order',
  PAYMENT: 'payment',
  PRODUCT: 'product',
  SESSION: 'session'
};

// Create semantically rich logger
const semanticLogger = {
  _log(category, entity, action, level, data) {
    logger[level]({
      category,
      entity,
      action,
      ...data
    });
  },
  
  // Security events
  security: {
    loginSuccess: (userId, data = {}) => 
      semanticLogger._log(LogCategory.SECURITY, LogEntity.USER, 'login_success', 'info', { userId, ...data }),
    
    loginFailure: (userId, reason, data = {}) =>
      semanticLogger._log(LogCategory.SECURITY, LogEntity.USER, 'login_failure', 'warn', { userId, reason, ...data }),
    
    permissionDenied: (userId, resource, data = {}) =>
      semanticLogger._log(LogCategory.SECURITY, LogEntity.USER, 'permission_denied', 'warn', { userId, resource, ...data })
  },
  
  // Business events
  business: {
    orderCreated: (orderId, userId, data = {}) =>
      semanticLogger._log(LogCategory.BUSINESS, LogEntity.ORDER, 'created', 'info', { orderId, userId, ...data }),
    
    orderCompleted: (orderId, amount, data = {}) =>
      semanticLogger._log(LogCategory.BUSINESS, LogEntity.ORDER, 'completed', 'info', { orderId, amount, ...data }),
    
    paymentProcessed: (paymentId, orderId, amount, data = {}) =>
      semanticLogger._log(LogCategory.BUSINESS, LogEntity.PAYMENT, 'processed', 'info', { paymentId, orderId, amount, ...data })
  },
  
  // User journey events
  userJourney: {
    pageView: (userId, page, data = {}) =>
      semanticLogger._log(LogCategory.USER_JOURNEY, LogEntity.USER, 'page_view', 'info', { userId, page, ...data }),
    
    featureUsed: (userId, feature, data = {}) =>
      semanticLogger._log(LogCategory.USER_JOURNEY, LogEntity.USER, 'feature_used', 'info', { userId, feature, ...data })
  },
  
  // System events
  system: {
    serviceStart: (data = {}) =>
      semanticLogger._log(LogCategory.SYSTEM, LogEntity.SESSION, 'service_start', 'info', data),
    
    serviceStop: (data = {}) =>
      semanticLogger._log(LogCategory.SYSTEM, LogEntity.SESSION, 'service_stop', 'info', data),
    
    resourceExhausted: (resource, limit, data = {}) =>
      semanticLogger._log(LogCategory.SYSTEM, LogEntity.SESSION, 'resource_exhausted', 'warn', { resource, limit, ...data })
  }
};

// Usage examples
app.post('/api/login', (req, res) => {
  const { username, password } = req.body;
  
  authenticateUser(username, password)
    .then(user => {
      // Log successful login with semantic context
      semanticLogger.security.loginSuccess(user.id, {
        method: 'password',
        ipAddress: req.ip,
        userAgent: req.headers['user-agent']
      });
      
      res.json({ token: generateToken(user) });
    })
    .catch(err => {
      // Log failed login with semantic context
      semanticLogger.security.loginFailure(username, err.message, {
        ipAddress: req.ip,
        userAgent: req.headers['user-agent'],
        attemptCount: getLoginAttempts(username)
      });
      
      res.status(401).json({ error: 'Authentication failed' });
    });
});

Semantic logging provides:

  1. Consistent categorization of log events
  2. Rich, structured data for analysis
  3. Business-oriented view of technical events
  4. Easy filtering and aggregation by category or entity
  5. Natural mapping to business metrics and KPIs
💡
If you need a simple way to log HTTP requests in your Node.js app, Morgan is a handy tool. Here’s how it works: Read more.

Conclusion

Node logging isn't just about catching errors – it's about gaining visibility into the heartbeat of your applications.

As your applications grow in complexity, your logging strategy should evolve from simple error capturing to a comprehensive observability solution that combines logs with metrics and traces. The tools and patterns covered in this guide provide a solid foundation for building that strategy.

Remember: your future self will thank you for the logs you write today.

💡
Want to chat more about logging or share your node log setup? Join our Discord Community where we're talking about this and more DevOps goodness every day.

FAQs

1. Why is logging important in a Node.js application?

Logging helps track application activity, debug errors, monitor performance, and analyze user behavior. It provides insights into how your application is running and helps identify potential issues before they escalate.

2. What are the best logging libraries for Node.js?

Some of the most popular logging libraries include:

  • Winston: A versatile logging library with customizable transports and formats.
  • Pino: A fast and lightweight logger optimized for high-performance applications.
  • Morgan: A middleware specifically designed for logging HTTP requests in Express applications.

3. How do I set up Winston for logging?

To install Winston, run:

npm install winston

Then, create a basic logger:

const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'app.log' })
  ]
});

logger.info('Hello, Winston!');

4. How do I log HTTP requests in an Express app?

You can use Morgan:

npm install morgan

Then, add it to your Express app:

const express = require('express');
const morgan = require('morgan');
const app = express();

app.use(morgan('combined'));

This logs all incoming HTTP requests in a standardized format.

5. How do I store logs in a file instead of the console?

With Winston, use the File transport:

new winston.transports.File({ filename: 'app.log' })

Pino writes to stdout by default, so you can redirect it to a file from the shell:

node app.js > logs.txt 2>&1

Or configure a file destination in code:

const logger = pino(pino.destination('app.log'));

6. How can I format logs for better readability?

Winston allows custom formats:

const { createLogger, format, transports } = require('winston');
const logger = createLogger({
  format: format.combine(
    format.timestamp(),
    format.printf(({ timestamp, level, message }) => {
      return `${timestamp} [${level.toUpperCase()}]: ${message}`;
    })
  ),
  transports: [new transports.Console()]
});

7. How do I handle different log levels?

Winston and Pino support log levels like info, warn, and error:

logger.error('This is an error message');
logger.warn('This is a warning');
logger.info('This is an info message');

Log levels help filter messages based on importance.
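Under the hood, level filtering is just a numeric comparison against a threshold. A toy version of the mechanism (real libraries use the same idea, with more levels):

```javascript
// Lower number = more severe, matching Winston's default ordering
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function makeLogger(minLevel = 'info') {
  const threshold = LEVELS[minLevel];
  const emitted = []; // collected here so the example is easy to inspect
  const logger = { emitted };
  for (const level of Object.keys(LEVELS)) {
    logger[level] = (message) => {
      // Drop anything less severe than the configured threshold
      if (LEVELS[level] <= threshold) emitted.push({ level, message });
    };
  }
  return logger;
}

const logger = makeLogger('warn');
logger.error('disk failure'); // kept: more severe than 'warn'
logger.info('cache warmed');  // dropped silently
```

This is why setting the level to 'warn' in production silences debug chatter without touching any call sites.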

8. How do I send logs to an external service?

You can use logging services like Loggly, Datadog, or Last9. In Winston, configure a transport for external logging:

new winston.transports.Http({ host: 'logs.example.com', port: 3000 })

For Pino, pipe stdout to a log shipper such as the community pino-http-send tool, which POSTs each log line to an HTTP endpoint:

node app.js | pino-http-send --url=http://logs.example.com

9. How do I handle logging in production?

  • Use log rotation to prevent excessive file sizes.
  • Store logs in a centralized location for analysis.
  • Use structured logging (JSON format) for easier parsing.
  • Integrate with an observability tool like Last9 for real-time insights.

10. How do I filter sensitive data from logs?

Use Winston’s format option to redact sensitive information:

const { format } = require('winston');

const redactPasswords = format((info) => {
  if (typeof info.message === 'string' && info.message.includes('password')) {
    info.message = info.message.replace(/password:.*/, 'password:[REDACTED]');
  }
  return info;
});

// Apply it in your logger: format.combine(redactPasswords(), format.json())

Many logging platforms also support data redaction features.
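String matching like the snippet above misses sensitive values in nested metadata. A field-based sketch that redacts known sensitive keys anywhere in an object (the key list is just an example; tailor it to your data):

```javascript
const SENSITIVE_KEYS = new Set(['password', 'token', 'authorization', 'creditCard']);

// Walk the object and replace values of sensitive keys, without
// mutating the original (log metadata is often reused elsewhere).
function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : redact(val);
    }
    return out;
  }
  return value;
}

redact({ user: 'amy', password: 'hunter2', meta: { token: 'abc' } });
// → { user: 'amy', password: '[REDACTED]', meta: { token: '[REDACTED]' } }
```

Run this inside a Winston format (or a Pino serializer) so redaction happens before any transport sees the data.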

Authors

Preeti Dewani, Technical Product Manager at Last9