
Apr 24th, ‘25 / 10 min read

How Does OpenTelemetry Logging Work?

OpenTelemetry logging helps standardize how logs are collected and processed across different systems, providing clear visibility into your apps.

Modern systems throw off logs like confetti—and making sense of all that noise is half the battle. OpenTelemetry logging offers a way to bring some order to the chaos. It helps DevOps teams collect logs in a consistent format, no matter what language or framework they’re working with.

In this guide, we’ll walk through what OpenTelemetry logging is, why it matters, and how to put it to work in your stack.

What Is OpenTelemetry Logging?

OpenTelemetry logging refers to the collection, processing, and exporting of log data using the OpenTelemetry framework. Unlike traditional logging approaches that often use proprietary formats and protocols, OpenTelemetry offers a vendor-neutral, open-source standard that works across different services and technologies.

At its core, OpenTelemetry logs provide context-rich information about events happening within your applications. When combined with metrics and traces (the other two signal types in OpenTelemetry), logs help create a complete picture of your system's behavior.

Why OpenTelemetry Logs Matter for DevOps Teams

For DevOps engineers, OpenTelemetry logging brings several key benefits:

  • Consistent Data Collection: Gather logs in a standardized format across multiple services, languages, and environments
  • Reduced Vendor Lock-in: Freedom to switch between different observability backends without changing your instrumentation code
  • Better Context: Correlate logs with traces and metrics for more powerful troubleshooting
  • Open Standards: Benefit from community-driven innovation rather than proprietary technology

By adopting OpenTelemetry logs, your team can build a more resilient, observable system that scales with your needs.

💡
If you're wondering how OpenTelemetry compares to older tools like OpenTracing, this breakdown covers the key differences: OpenTelemetry vs OpenTracing.

Getting Started with OpenTelemetry Logging

Setting up OpenTelemetry logging involves a few key components that work together:

Understanding the Core Components

OpenTelemetry's logging architecture consists of:

  1. Log Sources: Where log events originate (applications, services, infrastructure)
  2. SDK: Libraries that capture and process logs in your application
  3. Collector: Optional component that receives, processes, and exports telemetry data
  4. Exporters: Components that send logs to your chosen observability backend

This modular design lets you customize how logs flow through your system while maintaining compatibility with the broader OpenTelemetry ecosystem.
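As a rough mental model of that flow, here is a minimal sketch in plain JavaScript: a source emits records, a processor batches them, and an exporter ships each batch. This is an illustration of the pipeline shape only, not the real SDK API; `makePipeline` and its batching behavior are invented for this example.

```javascript
// Minimal pipeline sketch mirroring the components above: emitted
// records are buffered (processor) and shipped in batches (exporter).
function makePipeline(exporter, batchSize = 2) {
  const buffer = [];
  return {
    emit(record) {
      // SDK-side capture: buffer the record
      buffer.push(record);
      if (buffer.length >= batchSize) this.flush();
    },
    flush() {
      // Processor hands the accumulated batch to the exporter
      if (buffer.length) exporter(buffer.splice(0));
    },
  };
}

const shipped = [];
const pipeline = makePipeline((batch) => shipped.push(batch));
pipeline.emit({ body: 'a' });
pipeline.emit({ body: 'b' }); // second record fills the batch
console.log(shipped.length);  // → 1
```

The real SDK's `BatchLogRecordProcessor` plays the buffering role here, with timeouts and size limits on top.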

Setting Up Your First OpenTelemetry Logger

Let's look at a simple example of setting up OpenTelemetry logging in a Node.js application:

// Import required packages
const { logs, SeverityNumber } = require('@opentelemetry/api-logs');
const { LoggerProvider, SimpleLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');

// Create a resource that identifies your service
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: 'my-service',
  [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
});

// Create an exporter that sends logs to your backend
const exporter = new OTLPLogExporter({
  url: 'http://localhost:4318/v1/logs',
});

// Create a logger provider and attach a processor that forwards
// each record to the exporter as soon as it is emitted
const loggerProvider = new LoggerProvider({ resource });
loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(exporter));

// Register the logger provider globally
logs.setGlobalLoggerProvider(loggerProvider);

// Get a logger
const logger = logs.getLogger('my-logger');

// Log events
logger.emit({
  body: 'This is a test log message',
  severityNumber: SeverityNumber.INFO,
  severityText: 'INFO',
  attributes: {
    'request.id': '123456',
    'user.id': 'user-789',
  },
});

This basic setup handles log creation, processing, and export to an OpenTelemetry Protocol (OTLP) endpoint.

💡
If you’re setting up OpenTelemetry logging, it’s worth checking how environment variables work to keep your config clean and consistent.

Best Practices for OpenTelemetry Logging

To get the most out of OpenTelemetry logs, follow these proven approaches:

Structured Logging

Always use structured logging patterns rather than plain text:

// Not ideal
logger.info("User logged in: userId=123");

// Better - structured and parseable
logger.emit({
  body: "User login successful",
  severityNumber: SeverityNumber.INFO, // SeverityNumber from '@opentelemetry/api-logs'
  attributes: {
    'user.id': '123',
    'login.method': 'password',
    'client.ip': '192.168.1.1'
  }
});

Structured logs make it easier to search, filter, and analyze your data downstream.

Correlation with Traces and Metrics

One of OpenTelemetry's biggest strengths is connecting different telemetry signals. When emitting logs:

  1. Include trace IDs and span IDs in log attributes
  2. Use consistent attribute names across all telemetry types
  3. Add the same resource attributes to logs, metrics, and traces

This correlation powers better root cause analysis when issues occur.
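To make the idea concrete, here is a small sketch of attaching trace context to a log record. The `activeSpanContext()` helper is a hypothetical stand-in for `trace.getActiveSpan().spanContext()` from `@opentelemetry/api`, and the IDs are made up; the point is only that the trace and span IDs land in the record's attributes.

```javascript
// Hypothetical helper: in a real app this would come from the active
// span's context via the OpenTelemetry trace API.
function activeSpanContext() {
  return { traceId: '4bf92f3577b34da6a3ce929d0e0e4736', spanId: '00f067aa0ba902b7' };
}

// Build a log record that carries the current trace context, so the
// backend can link this log line to its trace.
function buildLogRecord(body, severityText, attributes = {}) {
  const ctx = activeSpanContext();
  return {
    timestamp: Date.now(),
    body,
    severityText,
    attributes: {
      ...attributes,
      'trace_id': ctx.traceId,
      'span_id': ctx.spanId,
    },
  };
}

const record = buildLogRecord('Payment authorized', 'INFO', { 'order.id': 'o-42' });
console.log(record.attributes.trace_id); // → 4bf92f3577b34da6a3ce929d0e0e4736
```

Note that when you emit through the OpenTelemetry Logs SDK inside an active span, the SDK can attach this context for you; the sketch just shows what ends up on the record.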

Log Level Management

Set appropriate log levels based on your environment:

| Environment | Recommended Log Level | Rationale |
| --- | --- | --- |
| Development | DEBUG or TRACE | Capture detailed information for local troubleshooting |
| Testing | DEBUG | Balance between information and performance |
| Staging | INFO | Mirror production settings to catch issues |
| Production | INFO or WARN | Focus on actionable events without overwhelming storage |

Remember that OpenTelemetry's severity levels follow a numerical scale from 1 (TRACE) to 24 (FATAL), with higher numbers representing more severe events.
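A short sketch of how that scale maps onto per-environment thresholds (the `minSeverityByEnv` table mirrors the recommendations above; the numeric values are the spec's range starting points):

```javascript
// OpenTelemetry SeverityNumber ranges (per the log data model):
// TRACE 1-4, DEBUG 5-8, INFO 9-12, WARN 13-16, ERROR 17-20, FATAL 21-24.
const SeverityNumber = { TRACE: 1, DEBUG: 5, INFO: 9, WARN: 13, ERROR: 17, FATAL: 21 };

// Minimum severity to emit per environment, mirroring the table above.
const minSeverityByEnv = {
  development: SeverityNumber.DEBUG,
  testing: SeverityNumber.DEBUG,
  staging: SeverityNumber.INFO,
  production: SeverityNumber.INFO,
};

// Emit only records at or above the environment's threshold;
// unknown environments fall back to INFO.
function shouldEmit(env, severityNumber) {
  return severityNumber >= (minSeverityByEnv[env] ?? SeverityNumber.INFO);
}

console.log(shouldEmit('production', SeverityNumber.DEBUG)); // → false
console.log(shouldEmit('production', SeverityNumber.ERROR)); // → true
```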

💡
Understanding how OpenTelemetry compares to OpenMetrics can help you choose the right tool for your observability setup—this comparison guide breaks it down.

OpenTelemetry Logging Tools Comparison

Several tools can help you implement and manage OpenTelemetry logs:

| Tool | Type | Key Strengths | Best For |
| --- | --- | --- | --- |
| Last9 | Managed observability platform | Budget-friendly event-based pricing; high-cardinality support; unified telemetry | Teams seeking cost-effective, scalable observability |
| Jaeger | Open-source tracing | Strong integration with OpenTelemetry; great visualization | Distributed tracing-focused teams |
| Prometheus | Open-source metrics | Powerful querying; native Kubernetes support | Metrics-heavy monitoring |
| Grafana | Visualization | Multi-source dashboards; alert management | Creating unified observability views |
| Elasticsearch | Search & analytics | Full-text search; log analytics | Complex log query requirements |

If you're looking for a managed solution that doesn’t cut corners on capabilities, Last9 stands out. It works well with OpenTelemetry and Prometheus, helping you unify all your telemetry data in one place. We've also scaled to monitor some of the largest live-streaming events in history. And with our event-based pricing model, costs stay predictable.

Probo Cuts Monitoring Costs by 90% with Last9

Implementing OpenTelemetry Collector for Logs

The OpenTelemetry Collector serves as a central processing point for all your telemetry data. For logs, it offers several key benefits:

  • Buffer logs during backend outages
  • Pre-process logs before they reach storage
  • Route logs to multiple destinations
  • Convert between logging formats

Here's a sample configuration for an OpenTelemetry Collector that processes logs:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  attributes:
    actions:
      - key: environment
        value: production
        action: upsert
  resourcedetection:
    detectors: [env, system]
    timeout: 2s

exporters:
  otlp:
    endpoint: observability-backend:4317
    tls:
      insecure: false
      cert_file: /certs/client.crt
      key_file: /certs/client.key
  file:
    path: /var/log/otel-collector/backup-logs.json

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch, attributes, resourcedetection]
      exporters: [otlp, file]

This configuration receives logs via OTLP, processes them in batches, adds environment information, and exports them to both your main observability backend and a local file backup.

Advanced OpenTelemetry Logging Techniques

Once you've mastered the basics, explore these advanced techniques:

Sampling Strategies

Log volumes can grow quickly in busy systems. Intelligent sampling lets you maintain visibility while controlling costs:

  • Head-based sampling: Makes sampling decisions when logs are created
  • Tail-based sampling: Makes decisions after aggregating related logs
  • Attribute-based sampling: Varies sampling rates based on log attributes

For example, you might sample 100% of error logs but only 10% of debug logs.
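That severity-weighted policy can be sketched in a few lines. This is an attribute-based example with invented names (`sampleLog`, the `rates` table); the random source is injectable so the decision is testable:

```javascript
// Attribute-based sampling sketch: always keep errors, keep a fixed
// fraction of lower-severity records. Severities missing from the
// rates table are kept by default.
function sampleLog(record, rates = { ERROR: 1.0, DEBUG: 0.1 }, rand = Math.random) {
  const rate = rates[record.severityText] ?? 1.0;
  return rand() < rate;
}

const errorLog = { severityText: 'ERROR', body: 'boom' };
const debugLog = { severityText: 'DEBUG', body: 'cache miss' };

console.log(sampleLog(errorLog, undefined, () => 0.99)); // → true (errors always kept)
console.log(sampleLog(debugLog, undefined, () => 0.5));  // → false (above the 10% rate)
```

In a real pipeline this decision would live in a log record processor or in the Collector, but the policy itself is this simple.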

Custom Log Processors

Create custom processors to modify, enrich, or filter logs:

class PiiRedactionProcessor {
  onEmit(logRecord) {
    // Redact PII in the log body
    if (logRecord.body && typeof logRecord.body === 'string') {
      logRecord.body = this.redactPii(logRecord.body);
    }

    // Also check string attributes
    if (logRecord.attributes) {
      for (const [key, value] of Object.entries(logRecord.attributes)) {
        if (typeof value === 'string') {
          logRecord.attributes[key] = this.redactPii(value);
        }
      }
    }
  }

  redactPii(text) {
    // Redact email addresses; add more patterns (credit card numbers,
    // phone numbers, etc.) as needed
    return text.replace(/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, '[EMAIL REDACTED]');
  }

  // The LogRecordProcessor interface also expects these lifecycle methods
  forceFlush() { return Promise.resolve(); }
  shutdown() { return Promise.resolve(); }
}

// Add to your logger provider
loggerProvider.addLogRecordProcessor(new PiiRedactionProcessor());

Custom processors help you enforce security and compliance requirements across your logging pipeline.

💡
Now, fix production OpenTelemetry log issues instantly—right from your IDE, with AI and Last9 MCP.

Common OpenTelemetry Logging Challenges and Solutions

Even with a well-designed logging system, challenges can arise:

High Log Volume Management

Challenge: OpenTelemetry makes it easy to generate lots of logs, which can lead to storage and cost issues.

Solution: Implement a tiered storage strategy:

  • Keep recent logs (7-30 days) in hot storage for quick access
  • Move older logs to cold storage for compliance and occasional access
  • Use aggregation to create summaries of log patterns over time
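The tier decision itself is just a function of log age. A minimal sketch, with illustrative thresholds (30 days hot, one year cold) rather than prescriptive ones:

```javascript
// Tiered routing sketch: pick a storage tier from a log's age.
const DAY_MS = 24 * 60 * 60 * 1000;

function storageTier(logTimestampMs, nowMs = Date.now()) {
  const ageDays = (nowMs - logTimestampMs) / DAY_MS;
  if (ageDays <= 30) return 'hot';
  if (ageDays <= 365) return 'cold';
  return 'expired'; // eligible for deletion or summary-only retention
}

const now = Date.UTC(2025, 3, 24);
console.log(storageTier(now - 5 * DAY_MS, now));   // → hot
console.log(storageTier(now - 90 * DAY_MS, now));  // → cold
console.log(storageTier(now - 400 * DAY_MS, now)); // → expired
```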

Ensuring Log Data Quality

Challenge: Poor log quality undermines the value of your entire observability stack.

Solution: Create a log quality scoring system:

  • Check for required attributes (service name, timestamp, severity)
  • Verify that structured data is properly formatted
  • Ensure trace context is properly propagated
  • Monitor the percentage of logs that meet quality standards
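A quality score can be as simple as the fraction of checks a record passes. This sketch uses invented check names and weights every check equally; real systems would weight and extend the list:

```javascript
// Log quality scoring sketch: each check contributes equally, and the
// score is the fraction of checks the record passes.
const checks = [
  (r) => typeof r.attributes?.['service.name'] === 'string', // required resource attr
  (r) => Number.isFinite(r.timestamp),                       // valid timestamp
  (r) => typeof r.severityText === 'string',                 // severity present
  (r) => r.attributes?.trace_id != null,                     // trace context propagated
];

function qualityScore(record) {
  const passed = checks.filter((check) => check(record)).length;
  return passed / checks.length;
}

const good = {
  timestamp: 1714000000000,
  severityText: 'INFO',
  attributes: { 'service.name': 'api', trace_id: 'abc123' },
};
console.log(qualityScore(good));               // → 1
console.log(qualityScore({ attributes: {} })); // → 0
```

Tracking the average score per service over time tells you which teams' logs are quietly degrading.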

Cross-Service Log Correlation

Challenge: In microservices, a single user request might generate logs across dozens of services.

Solution: Use trace context propagation:

  • Pass trace ID and span ID with every service call
  • Include these IDs in all log records
  • Configure your observability platform to link logs to traces
  • Create dashboards that show the full request journey
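The IDs that tie these logs together travel between services in the W3C `traceparent` header. A sketch of parsing it on the receiving side (format: `version-traceId-spanId-flags`):

```javascript
// W3C traceparent parsing sketch: each service reads this header from
// the incoming request and copies the IDs into its own log records.
function parseTraceparent(header) {
  const match = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!match) return null;
  return {
    traceId: match[2],
    spanId: match[3],
    sampled: (parseInt(match[4], 16) & 1) === 1, // low bit of the flags byte
  };
}

const ctx = parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
console.log(ctx.traceId); // → 4bf92f3577b34da6a3ce929d0e0e4736
console.log(ctx.sampled); // → true
```

In practice the OpenTelemetry propagators handle this for you; the sketch shows what they extract.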

Future Trends in OpenTelemetry Logging

The OpenTelemetry logging landscape continues to evolve. Keep an eye on these emerging trends:

  • Semantic conventions for logs: Standardized attribute names make correlation easier
  • AI-powered log analysis: Machine learning models that detect anomalies and suggest root causes
  • Extended log-trace-metric correlation: Deeper integration between all telemetry types
  • Native Kubernetes integration: Smoother collection of container and pod logs

Staying current with these trends will help your team build more observable, reliable systems.

Conclusion

OpenTelemetry logging represents a significant step forward in application observability. Standardizing how logs are collected, processed, and exported helps DevOps teams build more reliable, easier-to-troubleshoot systems.

💡
If you'd like to continue the conversation, join our Discord Community to connect with other engineers implementing OpenTelemetry logging in their systems.

FAQs

How do you collect logs in OpenTelemetry?

Collecting logs in OpenTelemetry involves:

  1. Instrumenting your application with the OpenTelemetry SDK
  2. Configuring log sources (application code, log files, system logs)
  3. Setting up processors for batching and filtering
  4. Deploying exporters that send logs to your observability backend

You can collect logs directly from code using the SDK or use the OpenTelemetry Collector to gather logs from existing files and systems without modifying applications.

What is the difference between OpenTelemetry and logging?

OpenTelemetry is a comprehensive observability framework that handles metrics, traces, and logs. Logs are just one type of telemetry data that OpenTelemetry can process. While traditional logging focuses only on capturing event records, OpenTelemetry provides a standard way to collect, process, and export all telemetry data with built-in correlation between signals.

Does telemetry include logs?

Yes, telemetry includes logs along with metrics and traces. These three signals form the core of modern observability:

  • Logs provide detailed records of events
  • Metrics offer aggregated numerical measurements
  • Traces show the path of requests through distributed systems

OpenTelemetry treats logs as a first-class telemetry signal that can be correlated with the other types.

What are the disadvantages of OpenTelemetry?

While OpenTelemetry offers many benefits, some challenges include:

  • Steeper learning curve compared to simpler logging frameworks
  • More complex initial setup, especially in multi-language environments
  • Ongoing development means some features may still be maturing
  • Potential for high data volumes that can increase storage costs
  • Additional runtime overhead if not configured properly

Many of these disadvantages can be mitigated with proper planning and implementation strategies.

What are OpenTelemetry Logs?

OpenTelemetry logs are standardized log records that follow the OpenTelemetry specification. They include:

  • Core log information (timestamp, severity, message)
  • Structured data as attributes
  • Resource information that identifies the source
  • Optional trace context for correlation
  • Standard semantic conventions for naming

This standardization makes logs more portable between different observability backends.
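Put together, a record following the data model looks roughly like this. Field names follow the model's OTLP-style JSON encoding; the values are illustrative:

```javascript
// Illustrative OTel-style log record (values are made up).
const logRecord = {
  timeUnixNano: '1714000000000000000',
  severityNumber: 9,               // INFO band starts at 9
  severityText: 'INFO',
  body: { stringValue: 'User login successful' },
  attributes: [
    { key: 'user.id', value: { stringValue: 'user-789' } },
  ],
  traceId: '4bf92f3577b34da6a3ce929d0e0e4736', // optional trace context
  spanId: '00f067aa0ba902b7',
};
console.log(logRecord.severityText); // → INFO
```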

How Do People Maintain Good Metrics?

Maintaining good metrics alongside logs involves:

  1. Defining clear service level objectives (SLOs) and indicators (SLIs)
  2. Using consistent naming conventions across services
  3. Balancing detail vs. volume (too many metrics becomes noisy)
  4. Setting appropriate aggregation levels
  5. Regularly reviewing dashboards to remove unused metrics
  6. Correlating metrics with logs and traces for context

Good metrics complement logs by providing aggregated views that help identify where to look in your logs.

What is the OpenTelemetry Collector?

The OpenTelemetry Collector is a vendor-agnostic component that receives, processes, and exports telemetry data. For logs, it serves as:

  • A central aggregation point
  • A processor for filtering, transforming, and enriching logs
  • A buffer during backend outages
  • A translator between different log formats
  • A router that can send logs to multiple destinations

The Collector can run as an agent on the same host as your application or as a gateway that receives data from multiple hosts.

How Does OpenTelemetry Support Logging?

OpenTelemetry supports logging through:

  1. A dedicated Logs SDK for direct instrumentation
  2. Log appenders/bridges for existing logging libraries
  3. Collectors that can ingest logs from files and other sources
  4. A standardized log data model
  5. Processors for common log manipulation tasks
  6. Exporters for sending logs to various backends
  7. Context propagation to link logs with traces

This multi-layered approach lets you adopt OpenTelemetry logging gradually based on your needs.
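The appender/bridge idea in point 2 can be sketched without any library: a thin adapter translates a legacy `log(level, message)` call into an OTel-style record. Here `makeBridge`, `levelToSeverity`, and the `emit` callback are all invented for illustration; real bridges hand the record to an OpenTelemetry Logger instead.

```javascript
// Bridge sketch: adapt a legacy logging call into an OTel-style record.
// Severity numbers are the spec's band starting points.
const levelToSeverity = { debug: 5, info: 9, warn: 13, error: 17 };

function makeBridge(emit) {
  return {
    log(level, message, fields = {}) {
      emit({
        timestamp: Date.now(),
        severityNumber: levelToSeverity[level] ?? 9, // default to INFO
        severityText: level.toUpperCase(),
        body: message,
        attributes: fields,
      });
    },
  };
}

const emitted = [];
const logger = makeBridge((record) => emitted.push(record));
logger.log('warn', 'Disk nearly full', { 'disk.path': '/var' });
console.log(emitted[0].severityNumber); // → 13
```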

What Are the Benefits of the OpenTelemetry Log Data Model?

The OpenTelemetry log data model provides:

  • Consistent structure across languages and platforms
  • Built-in support for structured data via attributes
  • Standard fields like timestamp, severity, and body
  • Resource information that identifies the log source
  • Easy correlation with traces and metrics
  • Compatibility with many observability backends
  • Future-proofing as the ecosystem evolves

This standardization simplifies log analysis and makes it easier to switch between tools.

How Does OpenTelemetry Collect Logs from Systems and Applications?

OpenTelemetry collects logs through several methods:

Code Instrumentation:

// Direct SDK usage
const logger = logs.getLogger('app-logger');
logger.emit({
  body: 'User logged in',
  severity: logs.SeverityNumber.INFO,
  attributes: { 'user.id': '12345' }
});

Log File Collection:

# Collector config for file logs
receivers:
  filelog:
    include: [ /var/log/*.log ]
    start_at: end
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)'

Bridge from Existing Loggers:

// Java example with the Log4j2 appender; ThreadContext carries the
// current trace ID into each log line
Logger logger = LogManager.getLogger("MyClass");
ThreadContext.put("trace_id", Span.current().getSpanContext().getTraceId());
logger.info("Processing request");

System Log Collection:

# Collector config for syslog
receivers:
  syslog:
    tcp:
      listen_address: "0.0.0.0:54527"
    protocol: rfc5424

How do you integrate OpenTelemetry logs with existing logging systems?

To integrate with existing logging systems:

  1. Use log appenders/bridges for your current framework (e.g., Log4j, Winston)
  2. Configure the OpenTelemetry Collector to read from existing log files
  3. Run both systems in parallel during migration
  4. Use processors to transform logs into the format your existing tools expect
  5. Consider a phased approach, starting with new services and gradually adding older ones

Many popular logging libraries already have OpenTelemetry integrations available.

How do you configure OpenTelemetry for log aggregation?

Configuring OpenTelemetry for log aggregation involves:

  1. Setting up the Collector with appropriate receivers for your log sources
  2. Configuring processors for batching and filtering to handle high volumes
  3. Using the load balancing exporter for horizontal scaling
  4. Setting appropriate buffer sizes and queue limits
  5. Implementing sampling strategies for high-volume environments
  6. Establishing a pipeline from edge collectors to a central aggregation tier

Here's a sample configuration for a scalable log aggregation setup:

receivers:
  otlp:
    protocols:
      grpc:
      http:
  filelog:
    include: [/var/log/application/*.log]

processors:
  batch:
    send_batch_size: 10000
    timeout: 5s
  memory_limiter:
    check_interval: 1s
    limit_mib: 2000
  filter:
    logs:
      log_record:
        # Drop records below WARN (severity_number 13); keeps WARN,
        # ERROR, and FATAL
        - 'severity_number < SEVERITY_NUMBER_WARN'

exporters:
  otlp:
    endpoint: aggregation-tier:4317
  loadbalancing:
    protocol:
      otlp:
        timeout: 10s
    resolver:
      dns:
        hostname: aggregation-tier.logs.svc.cluster.local
        port: 4317

service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [memory_limiter, filter, batch]
      exporters: [loadbalancing]


Authors

Anjali Udasi

Helping to make tech a little less intimidating. I love breaking down complex concepts into easy-to-understand terms.