Sep 2nd, ’24 · 6 min read

OpenTelemetry Filelog Receiver: Collecting Logs from Kubernetes

Learn to configure, optimize, and troubleshoot log collection from various sources including syslog and application logs. Discover advanced parser operator techniques for robust observability.

As an engineer who's wrestled with log collection in complex Kubernetes environments, I've found the OpenTelemetry filelog receiver to be an invaluable tool. In this article, I'll share what I've learned about using this component to streamline log collection in Kubernetes deployments.

Table of Contents

  1. OpenTelemetry and the Filelog Receiver
  2. Setting Up the Filelog Receiver in Kubernetes
  3. Understanding the Configuration
  4. Optimizing Performance
  5. Troubleshooting Common Issues
  6. Integrating with the OpenTelemetry Ecosystem
  7. Syslog and Other Log Sources
  8. Sending Logs to Last9 Levitate
  9. Advanced Parser Operator Techniques

OpenTelemetry and the Filelog Receiver

OpenTelemetry (OTel) is an open-source observability framework that's gaining traction in the cloud-native world. It provides a unified approach to collecting metrics, traces, and logs, making it a go-to solution for many engineering teams.

The filelog receiver is a key component of the OpenTelemetry Collector, responsible for reading log files and converting them into the OpenTelemetry log format.

The filelog receiver lives in the opentelemetry-collector-contrib repository on GitHub, which houses community-contributed components for the OpenTelemetry Collector.
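Before we get to Kubernetes specifics, here is the receiver in its simplest form: a minimal sketch where the path is just a placeholder for whatever files you want to tail.

receivers:
  filelog:
    # Placeholder path; point this at the files you want to tail
    include: [ /var/log/myapp/*.log ]
    # Only pick up new lines; use "beginning" to also read existing content
    start_at: end

Each matched file is tailed and checkpointed, and every line becomes an OpenTelemetry log record whose body is the raw text until operators parse it further.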

📑
For more detailed information about the filelog receiver and its configuration options, you can refer to the official OpenTelemetry documentation: OpenTelemetry Kubernetes Collector Components - Filelog Receiver.

Setting Up the Filelog Receiver in Kubernetes

Let's walk through setting up the filelog receiver to collect logs from a Kubernetes cluster:

  1. Deploy the OpenTelemetry Collector in your Kubernetes cluster using Helm:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

helm install my-otel-collector open-telemetry/opentelemetry-collector --set mode=daemonset

Running the collector as a DaemonSet puts an instance on every node, which is what lets it read the log files under /var/log/pods.
  2. Configure the filelog receiver using a YAML configuration file:
receivers:
  filelog:
    include: [ /var/log/pods/*/*/*.log ]
    start_at: beginning
    include_file_path: true
    operators:
      # Parse the container runtime (CRI) log line format
      - type: regex_parser
        regex: '^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<logtag>\w) (?P<log>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      # Parse the application log body when it is JSON; keep the entry otherwise
      - type: json_parser
        parse_from: attributes.log
        on_error: send
      # Extract Kubernetes metadata from the log file path
      - type: regex_parser
        parse_from: attributes["log.file.path"]
        regex: '^/var/log/pods/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_[^/]+/(?P<container_name>[^/]+)/.*\.log$'
      - type: move
        from: attributes.namespace
        to: resource["k8s.namespace.name"]
      - type: move
        from: attributes.pod_name
        to: resource["k8s.pod.name"]
      - type: move
        from: attributes.container_name
        to: resource["k8s.container.name"]

processors:
  batch:

exporters:
  otlp:
    endpoint: "otel-collector:4317"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]
  3. Apply the configuration to your collector release. With the Helm chart, the receivers, processors, exporters, and service sections above go under the chart's config key in a values file (a minimal values sketch follows below):
helm upgrade my-otel-collector open-telemetry/opentelemetry-collector -f otel-collector-values.yaml

This configuration tells the filelog receiver to read logs from all containers in the Kubernetes cluster, parse the log entries, and extract relevant metadata.
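If you're wondering what that values file looks like, here is a hedged sketch for the opentelemetry-collector chart; the file name, exporter endpoint, and preset usage are assumptions to adapt to your setup, and recent chart versions can wire up most of the filelog plumbing for you via the logsCollection preset.

# otel-collector-values.yaml (sketch)
mode: daemonset                    # one collector per node, so node log files are reachable
presets:
  logsCollection:
    enabled: true                  # mounts /var/log/pods and adds a filelog receiver
config:
  receivers:
    filelog:
      start_at: beginning          # merged over the chart's defaults
  exporters:
    otlp:
      endpoint: "otel-collector:4317"   # placeholder backend endpoint
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [otlp]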

📖
Check out our guide on using kubectl logs to view Kubernetes pod logs.

Understanding the Configuration

Let's break down some key elements:

  • The include field specifies which log files to read.
  • start_at: beginning tells the receiver to read existing file content from the start on its first run; the default, end, only picks up new lines.
  • The operators section defines how we parse and transform the logs.
  • The file-path regex_parser and the move operators pull Kubernetes metadata (namespace, pod, and container names) out of the file path and attach it to the log's resource.

Optimizing Performance

To handle high volumes of logs efficiently:

  1. Use the batch processor to reduce API calls to your backend.
  2. Implement a memory_limiter processor to prevent out-of-memory (OOM) crashes (see the pipeline sketch after this list for where it sits):
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1000
    spike_limit_mib: 200
  3. For very high log volumes, consider running multiple collector instances and sharding log collection across them.
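Putting these pieces together, a sensible logs pipeline runs memory_limiter first and batch last; the sketch below assumes the otlp exporter from earlier.

service:
  pipelines:
    logs:
      receivers: [filelog]
      # memory_limiter goes first so back-pressure kicks in before memory runs out
      processors: [memory_limiter, batch]
      exporters: [otlp]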

Troubleshooting Common Issues

Here are some issues I've encountered and how to resolve them:

  1. Missing logs: Check your include patterns and ensure they match your log file paths; the debug exporter sketch below helps confirm whether entries are being read at all.
  2. Parsing errors: Verify your regex patterns using online regex testers with sample log entries.
  3. High CPU usage: Review your operators and consider simplifying complex regex patterns.
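For the first two issues, a quick sanity check is to temporarily add the debug exporter to the logs pipeline and watch the collector's own output; this is a sketch to remove once you're done debugging.

exporters:
  debug:
    verbosity: detailed   # prints full log records to the collector's stdout

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp, debug]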

Integrating with the OpenTelemetry Ecosystem

The filelog receiver integrates seamlessly with other OpenTelemetry components. I often use it alongside:

  • The OTLP exporter, to ship logs to backends like Last9 Levitate, ClickHouse, and others
  • Custom processors for data enrichment or filtering (a small enrichment sketch follows this list)
  • Other receivers, to round out a complete observability picture
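As one example of the enrichment case, the resource processor can stamp every collected log with deployment metadata before it leaves the collector; the attribute value here is a placeholder to set per cluster or environment.

processors:
  resource/env:
    attributes:
      - key: deployment.environment
        value: production        # placeholder value
        action: upsert

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [memory_limiter, resource/env, batch]
      exporters: [otlp]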

Syslog and Other Log Sources

While we've focused on collecting application logs from Kubernetes pods, the OpenTelemetry filelog receiver is versatile enough to handle various log sources. One common log format you might encounter is syslog.

Here is how to configure the filelog receiver to collect syslog messages:

receivers:
  filelog/syslog:
    include: [/var/log/syslog]
    operators:
      - type: add
        field: attributes["log.source"]
        value: syslog

This configuration tags every entry with a log.source attribute using the `add` operator. To pull structured fields out of each line, add a parser operator; `regex_parser` works well here, letting you extract the timestamp, host, program, and message components.
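Here is a sketch of that parsing step. The pattern assumes traditional RFC 3164-style lines such as "Sep  2 10:15:04 host program[123]: message", so adjust it to whatever your distribution actually writes; the dedicated syslog_parser operator is another option when your lines strictly follow RFC 3164 or RFC 5424.

receivers:
  filelog/syslog:
    include: [/var/log/syslog]
    operators:
      # Assumes classic "Sep  2 10:15:04 host program[123]: message" lines
      - type: regex_parser
        regex: '^(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<hostname>\S+) (?P<program>[^:\[\s]+)(\[(?P<pid>\d+)\])?: (?P<message>.*)$'
      - type: add
        field: attributes["log.source"]
        value: syslog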

Sending Logs to Last9 Levitate

I've found it very easy to send logs collected by the OpenTelemetry filelog receiver to Last9 Levitate. Here's how to configure it:

  1. Update the exporters section in your OpenTelemetry Collector configuration:
exporters:
  otlp:
    endpoint: "otlp.last9.io:443"
    headers:
      Authorization: "Bearer <your-last9-api-key>"
    tls:
      insecure: false
  2. Ensure your service pipeline uses this exporter:
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]

Advanced Parser Operator Techniques

The parser operator is a powerful tool in the filelog receiver's arsenal. We've already seen its use with regex parsing, but let's explore some advanced techniques:

  1. Parsing JSON logs:
operators:
  - type: json_parser
    parse_from: body
    timestamp:
      parse_from: attributes.time
      layout: '%Y-%m-%dT%H:%M:%S.%fZ'
  2. Parsing key-value pairs:
operators:
  - type: key_value_parser
    parse_from: body
    delimiter: '='
  3. Copying the `log.file.path` attribute onto the resource:
operators:
  # Requires include_file_path: true on the receiver
  - type: copy
    from: attributes["log.file.path"]
    to: resource["log.file.path"]

These parser operator examples demonstrate how to handle different log formats and extract valuable metadata from your application logs.

How do I copy log.file.name from attributes to resource in the filelog receiver?

Use the move operator:

receivers:
  filelog:
    include: [/path/to/your/logs/*.log]
    operators:
      - type: move
        from: attributes["log.file.name"]
        to: resource["log.file.name"]

Is it possible to assign different labels based on the folder I'm parsing from?

Yes, using the router operator:

receivers:
  filelog:
    include:
      - /path/to/service1/*.log
      - /path/to/service2/*.log
    include_file_path: true
    operators:
      - type: router
        id: route_by_folder
        routes:
          - output: service1_attrs
            expr: 'attributes["log.file.path"] startsWith "/path/to/service1"'
          - output: service2_attrs
            expr: 'attributes["log.file.path"] startsWith "/path/to/service2"'
      - type: add
        id: service1_attrs
        field: attributes["service.name"]
        value: service1
        output: end
      - type: add
        id: service2_attrs
        field: attributes["service.name"]
        value: service2
        output: end
      # Explicit outputs keep the two add operators mutually exclusive
      - type: noop
        id: end

How can I collect logs from multiple log sources?

To collect logs from multiple sources, you can configure multiple filelog receivers or use include patterns. Here's an example that collects both Kubernetes pod logs and syslog:

receivers:
  filelog/pods:
    include: [/var/log/pods/*/*/*.log]
  filelog/syslog:
    include: [/var/log/syslog]

service:
  pipelines:
    logs:
      receivers: [filelog/pods, filelog/syslog]
      processors: [batch]
      exporters: [otlp]

This configuration allows you to collect and process logs from different sources, giving you a comprehensive view of your system's behavior.

Why is OpenTelemetry's log data model needed?

The OpenTelemetry log data model provides:

  1. Standardization across different log sources and backends
  2. Interoperability with metrics and traces
  3. Vendor neutrality
  4. Rich context through additional attributes

What are OpenTelemetry Logs?

OpenTelemetry Logs are structured log records conforming to the OpenTelemetry log data model. They typically include the following (a conceptual sketch follows this list):

  1. Timestamp
  2. Severity level
  3. Body (the actual log message)
  4. Attributes (key-value pairs for additional context)
  5. Resource information (details about the source of the log)
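Put together, a single record can be pictured roughly like this; it is a conceptual sketch with made-up values, not the literal OTLP wire format.

# Conceptual view of one OpenTelemetry log record (illustrative values)
timestamp: "2024-09-02T10:15:04.123Z"
severity_text: ERROR
body: "payment failed: card declined"
attributes:
  log.file.path: /var/log/pods/shop_checkout-7d9f/app/0.log
  http.method: POST
resource:
  k8s.namespace.name: shop
  k8s.pod.name: checkout-7d9f
  service.name: checkout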

Why implement this as an operator and not as a processor?

Operators offer:

  1. Granularity: Working at the individual log entry level
  2. Performance: Processing logs as they're read
  3. Flexibility: Building complex processing pipelines within the receiver itself (a side-by-side sketch follows this list)
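To make the distinction concrete, here is the same attribute added two ways; both snippets are illustrative sketches with placeholder names and values.

# Inside the receiver: the add operator runs on each entry as the line is read
receivers:
  filelog:
    include: [/var/log/myapp/*.log]
    operators:
      - type: add
        field: attributes["team"]
        value: payments

# In the pipeline: the attributes processor applies to batches from any receiver
processors:
  attributes/team:
    actions:
      - key: team
        value: payments
        action: upsert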

How do I configure the filelog receiver to monitor log files from multiple services?

Use multiple filelog receivers:

receivers:
  filelog/service1:
    include: [/path/to/service1/*.log]
    operators:
      - type: add
        field: attributes["service.name"]
        value: service1
  filelog/service2:
    include: [/path/to/service2/*.log]
    operators:
      - type: add
        field: attributes["service.name"]
        value: service2

service:
  pipelines:
    logs:
      receivers: [filelog/service1, filelog/service2]
      processors: [batch]
      exporters: [otlp]

How do I configure the OpenTelemetry filelog receiver to collect logs from multiple files?

Use the include and exclude fields:

receivers:
  filelog:
    include:
      - /path/to/logs/*.log
      - /another/path/to/logs/*.log
    exclude:
      - /path/to/logs/excluded.log
    start_at: beginning
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<severity>\S+) (?P<message>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.severity

Conclusion

The OpenTelemetry filelog receiver, part of the opentelemetry-collector-contrib repository, has become an indispensable tool in my Kubernetes observability toolkit. Its flexibility in handling various log sources, powerful parser operators, and seamless integration with the broader OpenTelemetry ecosystem make it a robust solution for log collection in complex, containerized environments.

Remember, observability is a journey, not a destination. Whether you're dealing with application logs, syslog, or other log sources, keep experimenting, optimizing, and sharing your experiences with the community. The open-source nature of OpenTelemetry means we all benefit from each other's learnings and contributions.

Integrating the OpenTelemetry filelog receiver with backends such as Last9 Levitate creates a robust observability pipeline, offering valuable insights into your Kubernetes applications. Make sure to keep your API keys secure and periodically review your log collection setup to capture all the essential data.

Happy logging, and may your systems always be observable!

📉
Get started with Levitate! Know more about how we unlocked high cardinality monitoring for live streaming giants with 40M+ concurrent users! Schedule a demo with us!

Authors

Prathamesh Sonpatki

Prathamesh works as an evangelist at Last9, runs SRE stories - where SRE and DevOps folks share their stories, and maintains o11y.wiki - a glossary of all terms related to observability.
