
.NET Logging with Serilog and OpenTelemetry

Bring structure and trace context to your .NET logs by combining Serilog with OpenTelemetry for better debugging and observability.

May 21st, ’25

Debugging modern .NET apps isn’t as simple as scanning logs anymore. With services spread out and systems growing more complex, it's easy to miss the bigger picture. Serilog gives you clean, structured logs. OpenTelemetry brings in traces and metrics to connect the dots.

This guide covers how to wire up Serilog with OpenTelemetry, correlate logs with traces, and build an observability setup that helps you troubleshoot without digging through disconnected logs for hours.

Why Serilog Works Well with OpenTelemetry

Serilog’s great for making .NET logs structured and easy to query. Most folks use it because it just fits into the .NET workflow without much fuss. OpenTelemetry, meanwhile, helps you stitch logs, traces, and metrics together—so you're not flipping between tools trying to figure out what went wrong.

Using both means you get clean logs and the bigger picture. You can see what happened, where it happened, and how it fits into the rest of your system—all without rewriting your whole logging setup.

How Serilog Brought Structure to .NET Logging

Logging used to mean writing plain text messages. It worked—until you needed to search for something, filter logs, or build alerts. That’s where Serilog changed things for .NET developers.

Instead of writing logs like this:

_logger.Information("User 123 purchased 5 items for $200");

You write:

_logger.Information("User {UserId} purchased {ItemCount} items for {TotalAmount}", 
    userId, itemCount, totalAmount);

Here’s what’s happening:

  • The message is still human-readable.
  • But behind the scenes, Serilog stores UserId, ItemCount, and TotalAmount as separate fields.
  • That makes the logs structured, not just one long string.

Why does this matter? Because now your logs are easier to work with:

  • You can search for all purchases over $100.
  • Filter logs by user ID.
  • Build dashboards or alerts based on log data.

In short, structured logs give you clean, searchable data without making your logging code more complicated.
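
To make this concrete, here’s a minimal Serilog bootstrap (a sketch; the console sink and compact JSON formatter are just one common choice, from the Serilog.Sinks.Console and Serilog.Formatting.Compact packages) that keeps each property as its own field:

using Serilog;
using Serilog.Formatting.Compact;

// Structured events go to the console as compact JSON, one field per property.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .WriteTo.Console(new CompactJsonFormatter())
    .CreateLogger();

// UserId, ItemCount, and TotalAmount are emitted as separate fields,
// not baked into a single string.
Log.Information("User {UserId} purchased {ItemCount} items for {TotalAmount}",
    123, 5, 200m);

Log.CloseAndFlush();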

💡
If you're already using Serilog for structured logs, you might also want to see how Loki handles log management at scale.

OpenTelemetry: One Framework for All Your Observability Data

Modern systems generate a lot of signals—logs, metrics, and traces—each offering a different view of what’s happening. The problem? They’re often handled by different tools, with different formats and setups.

OpenTelemetry fixes that. It provides a single framework to collect and work with:

  • Logs – Useful for capturing events, errors, and application messages. Think of them as the breadcrumbs left behind as your app runs.
  • Metrics – Numeric data that helps you track system health over time, like request rates, memory usage, or error counts.
  • Traces – Show the journey of a request as it moves through different services, helping you understand latency, bottlenecks, and failures.

By standardizing how you instrument code and export telemetry, OpenTelemetry saves you from wiring up separate solutions for each signal.

And OpenTelemetry is backed by major cloud providers and observability platforms (like Last9), making it a safe bet for anyone building or maintaining distributed systems.

Integrating Serilog with OpenTelemetry for Unified Telemetry

To make logs, traces, and metrics useful—not just data sitting in silos—you need a proper flow. Here’s how to wire everything together:

Step 1: Run the OpenTelemetry Collector

The OpenTelemetry Collector acts as a middleman—it receives telemetry from your app and forwards it to your observability backend (like Last9).

Here’s a setup using Docker Compose:

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel/config.yaml"]   # point the Collector at the mounted config
    volumes:
      - ./otel-collector-config.yaml:/etc/otel/config.yaml
    ports:
      - "4317:4317"   # gRPC endpoint for OTLP

The Collector listens for OTLP data over gRPC and routes it based on your config.

Now, a basic config file to get logs flowing:

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  otlp:
    endpoint: "last9-endpoint:4317"
    headers:
      api-key: "your-api-key"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

This tells the Collector: "Accept logs over OTLP, batch them, and send them to my backend."

You can later extend this to handle metrics and traces by adding those pipelines, but keeping it minimal to start makes debugging easier.

Step 2: Send Structured Logs with Serilog

Once the Collector is up, you can push logs from your .NET app using Serilog’s OpenTelemetry sink (the Serilog.Sinks.OpenTelemetry package):

Log.Logger = new LoggerConfiguration()
    .WriteTo.OpenTelemetry(options =>
    {
        options.Endpoint = "http://localhost:4317";
        options.Protocol = OtlpProtocol.Grpc;
    })
    .CreateLogger();

This hooks Serilog into the Collector using the OTLP exporter. You still get all the benefits of structured logging, but now your logs are part of a unified pipeline.

The logs aren’t just strings—they’re rich, queryable data that can carry tags like service.name, environment, and more. That context is key when you're debugging across services.
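
If you want resource-level tags like service.name attached to every exported record, the sink lets you set them once at configuration time. A sketch, assuming the Serilog.Sinks.OpenTelemetry options and using placeholder attribute values:

Log.Logger = new LoggerConfiguration()
    .WriteTo.OpenTelemetry(options =>
    {
        options.Endpoint = "http://localhost:4317";
        options.Protocol = OtlpProtocol.Grpc;
        // Resource attributes ride along with every log record; values are placeholders.
        options.ResourceAttributes = new Dictionary<string, object>
        {
            ["service.name"] = "checkout-api",
            ["deployment.environment"] = "production"
        };
    })
    .CreateLogger();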

Step 3: Add Metrics and Traces (Optional but Worth It)

To round out your observability setup, you can wire in tracing and metrics:

// Requires OpenTelemetry.Extensions.Hosting plus the ASP.NET Core, HttpClient,
// and OTLP exporter instrumentation packages.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddOtlpExporter(o => o.Endpoint = new Uri("http://localhost:4317")))
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(o => o.Endpoint = new Uri("http://localhost:4317")));

These SDKs automatically capture HTTP calls, response times, and request flows. You’ll start seeing spans and metrics without needing to manually instrument everything.

Why It All Works Better Together

This setup gives you a consistent, flexible pipeline:

  • Serilog handles structured, readable logs.
  • OpenTelemetry captures traces and metrics.
  • The Collector brings everything together and forwards it wherever you need—be it Last9, Prometheus, or another backend.

The best part? All three signals (logs, metrics, traces) carry the same metadata.

💡
If you're weighing observability tools, this comparison of CloudWatch and OpenTelemetry breaks down the trade-offs and helps you decide what fits your stack.

One of the biggest advantages of combining Serilog with OpenTelemetry is the ability to correlate logs with traces. This gives you more than just isolated data points—you get context. That means faster debugging, clearer root causes, and a better understanding of what your app is doing.

Let’s walk through a few ways to make that happen.

Correlating Logs with Traces

There are two main ways to add trace context to your logs: manual and automatic.

1. Manual Correlation

The simplest option is to grab the current activity and add the trace and span IDs to each log:

var activity = Activity.Current;
if (activity != null)
{
    _logger.Information("Processing request {TraceId} {SpanId}", 
        activity.TraceId, activity.SpanId);
}

This works fine, especially if you only need trace info in a few places. But it does mean repeating this logic throughout your code.
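
Keep in mind that Activity.Current is only populated while a span is active; outside of ASP.NET Core’s automatic instrumentation, something has to start one. A sketch, using a hypothetical ActivitySource named "MyApp" that you’d register with AddSource("MyApp") in your tracing setup:

using System.Diagnostics;

// Hypothetical source; register it via .AddSource("MyApp") in WithTracing(...)
// so the SDK actually listens for its spans.
private static readonly ActivitySource MyAppSource = new("MyApp");

public void ProcessOrder(string orderId)
{
    // StartActivity returns null if nothing is listening, hence the null-conditional below.
    using var activity = MyAppSource.StartActivity("ProcessOrder");

    _logger.Information("Processing order {OrderId} in trace {TraceId}",
        orderId, activity?.TraceId);
}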

2. Automatic Correlation with Serilog Enrichers

If you want to avoid boilerplate, you can use the Serilog.Enrichers.Span package. It automatically adds TraceId and SpanId to every log, based on the current OpenTelemetry Activity.

dotnet add package Serilog.Enrichers.Span

Then update your Serilog configuration:

Log.Logger = new LoggerConfiguration()
    .Enrich.WithSpan()
    .WriteTo.OpenTelemetry(/* config */)
    .CreateLogger();

Now every log entry will include trace metadata automatically—no manual code needed.

Writing Logs as Trace Events

If you want to go a step further, you can write logs directly into trace spans as events. This way, when you view a trace in your observability backend, you’ll see the logs inline with the span timeline.

Here’s a simple custom Serilog sink that does this:

using System.Diagnostics;
using System.Linq;
using Serilog.Core;
using Serilog.Events;

public class TraceEventSink : ILogEventSink
{
    public void Emit(LogEvent logEvent)
    {
        // Only attach the log if there's an active span to attach it to.
        var activity = Activity.Current;
        if (activity != null)
        {
            var attributes = logEvent.Properties.ToDictionary(
                prop => prop.Key,
                prop => (object)prop.Value.ToString());

            // Record the rendered message as an event on the current span,
            // using the log's own timestamp so it lines up with the trace timeline.
            activity.AddEvent(new ActivityEvent(
                logEvent.RenderMessage(),
                logEvent.Timestamp,
                new ActivityTagsCollection(attributes)));
        }
    }
}

And in your Serilog setup:

Log.Logger = new LoggerConfiguration()
    .WriteTo.Sink(new TraceEventSink())
    .WriteTo.OpenTelemetry()
    .CreateLogger();

With this in place, your logs show up in two places:

  • Your logs backend, as usual.
  • Inside trace spans, giving you a step-by-step timeline of what happened during a request.

This setup turns observability from a pile of disconnected data into a coherent story about your system’s behavior.

💡
Using Serilog with OpenTelemetry? You’ll also want to understand the role of the OpenTelemetry Collector vs Exporter and how each fits into your pipeline.

Practical Scenarios for Using Serilog with OpenTelemetry

Serilog and OpenTelemetry are especially useful when you need to move quickly from “something’s broken” to “here’s exactly what went wrong.” Below are two common scenarios where this setup saves serious time and frustration.

Scenario 1: Debugging Microservice Failures

You’re working with a system made up of several microservices. A request comes in, something breaks, and all you’ve got is a vague error in the logs. You’re left guessing which service caused the issue.

With traditional logging, you’d have to jump between logs from different services, trying to line things up by timestamp—hoping they tell a consistent story.

With Serilog and OpenTelemetry, the process is much more straightforward:

  • You start by grabbing the trace ID for the failed request.
  • Then you can view the entire trace, seeing how the request moved across services and where it stalled or failed.
  • All the logs tied to that trace ID are automatically pulled in, no matter which service they came from.
  • That makes it easy to spot the exact service and operation where things went wrong.

Scenario 2: Investigating Performance Issues

Let’s say your app feels slow, but you don’t yet know what’s behind it. Maybe a particular endpoint is lagging, or maybe it’s something deeper in the stack.

Here’s how Serilog and OpenTelemetry help you break it down:

  • Start by looking at metrics to see which endpoints are responding slowly or spiking in latency.
  • Then zoom into traces for those requests to see how long each service or component is taking.
  • Finally, use the trace ID to pull the related logs—so you can see what the app was doing at each step: whether it was stuck waiting on a DB call, retrying something, or logging a warning you might’ve missed.

You get the high-level view (metrics), the request journey (traces), and the fine-grained details (logs)—all tied together.

💡
Understanding how histograms work in OpenTelemetry can help you capture latency and performance patterns more effectively.

How to Minimize Overhead in Production Telemetry

While Serilog and OpenTelemetry are powerful tools, using them in production environments requires some care, especially in systems with high throughput or strict latency budgets.

Unbounded logging, full trace capture, or aggressive flush intervals can add unnecessary load. It’s important to strike a balance between visibility and performance.

Here are a few key areas to optimize:

Key Considerations

  • Log Levels
    Use appropriate log levels. Keep Information for key events and reserve Debug for local or test environments; avoid overly verbose logging in production (see the sketch after this list).
  • Batching
    Batching helps reduce the overhead of frequent exports. Configure exporters to batch data and flush at regular intervals instead of sending each event individually.
  • Sampling
    For traces, consider probabilistic sampling. Capturing every trace might be useful in dev, but it doesn’t scale. Sampling ensures you retain enough context without overwhelming your backend.
  • Buffering
    Exporters should be configured with adequate buffers to handle spikes. Under-provisioned buffers can lead to dropped data under load.
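
For the log-level point above, here’s a minimal sketch of a production-leaning Serilog configuration. The override namespaces are illustrative; LogEventLevel comes from Serilog.Events:

Log.Logger = new LoggerConfiguration()
    // Information as the baseline, with noisy framework categories dialed down.
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
    .MinimumLevel.Override("System.Net.Http", LogEventLevel.Warning)
    .WriteTo.OpenTelemetry(options => options.Endpoint = "http://localhost:4317")
    .CreateLogger();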

Optimizing for High-Traffic Systems

In applications with high request volumes, default configurations often fall short. Consider:

  • Trace Sampling: A simple example using ratio-based sampling:

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing =>
    {
        tracing.SetSampler(new TraceIdRatioBasedSampler(0.1)); // keep roughly 10% of traces
        // other configuration
    });

This reduces trace volume while still providing useful insights.

  • Log Filtering: Introduce filters to discard low-signal logs in production. Focus on logs that provide actionable value.
  • Tuning Batch Settings: Adjust batch size and flush intervals to suit your traffic profile. Flushing too frequently adds overhead; flushing too infrequently risks delays or data loss during shutdown. The sketch below shows where these knobs live.
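
As one example of where these knobs live, the OTLP exporter in the .NET OpenTelemetry SDK exposes batch processor options. The values below are illustrative starting points, not recommendations:

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddOtlpExporter(o =>
        {
            o.Endpoint = new Uri("http://localhost:4317");
            // Export spans in batches rather than one at a time; numbers are illustrative.
            o.ExportProcessorType = ExportProcessorType.Batch;
            o.BatchExportProcessorOptions.MaxQueueSize = 4096;               // buffer for traffic spikes
            o.BatchExportProcessorOptions.MaxExportBatchSize = 512;          // spans per export call
            o.BatchExportProcessorOptions.ScheduledDelayMilliseconds = 5000; // flush interval
        }));
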
💡
Fix production .NET log issues instantly, right from your IDE, with AI and Last9 MCP. Bring real-time production context (logs, metrics, and traces) into your local environment.

Practical Patterns for Using Serilog with OpenTelemetry in Production

Beyond basic setup, certain implementation patterns can help you get more out of Serilog and OpenTelemetry, especially when it comes to scaling, clarity, and relevance of your logs. Below are two patterns that can make a real impact in production systems.

Add Context to Logs Without Repeating Yourself

Logs are far more useful when they include context: identifiers like OrderId, CustomerId, or UserId. But you don’t want to repeat those in every log line manually.

Instead, use Serilog’s contextual logging to attach properties that automatically apply to all logs within a specific scope:

using Serilog.Context;

using (LogContext.PushProperty("OrderId", orderId))
using (LogContext.PushProperty("CustomerId", customerId))
{
    _logger.Information("Processing order");
    // Every log here will include OrderId and CustomerId automatically
}

This pattern is especially useful in request pipelines, background workers, or anything tied to a logical unit of work. It makes filtering and debugging much easier, without adding noise to every log call.
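
In an ASP.NET Core pipeline, the same idea is often applied once in a middleware so every log for a request carries the same identifier. A sketch, with a hypothetical correlation header:

// Hypothetical middleware: pushes a correlation ID onto the log context
// for the duration of each request.
app.Use(async (context, next) =>
{
    var correlationId = context.Request.Headers["X-Correlation-Id"].FirstOrDefault()
                        ?? Guid.NewGuid().ToString();

    using (LogContext.PushProperty("CorrelationId", correlationId))
    {
        await next();
    }
});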

Route Critical and Non-Critical Logs Separately

Not all logs are equal. Some deserve to be retained longer or sent to a more robust backend, while others are mainly for short-term visibility or debugging.

You can configure Serilog to route logs by severity, using different filters and exporters for each stream:

Log.Logger = new LoggerConfiguration()
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(e => e.Level >= LogEventLevel.Error)
        .WriteTo.OpenTelemetry(/* high-priority sink */))
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(e => e.Level < LogEventLevel.Error)
        .WriteTo.OpenTelemetry(/* lower-priority sink */))
    .CreateLogger();

This gives you more control over log storage, cost, and noise. For example, you might keep critical errors in a long-retention backend, while routing info/debug logs to a cheaper, short-term store, or even dropping them in high-traffic paths.

Conclusion

Tying Serilog and OpenTelemetry together gives you more than just logs or traces—it gives you connected, meaningful context across your application.

To get the most out of this setup, you’ll need a backend that handles logs, metrics, and traces without adding overhead. That’s where our platform, Last9, fits naturally. It works seamlessly with OpenTelemetry and lets you focus on what the data mean, not how to wire it all together.

Talk to us to learn more, or get started for free today!

FAQs

How is Serilog different from traditional .NET logging frameworks?
Serilog uses structured logging, where log data is captured as key-value pairs instead of plain strings. This makes logs easier to query and analyze, unlike traditional tools like log4net or NLog, which rely heavily on text formatting.

Can I use Serilog with OpenTelemetry in both .NET Framework and .NET Core?
Yes. It works with both, but the setup differs slightly. .NET Core and .NET 5+ have more built-in support, while .NET Framework may require additional compatibility packages.

Does OpenTelemetry add overhead to my application?
When configured correctly—with batching and sampling—the performance overhead is minimal (typically 1–3%). For most applications, the added visibility far outweighs the cost.

How can I filter or redact sensitive data from logs before export?
You can use Serilog’s filters to drop sensitive events before they ever reach the exporter. For example, excluding any event whose property values mention a password:

Log.Logger = new LoggerConfiguration()
    // Drop events that carry a property value containing "password".
    .Filter.ByExcluding(logEvent => logEvent.Properties.Values
        .Any(v => v.ToString().Contains("password", StringComparison.OrdinalIgnoreCase)))
    .WriteTo.OpenTelemetry(options => options.Endpoint = "http://localhost:4317")
    .CreateLogger();

For finer-grained redaction (masking individual fields rather than dropping whole events), a custom enricher or destructuring policy works as well. Either way, fields like passwords or tokens never get exported.

Can I gradually migrate to Serilog and OpenTelemetry?
You can run Serilog and your existing logging setup side by side. Start by integrating Serilog into a few components, monitor the results, and roll it out further as you gain confidence in the new system.
