
RUM Session Correlation

Propagate RUM session IDs end-to-end across backend services and async workers so you can filter logs, traces, and spans by browser session.

RUM session correlation lets you take a session ID from the browser and trace it all the way through your backend — across HTTP service calls, async queues, and workers. Once set up, you can filter logs, traces, and spans by session ID anywhere in your stack.

How It Works

L9RUM sends a W3C baggage header alongside traceparent on every outgoing request. Backend services extract this baggage, attach it to spans and logs, and forward it to any downstream services — including async message queues.

    Browser (L9RUM)
      ── traceparent + baggage: session.id=abc ──► API Service
           ├── span attribute: session.id=abc
           ├── log field: session.id=abc
           └── SQS message attribute: baggage=session.id=abc
                 └──► Worker Service
                        ├── span attribute: session.id=abc
                        └── log field: session.id=abc

Prerequisites

  • L9RUM SDK initialized with network.backendCorrelation.enabled: true
  • Backend services instrumented with OpenTelemetry (Node.js guides: Express, NestJS, Node.js)

Setup

  1. Configure L9RUM

    Enable baggage propagation and add session.id to the allowed keys. Set the session ID value once the SDK initializes.

    L9RUM.init({
      baseUrl: "YOUR_BASE_URL",
      headers: { clientToken: "YOUR_CLIENT_TOKEN" },
      resourceAttributes: {
        serviceName: "your-frontend-app",
        deploymentEnvironment: "production",
      },
      network: {
        backendCorrelation: {
          enabled: true,
          injectToAllRequests: true,
          baggage: {
            enabled: true,
            allowedKeys: ["session.id"],
          },
        },
      },
    });

    // Set the session ID — use any stable identifier for this browser session
    L9RUM.spanAttributes({
      "session.id": getYourSessionId(),
    });

    L9RUM will include a baggage: session.id=<value> header on every fetch and XHR request from that point on.
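To make the wire format concrete, here is a sketch of what an outgoing browser request carries and how the baggage header could be parsed by hand. The header values are made-up examples, and the parser is illustrative only; on the backend, W3CBaggagePropagator does this parsing for you.

```typescript
// Illustrative headers: what a fetch/XHR request carries once baggage
// propagation is enabled. The traceparent value is a made-up example.
const outgoingHeaders = {
  traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
  baggage: "session.id=abc123",
};

// Hand-rolled parser for the W3C baggage format ("key=value,key2=value2").
// Shown only to explain the format; W3CBaggagePropagator handles this in practice.
function parseBaggage(header: string): Record<string, string> {
  const entries: Record<string, string> = {};
  for (const pair of header.split(",")) {
    const [key, ...rest] = pair.trim().split("=");
    if (key && rest.length > 0) entries[key] = decodeURIComponent(rest.join("="));
  }
  return entries;
}

const parsed = parseBaggage(outgoingHeaders.baggage);
// parsed["session.id"] === "abc123"
```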

  2. Configure Backend Services

    Each backend service that receives requests from the browser (or from another service that forwarded the baggage) needs three additions to its OTel setup:

    1. W3CBaggagePropagator — parses the baggage header on incoming requests and forwards it on outgoing calls
    2. BaggageSpanProcessor — promotes baggage entries to span attributes so they appear in traces
    3. logHook on WinstonInstrumentation — injects baggage entries into every Winston log record automatically, alongside trace_id and span_id

    Update instrumentation.ts / instrumentation.js in each service:

import {
  CompositePropagator,
  W3CTraceContextPropagator,
  W3CBaggagePropagator,
} from '@opentelemetry/core';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor, SpanProcessor, Span } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { context, propagation } from '@opentelemetry/api';
import type { Context } from '@opentelemetry/api';

// Promotes all baggage entries to span attributes
class BaggageSpanProcessor implements SpanProcessor {
  onStart(span: Span, parentContext: Context): void {
    const baggage = propagation.getBaggage(parentContext ?? context.active());
    if (!baggage) return;
    for (const [key, entry] of baggage.getAllEntries()) {
      span.setAttribute(key, entry.value);
    }
  }
  onEnd(): void {}
  forceFlush(): Promise<void> { return Promise.resolve(); }
  shutdown(): Promise<void> { return Promise.resolve(); }
}

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BaggageSpanProcessor());
provider.addSpanProcessor(new BatchSpanProcessor(new OTLPTraceExporter()));
provider.register({
  propagator: new CompositePropagator({
    propagators: [
      new W3CTraceContextPropagator(),
      new W3CBaggagePropagator(),
    ],
  }),
});

registerInstrumentations({
  instrumentations: [
    getNodeAutoInstrumentations({
      '@opentelemetry/instrumentation-fs': { enabled: false },
      // Runs after trace_id/span_id are injected — adds baggage to every log record
      '@opentelemetry/instrumentation-winston': {
        logHook: (_span, record) => {
          const baggage = propagation.getBaggage(context.active());
          if (!baggage) return;
          for (const [key, entry] of baggage.getAllEntries()) {
            record[key] = entry.value;
          }
        },
      },
    }),
  ],
});

Once registered, OTel handles propagation automatically:

  • Incoming requests: the baggage header is parsed and stored in the active context
  • Outgoing HTTP calls: the baggage header is forwarded to downstream services
  • Every Winston log line: baggage entries (e.g. session.id) are injected alongside trace_id and span_id via logHook — no changes to your logger or middleware needed
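With the logHook in place, each Winston log record ends up shaped roughly like this (the field values below are illustrative, not real output):

```json
{
  "level": "info",
  "message": "payment authorized",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "session.id": "abc123"
}
```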
  3. Propagate Through SQS

    When a backend service publishes to SQS, it must inject the current context (including baggage) into the message attributes. The consumer extracts it before processing.

    SQS allows up to 10 MessageAttributes per message. traceparent, tracestate, and baggage count as 3 toward this limit.
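Because propagation consumes up to three of the ten attribute slots, a small preflight check can catch overflows before SQS rejects the message. This is a sketch; assertAttributeBudget is a hypothetical helper name, not part of any SDK.

```typescript
// Sketch: fail fast if user attributes plus propagation attributes would
// exceed SQS's 10-MessageAttribute limit. "assertAttributeBudget" is a
// hypothetical helper, not part of the AWS SDK or OpenTelemetry.
const PROPAGATION_SLOTS = 3; // traceparent, tracestate, baggage

function assertAttributeBudget(userAttributes: Record<string, unknown>): void {
  const total = Object.keys(userAttributes).length + PROPAGATION_SLOTS;
  if (total > 10) {
    throw new Error(`SQS MessageAttributes limit exceeded: ${total} > 10`);
  }
}
```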

    Producer — inject on send

    import { propagation, context } from '@opentelemetry/api';
    import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

    const sqs = new SQSClient({});

    async function sendMessage(queueUrl: string, body: object) {
      const carrier: Record<string, string> = {};
      propagation.inject(context.active(), carrier); // injects traceparent, tracestate, baggage

      const messageAttributes: Record<string, { DataType: string; StringValue: string }> = {};
      for (const [key, value] of Object.entries(carrier)) {
        messageAttributes[key] = { DataType: 'String', StringValue: value };
      }

      await sqs.send(new SendMessageCommand({
        QueueUrl: queueUrl,
        MessageBody: JSON.stringify(body),
        MessageAttributes: messageAttributes,
      }));
    }

    Consumer — extract on receive

    import { propagation, context, trace, SpanKind } from '@opentelemetry/api';

    const tracer = trace.getTracer('worker');

    async function processMessage(message: { MessageAttributes?: Record<string, any> }) {
      const carrier: Record<string, string> = {};
      for (const [key, attr] of Object.entries(message.MessageAttributes ?? {})) {
        // Handle both Lambda ESM format (stringValue) and SDK format (StringValue)
        const value = attr.stringValue ?? attr.StringValue;
        if (value) carrier[key] = value;
      }

      const parentCtx = propagation.extract(context.active(), carrier);
      await context.with(parentCtx, async () => {
        const baggage = propagation.getBaggage(context.active());
        const sessionId = baggage?.getEntry('session.id')?.value;

        await tracer.startActiveSpan('process_message', { kind: SpanKind.CONSUMER }, async (span) => {
          if (sessionId) span.setAttribute('session.id', sessionId);
          // ... processing logic
          span.end();
        });
      });
    }

    SNS → SQS

    When messages flow through SNS before reaching SQS, inject baggage on the SNS publish call the same way as the SQS producer above. SNS forwards MessageAttributes to subscribed SQS queues unchanged, so the consumer extraction code works without modification.

Verification

  1. Open the app in a browser and perform an action that triggers a backend request

  2. In Last9, open a trace for that request — the root span should have a session.id attribute

  3. Find a downstream span (auth service, internal API) — it should also carry session.id

  4. If using SQS, find a worker span — session.id should appear there too

  5. Filter logs by session.id to see all log lines across services for a single browser session


Troubleshooting

  • Services without W3CBaggagePropagator registered will silently drop the baggage header. Every service in the call chain needs it.
  • Background jobs and queue consumers that originate independently (no browser session upstream) will have no session.id. Always handle the undefined case in your logging middleware.
  • Keep baggage lean. Each key in allowedKeys is sent on every outgoing browser request. The W3C spec recommends staying well under 8 KB total.
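For the undefined case above, a defensive enrichment helper might look like the following. This is a sketch under assumptions: withSessionId and the plain-object record shape are ours, not part of any SDK.

```typescript
// Sketch: attach session.id to a log record only when a browser session
// actually exists upstream (it won't for cron jobs or independent consumers).
// "withSessionId" is a hypothetical helper name.
function withSessionId(
  record: Record<string, unknown>,
  sessionId: string | undefined,
): Record<string, unknown> {
  // Skip the field entirely rather than logging an undefined value
  return sessionId ? { ...record, "session.id": sessionId } : record;
}
```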

Please get in touch with us on Discord or via email if you have any questions.