TL;DR:
- The unroll processor splits single log records containing multiple events (like JSON arrays) into separate records — one per event
- Available in OTel Collector Contrib v0.137.0 at Alpha stability
- Minimal configuration required — add unroll: to your pipeline
- Addresses bundled log formats from VPC flow logs, CloudWatch exports, and Windows endpoint collectors
Introduction
Some log sources bundle multiple events into a single record before shipping them. This is common with VPC flow logs, CloudWatch exports, and certain Windows endpoint collectors. While this batching approach is efficient for transport, it creates challenges when you need to filter, search, or correlate individual events.
When a log record contains an array of 47 events, your analytics tool sees one entry instead of 47 distinct records. This makes it difficult to filter on specific events or correlate them with other telemetry. The OTel Collector now includes the unroll processor to handle this scenario.
How the Processor Works
Simple: if a log record's body contains a list (like a JSON array), the unroll processor expands it into separate log records — one per element.
Input: 1 log record with 10 objects in an array
Output: 10 distinct log records, each with one object
The processor keeps everything intact:
- Original timestamps stay with each record
- Resource attributes carry forward
- Log attributes remain unchanged
- Each record becomes independently filterable
No data loss. No metadata changes. Each expanded record maintains the context from the original bundle.
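To make that concrete, here's an illustrative sketch of a bundled record before and after expansion (the VPC-flow-log-style field names and addresses are made up for the example, not taken from a real source):

```yaml
# Before: one log record whose body is a list of two events
body:
  - { action: "ACCEPT", srcaddr: "10.0.1.5", dstaddr: "10.0.2.9" }
  - { action: "REJECT", srcaddr: "10.0.3.7", dstaddr: "10.0.2.9" }
---
# After: two log records, each with a single event as its body and the same
# timestamp, resource attributes, and log attributes as the original bundle
body: { action: "ACCEPT", srcaddr: "10.0.1.5", dstaddr: "10.0.2.9" }
---
body: { action: "REJECT", srcaddr: "10.0.3.7", dstaddr: "10.0.2.9" }
```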
Why a dedicated processor instead of using a transform
The transform processor with OTTL might seem like a natural fit for this task. The OTel team explored that approach but found limitations in how OTTL handles record expansion: the transform processor iterates through records and applies transformations, but it can't safely add new records during that iteration. Trying to expand records mid-loop leads to skipped entries and unreliable behavior.
Transform and filter processors are built for mutation (changing records) and suppression (dropping records). They're not designed for expansion (creating new records from existing ones).
Expansion changes the number of records in your pipeline. That requires different guarantees than just modifying values. A dedicated processor handles this cleanly and predictably.
Basic Configuration
The processor requires minimal configuration:
```yaml
processors:
  unroll:
```

The processor automatically detects whether a log body is an iterable list. If the body contains a list, it expands the record. Otherwise, it passes the record through unchanged.
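As a quick sketch of that pass-through behavior (the example bodies are illustrative, not from a specific source), only list-shaped bodies get expanded:

```yaml
# Body is a list: unroll expands the record into one record per element
body: ["event one", "event two", "event three"]
---
# Body is a plain string (or a single map): the record passes through unchanged
body: "2025-09-19T02:20:17Z INFO initialized"
```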
Here's a basic pipeline setup:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  unroll:

exporters:
  otlp:
    endpoint: "your-backend:4317"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [unroll]
      exporters: [otlp]
```

This works well when log bodies are already structured as proper arrays. However, some log formats require preprocessing before unroll can handle them.
Preprocessing with transform
Log records aren't always structured as clean arrays. In some cases, multiple JSON objects are concatenated together and separated by delimiters like },{:
{"@timestamp":"2025-09-19T02:20:17.920Z","log.level":"INFO","message":"initialized","service.name":"ES_ECS"},{"type":"server","timestamp":"2025-09-18T20:44:01,838-04:00","level":"INFO","message":"initialized"}For these scenarios, you can combine the transform processor with unroll:
```yaml
processors:
  transform:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          # Split concatenated JSON into a list
          - set(body, Split(body, "},{"))
  unroll: {}

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [transform, unroll]
      exporters: [otlp]
```

The Split function breaks the body on the },{ delimiter, producing a list that unroll can then expand into individual records.
Additional processing may be needed afterward to handle the JSON structure, but the expansion logic is handled by the processor. This pattern of transform-then-unroll works for various log formats that require some initial restructuring.
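As one sketch of that follow-up step (the transform/parse name, the brace-repair statements, and the regexes are assumptions about what the fragments look like after splitting on },{ and are not part of the upstream example), a second transform placed after unroll can restore the braces the split removed and parse each fragment into a structured body:

```yaml
processors:
  # Hypothetical post-unroll transform; rename or adjust to fit your pipeline.
  transform/parse:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          # Splitting on "},{" strips a brace from the edge of each fragment,
          # so restore any missing braces before parsing.
          - set(body, Concat(["{", body], "")) where not IsMatch(body, "^\\{")
          - set(body, Concat([body, "}"], "")) where not IsMatch(body, "\\}$")
          # Turn the repaired JSON string into a structured body.
          - set(body, ParseJSON(body))

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [transform, unroll, transform/parse]
      exporters: [otlp]
```

Keeping the parse step after unroll means the split-and-expand stage stays simple and each record gets repaired independently.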
When to Use Unroll
The unroll processor makes sense when:
- Your log sources bundle events: VPC flow logs, CloudWatch exports, and endpoint collectors often ship multiple events per record
- You need event-level filtering: Bundled logs can't be filtered by individual events
- Correlation matters: Tracing relationships between events requires them to exist as separate records
When something else might work better:
- Logs are already individual records: The processor passes records through unchanged when the body isn't a list, so it does no harm, but you don't need it
- You need complex parsing: Use transform for parsing before or after unroll
- You're working with metrics or traces: The processor currently supports logs only
These use cases have been validated in production environments. The unroll processor was initially developed and deployed in the Bindplane Distro of the OTel Collector in January 2025. It was tested across production workloads, including VPC logs, CloudWatch pipelines, and Windows endpoint logs, before being contributed to the upstream Collector Contrib repository.
Stability and Availability
| Aspect | Details |
|---|---|
| Stability | Alpha |
| Distribution | OTel Collector Contrib (otelcontribcol) |
| First release | v0.137.0 |
| Supported signals | Logs only |
| Config options | None |
The Alpha stability designation indicates that the API may change in future releases. However, the core functionality has been running in production environments since early 2025.
One consideration when using unroll: if a single log record expands into hundreds or thousands of entries, some UI tools may experience rendering slowdowns. This isn't a processor issue, but something to keep in mind when working with extremely dense bundled logs.
Getting Started
Step 1: Make sure you're running OTel Collector Contrib v0.137.0 or later.
Step 2: Add the unroll processor to your pipeline:
```yaml
processors:
  unroll:

service:
  pipelines:
    logs:
      receivers: [your_receiver]
      processors: [unroll]
      exporters: [your_exporter]
```

Step 3: If your logs need preprocessing (like splitting concatenated JSON), add a transform processor before unroll:
```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(body, Split(body, "YOUR_DELIMITER"))
  unroll:

service:
  pipelines:
    logs:
      processors: [transform, unroll]
```

Step 4: Test with a sample of your bundled logs to verify the expansion works as expected.
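One way to run that test locally (a sketch: the sample file path is a placeholder, and the debug exporter is just a convenient way to inspect output) is to point a filelog receiver at the sample and print the expanded records with the debug exporter:

```yaml
receivers:
  filelog:
    include: [/tmp/bundled-sample.log]   # placeholder path to your sample file

processors:
  unroll:

exporters:
  debug:
    verbosity: detailed   # prints each expanded record to the Collector's console

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [unroll]
      exporters: [debug]
```

Note that the filelog receiver reads each line as a string, so if your bundled events arrive as one JSON string you'll still need a preprocessing transform (as in Step 3) so the body is an actual list before it reaches unroll.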
Once configured, the processor integrates with standard OTLP endpoints. Last9 accepts OTLP natively, so unrolled logs from the OTel Collector work without additional configuration.
Individual records maintain their original timestamps and attributes, which means your queries reflect actual event timing rather than bundle arrival time. For high-volume sources like VPC flow logs, unrolling logs before ingestion allows for event-level filtering, searching, and correlation.