TL;DR:
- OTLP 1.9.0 adds maps, heterogeneous arrays, byte arrays, and empty values across all signals—previously, only logs supported these types
- Use complex attributes when data has inherent structure (like LLM tool definitions or GraphQL error lists) that loses meaning when flattened
- Flat attributes remain the default choice—they're widely supported and easier to query across most backends
Introduction
OpenTelemetry now supports maps, heterogeneous arrays, and byte arrays across all signals. Here’s where these new types shine — and where simple primitives still fit naturally.
If you’ve been working with OpenTelemetry for a while, you’re likely familiar with the straightforward key-value approach to attributes. It’s simple, fast, and works well with how most telemetry backends store, index, and query data. Semantic conventions were built with this in mind, which is why so many common patterns map cleanly to flat attributes.
But some workloads carry a richer structure. Capturing details from an LLM call, a GraphQL response, or any nested payload can benefit from something more expressive.
Until now, that structure was tricky to represent across signals. Metrics, traces, and logs all accepted primitives or arrays of primitives, so anything more complex required some creativity — either flattening, serializing, or leaving out pieces you wanted to keep.
OTLP 1.9.0 changes that landscape. Complex attribute types are now supported everywhere, and API/SDK updates are rolling out across the ecosystem. As these land, you’ll be able to carry structured data through your telemetry pipeline without losing clarity or queryability.
What's New in Complex Attributes
Starting with OTLP 1.9.0, OpenTelemetry supports these attribute types on all signals:
- Maps (with string keys and values of any supported type)
- Heterogeneous arrays (containing elements of any supported type)
- Byte arrays
- Empty values
This follows OTEP 4485, which has now been implemented in OTLP and the specification.
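On the wire, each of these types maps onto OTLP's AnyValue. As a minimal sketch, here's how a map-inside-array attribute could be built with the generated Python classes from the opentelemetry-proto package. The attribute key is illustrative, not a settled semantic convention, and SDKs will expose friendlier APIs than this as support rolls out:

```python
# Minimal sketch: building a complex attribute value with the generated
# protobuf classes from the opentelemetry-proto package. This shows the
# wire shape only; SDK-level APIs are still rolling out.
from opentelemetry.proto.common.v1.common_pb2 import (
    AnyValue,
    ArrayValue,
    KeyValue,
    KeyValueList,
)

# One structured tool definition: a map with string keys and mixed value types.
tool = KeyValueList(values=[
    KeyValue(key="name", value=AnyValue(string_value="get_weather")),
    KeyValue(key="strict", value=AnyValue(bool_value=True)),
])

# The attribute itself: an array whose elements are maps.
attribute = KeyValue(
    key="llm.tool.definitions",  # illustrative key, not a settled convention
    value=AnyValue(array_value=ArrayValue(values=[AnyValue(kvlist_value=tool)])),
)
```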
Here's what's worth knowing: most backends today are optimized for flat attributes. They're designed to query, index, and aggregate primitives efficiently. Complex attributes work differently — they're not always indexed the same way, which can affect how you query them later. The semantic conventions reflect this reality. They tend to use flat attributes for metrics and other places where you'll be filtering and grouping frequently.
That doesn't mean complex attributes aren't useful — they absolutely are when you need them. It just means flat attributes are still the default choice when both options would work. If your data naturally fits into key-value pairs, that's usually the simpler path. If it doesn't, complex attributes give you a way to capture it without losing structure.
Why This is Important
As the OpenTelemetry community builds out semantic conventions and instrumentations, we keep running into scenarios where the data being captured has inherent structure. When you're working with these types of data, flattening can mean losing important context or making the telemetry harder to use effectively.
Here are a few examples where this comes up:
- LLM operations — If you're instrumenting LLM calls, you're dealing with tool definitions, input messages, and output messages. These are naturally structured objects with nested fields. Keeping that structure intact makes the data easier to work with downstream.
- GraphQL — GraphQL responses can include lists of structured errors, each with its own path, message, and extensions. Preserving this structure means you can see exactly which field caused which error, without having to reconstruct it from dozens of separate attributes (see the sketch after this list).
- Database operations — Batch operations often have properties that vary per item in the batch. You could capture the count, but having access to individual query parameters or results gives you more detailed visibility into what happened.
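As a sketch of the GraphQL case, here's what recording a structured error list could look like. This is hypothetical: it assumes an SDK release that has adopted the new attribute types (most SDKs still accept only primitives here today), and the graphql.errors key is illustrative rather than a settled semantic convention:

```python
# Hypothetical sketch: recording GraphQL errors as one structured attribute.
# Assumes an OpenTelemetry SDK that has adopted the OTLP 1.9.0 attribute
# types; "graphql.errors" is an illustrative key, not a convention.
from opentelemetry import trace

tracer = trace.get_tracer("graphql.server")

with tracer.start_as_current_span("graphql.request") as span:
    span.set_attribute(
        "graphql.errors",
        [
            {
                "message": "Cannot return null for non-nullable field",
                "path": ["user", "email"],
                "extensions": {"code": "INTERNAL_ERROR"},
            },
        ],
    )
```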
Before adding complex attributes to all signals, the OpenTelemetry community explored a few other approaches. Each had different trade-offs.
What About Just Using Logs and Spans?
One approach was to keep complex attributes limited to logs and spans, where they were already supported. This would work, but it would mean different signals support different attribute types. If you're writing instrumentation code, you'd need to keep track of which signals accept which types. It's manageable, but having a consistent API across all signals is simpler.
What About Flattening?
Flattening is another option. Take a nested structure and flatten it into separate attributes with dot notation. This works well for simple maps of primitives. With arrays, though, there are some trade-offs.
Say you have this structure:
```json
{
  "data": [
    {
      "foo": "bar",
      "baz": 42
    }
  ]
}
```

You could flatten it to:
data.0.foo = "bar"
data.0.baz = 42This preserves the information, but you're now embedding array indices in attribute names. If the array changes size, your attribute names change with it. Querying requires knowing the exact indices you want.
Alternatively, you could flip the structure:
data.foo = ["bar"]
data.baz = [42]This avoids the index problem, but now everything becomes an array, even single values. The original structure—which objects contained which fields—gets lost in the transformation.
Both approaches preserve the data, but they change how you interact with it. The structure you had in the original data becomes harder to work with in queries and analysis.
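For concreteness, here's a minimal sketch of the index-embedding scheme above. It's plain Python with no OpenTelemetry dependency:

```python
from typing import Any

def flatten(value: Any, prefix: str = "") -> dict[str, Any]:
    """Flatten nested dicts and lists into dot-notation keys."""
    out: dict[str, Any] = {}
    if isinstance(value, dict):
        for key, item in value.items():
            out.update(flatten(item, f"{prefix}.{key}" if prefix else key))
    elif isinstance(value, list):
        # Array indices become part of the attribute name.
        for i, item in enumerate(value):
            out.update(flatten(item, f"{prefix}.{i}" if prefix else str(i)))
    else:
        out[prefix] = value
    return out

# flatten({"data": [{"foo": "bar", "baz": 42}]})
# -> {"data.0.foo": "bar", "data.0.baz": 42}
```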
What About String Serialization?
Another option is to serialize complex data to JSON strings and store those as attributes. This works—some instrumentations already do it—and it's a straightforward way to handle structured data.
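A minimal sketch of this approach, using today's stable Python API (the attribute key is illustrative):

```python
# Works with current SDKs: serialize the structure to a JSON string early.
# The trade-off is that the backend now sees an opaque string.
import json

from opentelemetry import trace

tracer = trace.get_tracer("graphql.server")

errors = [{"message": "Cannot return null", "path": ["user", "email"]}]

with tracer.start_as_current_span("graphql.request") as span:
    # "graphql.errors_json" is an illustrative key, not a convention.
    span.set_attribute("graphql.errors_json", json.dumps(errors))
```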
There are some considerations, though. Without a standard serialization format, different libraries might handle it differently. One might use compact JSON, another might pretty-print it. This variation can make it trickier to work with the data consistently across different sources.
Also, when serialization happens early in the pipeline (in the instrumentation itself), it limits what you can do with the data later. If you need to truncate a large object, you might lose important fields. If you want to extract schema information or do smart filtering, that's harder once everything is already a string.
Having the backend handle serialization gives you more flexibility. The backend can decide whether to preserve the structure, flatten it, or serialize it based on its storage model and query capabilities. Different backends can make different choices, and instrumentations stay simpler.
What This Means for Backends
If you're building or maintaining an observability backend, the direction complex attributes are heading is toward native support for querying nested properties. Being able to filter by error.details.code or group by tool.parameters.model gives users a more natural way to work with structured data.
That kind of query support takes time to build, though. A practical intermediate step is to serialize complex attributes to JSON (or another format) at ingestion. This preserves the data so users can see it, and leaves the door open for richer query capabilities later.
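As a sketch of that intermediate step, under the assumption that the backend's ingestion path can intercept attribute values before storage:

```python
# Hypothetical ingestion-time fallback for a backend without native
# nested-attribute queries: keep primitives as-is, serialize structure.
import json
from typing import Any

def normalize_attribute(value: Any) -> Any:
    if isinstance(value, (dict, list)):
        # Deterministic JSON so equal values serialize identically.
        return json.dumps(value, sort_keys=True, separators=(",", ":"))
    return value
```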
The OTel Specification, semantic conventions, and API docs are being updated to clarify that backend support for complex attributes is still evolving. The guidance around using flat attributes when possible remains—complex attributes expand what's possible, but flat attributes are still the more widely supported option.
When Should You Use Complex Attributes?
The choice comes down to what fits your data. If your data naturally fits into flat key-value pairs, that's usually the simpler path. Flat attributes work everywhere, they're well-supported across backends, and they're straightforward to query.
Complex attributes make sense when the data has a structure that doesn't flatten cleanly. If you're recording a list of objects, each with multiple fields, and keeping that structure intact makes the data more useful—that's where complex attributes help.
For semantic conventions and instrumentation libraries, this isn't about going back and changing existing code. If something works with flat attributes today, there's no reason to change it. Complex attributes are there for new features where the data structure naturally calls for them.
How Last9 Handles Complex Attributes
Last9 accepts OTLP natively, which means complex attributes work out of the box. If you're sending traces, metrics, or logs with maps or arrays, we ingest them without needing any special configuration on your end.
Once the data's in, you can query across all your telemetry in one place. We support high-cardinality attributes without dropping data or timing out on queries. If you're curious how it works with your setup, you can get started in about 5 minutes.
Additional Resources
To learn more about complex attribute types: