The OpenTelemetry Gateway (OTel Gateway) is a centralized service that collects, processes, and routes telemetry data—metrics, traces, and logs—across your infrastructure.
In a typical setup, each service pushes telemetry directly to an observability backend. While this approach works well for small environments, it becomes increasingly difficult to manage as systems grow. Configuration changes must be applied to every service, network usage becomes less predictable, and backends may become overloaded with unprocessed data.
By introducing the OTel Gateway, all telemetry flows through a single, configurable point before reaching your backend. This enables:
- Consistent processing – Apply transformations, filtering, and enrichment in one place.
- Resource efficiency – Batch and compress data to reduce network and storage costs.
- Operational simplicity – Centralize configuration instead of updating every service.
- Reliability – Buffer data and handle retries during backend outages.
In this guide, we’ll break down:
- How the OTel Gateway works in different architectures.
- When it’s the right choice for your telemetry pipeline.
- Configuration patterns for various environments and workloads.
By the end, you’ll know exactly where an OTel Gateway fits in your observability stack and how to set it up for maximum performance and resilience.
What and Why of the Gateway
What is the OpenTelemetry Gateway?
The OpenTelemetry Gateway is a standalone service that acts as a centralized bridge between your applications and your observability backends. Instead of each application sending telemetry directly to a backend, the gateway receives, processes, and routes the data based on your configuration.
You can think of it as a telemetry proxy—capable of:
- Ingesting multiple formats (OTLP, Jaeger, Zipkin, etc.).
- Transforming data through processors (e.g., batching, filtering, attribute enrichment).
- Forwarding telemetry to one or more destinations according to defined rules.
The gateway runs the OpenTelemetry Collector in gateway mode, meaning it’s optimized for receiving telemetry from multiple clients rather than scraping metrics or collecting data from the host it’s running on. This makes it especially useful in large systems where many services need to send telemetry to shared backends.
Example: Basic Gateway Configuration
Below is a minimal configuration that:
- Receives OTLP data over gRPC (port `4317`) and HTTP (port `4318`).
- Batches telemetry to reduce network overhead.
- Exports metrics to Prometheus Remote Write.
- Exports traces to Jaeger.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024

exporters:
  prometheusremotewrite:
    endpoint: "https://your-prometheus-endpoint"
  jaeger:
    endpoint: "http://jaeger:14250"
    tls:
      insecure: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```
How this works:
- OTLP Receiver – Accepts telemetry from clients over both gRPC and HTTP.
- Batch Processor – Groups data into batches for more efficient transmission.
- Prometheus Remote Write Exporter – Sends metrics to a Prometheus-compatible endpoint.
- Jaeger Exporter – Sends trace data to Jaeger, with TLS disabled for local testing.
This setup gives you a single entry point for telemetry ingestion, reduces per-service configuration, and ensures consistent processing across your entire system.
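On the application side, pointing a service at the gateway is usually just an endpoint change. As a minimal sketch (the `otel-gateway` service name, `observability` namespace, and `checkout-service` are placeholders), a Kubernetes workload can set the standard OTLP environment variables so its OpenTelemetry SDK exports to the gateway instead of a backend:

```yaml
# Excerpt from an application Deployment spec (names are placeholders)
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-gateway.observability.svc.cluster.local:4317"  # gateway's OTLP gRPC receiver
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: "grpc"
  - name: OTEL_SERVICE_NAME
    value: "checkout-service"
```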
Gateway vs Agent: What’s the Difference and When to Use Each
The OpenTelemetry Collector can run in two main modes — Gateway or Agent. Both process telemetry, but they differ in where they run, how they collect data, and what role they play in your observability pipeline.
The Core Difference
| Aspect | Gateway Mode | Agent Mode |
|---|---|---|
| Deployment location | Centralized service — often one per cluster, VPC, or region. | Runs close to workloads — as a sidecar, DaemonSet, or process on each host. |
| Primary role | Central ingestion, processing, and routing for multiple services. | Local collector that captures telemetry from its host or container. |
| Data collection | Receives telemetry pushed from agents or applications. Not meant for scraping host metrics. | Can scrape host metrics (CPU, memory, disk, container stats) and receive app telemetry directly. |
| Processing scope | Global transformations, filtering, enrichment, and aggregation across data from many services. | Local filtering or transformation for that host's data before sending upstream. |
| Connection pattern | Many apps → Gateway → Observability backend. | App/host → Agent → (optionally Gateway) → Observability backend. |
| Best for | Large, multi-service systems with centralized routing and normalization needs. | Host-level monitoring, edge deployments, or latency-sensitive setups. |
| Drawbacks | Adds a network hop; needs redundancy to avoid a single point of failure. | Harder to apply global changes; more configs to maintain across agents. |
When Gateway Mode Makes Sense
Choose a gateway when you want to centralize telemetry ingestion and processing:
- Many microservices – Avoid configuring dozens of services to talk directly to your backend.
- Consistent processing – Apply the same transformations, filtering, or enrichment rules to all telemetry.
- Fewer backend connections – Backends like Prometheus or Jaeger only maintain one connection to the gateway instead of many.
- Mixed protocols – Normalize OTLP, Zipkin, Jaeger, and other formats before sending to the backend.
Example: A large Kubernetes cluster where services emit telemetry in different formats and need to be routed to multiple backends.
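As a rough sketch of that scenario (endpoints and the backend are placeholders), the gateway can accept OTLP, Jaeger, and Zipkin traffic and normalize it into a single traces pipeline:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
  zipkin:
    endpoint: 0.0.0.0:9411

processors:
  batch: {}

exporters:
  otlp:
    endpoint: "https://tracing-backend:4317"  # placeholder backend endpoint

service:
  pipelines:
    traces:
      receivers: [otlp, jaeger, zipkin]  # mixed protocols normalized into one pipeline
      processors: [batch]
      exporters: [otlp]
```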
When Agent Mode is Better
Go with an agent when you need data collected at the host level and sent with minimal latency:
- Host/system metrics – Capture CPU, memory, disk, and container stats locally.
- Small/simple deployments – Avoid extra infrastructure for a single service or monolith.
- Low-latency paths – Direct service → backend connections can be faster for critical tracing.
Example: A single-node deployment collecting local system metrics and application traces directly to a backend.
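A minimal agent-mode sketch for that case might look like the following (the backend endpoint is a placeholder; in a larger deployment the agent would export to a gateway instead):

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlp:
    endpoint: "https://backend:4317"  # placeholder; could also point at a gateway tier

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp]
      processors: [batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```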
Setting Up OTel Gateway for Different Backends
The OTel Gateway can route telemetry to one or more observability backends, applying transformations or filtering before delivery. Below are example configurations for Prometheus metrics, Jaeger traces, and multi-backend routing.
Prometheus Metrics
When exporting metrics to Prometheus Remote Write, the gateway can:
- Convert incoming OTLP metrics to Prometheus format.
- Add consistent resource attributes (labels) across all metrics for easier querying.
Example: Adding environment and cluster labels
```yaml
processors:
  attributes:
    actions:
      - key: environment
        value: production
        action: insert
      - key: cluster
        value: us-east-1
        action: insert
  batch: {}  # referenced in the pipeline below

exporters:
  prometheusremotewrite:
    endpoint: "https://prometheus-remote-write-endpoint"
    headers:
      Authorization: "Bearer ${PROMETHEUS_TOKEN}"
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [attributes, batch]
      exporters: [prometheusremotewrite]
```
How it works:
- The `attributes` processor inserts `environment` and `cluster` labels into all metric data.
- The `prometheusremotewrite` exporter sends data to a Prometheus-compatible endpoint with authentication.
- `resource_to_telemetry_conversion` ensures resource attributes are converted into Prometheus labels.
Jaeger Traces
For distributed tracing, you might want to sample traces before exporting to Jaeger. Sampling reduces data volume while retaining statistically useful traces.
Example: Probabilistic sampling for traces
```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 1.0
    hash_seed: 22
  batch: {}  # referenced in the pipeline below

exporters:
  jaeger:
    endpoint: "http://jaeger-collector:14250"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [jaeger]
```
How it works:
- The `probabilistic_sampler` keeps a percentage of traces (`sampling_percentage: 1.0` retains roughly 1% of traces; set it to `100` to keep everything).
- The `batch` processor groups spans for efficient export.
- The `jaeger` exporter sends trace data to the Jaeger collector over gRPC.
Multi-Backend Routing
A single telemetry stream can be sent to multiple backends—useful for redundancy, migration testing, or different analysis use cases.
Example: Sending metrics and traces to two backends
```yaml
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite, last9]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger, last9]
```
How it works:
- Metrics are exported to both Prometheus Remote Write and Last9.
- Traces are exported to both Jaeger and Last9.
- This allows you to keep a production-grade backend while testing or validating another.
Use Last9 as a Telemetry Backend
When you route telemetry from the OTel Gateway to Last9, you get a backend built for high-cardinality observability from the ground up. Last9 supports 20M+ series per metric without sampling, applies real-time cost controls to prevent ingestion surprises, and stores older data in cost-efficient cold storage without losing query fidelity.
If you're sending data from many services, or mixing metrics, traces, and logs in one pipeline, adding a `last9` exporter to your gateway configuration ensures that scale, performance, and cost remain predictable.
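The Collector doesn't ship a dedicated `last9` exporter type; in practice the name in the earlier pipeline example would be an alias for a standard OTLP exporter pointed at your Last9 endpoint. A minimal sketch, with a placeholder endpoint and credential (check Last9's integration docs for the actual values):

```yaml
exporters:
  otlp/last9:
    endpoint: "https://otlp.last9.example:4317"  # placeholder endpoint
    headers:
      Authorization: "Basic ${LAST9_AUTH_TOKEN}"  # placeholder credential

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger, otlp/last9]  # the "last9" alias above maps to a named exporter like this
```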
Resource Management and Scaling
Because the gateway processes all telemetry for your infrastructure, it needs to be provisioned and tuned carefully. Without the right limits and scaling in place, it can become a bottleneck or even fail during traffic spikes.
Memory configuration
A memory limiter ensures the gateway doesn’t consume all available RAM under load. This processor monitors usage and drops data if memory crosses defined thresholds:
```yaml
processors:
  memory_limiter:
    limit_mib: 1000
    spike_limit_mib: 200
    check_interval: 5s
```
- `limit_mib` sets the maximum memory the Collector can use, in MiB.
- `spike_limit_mib` defines how much sudden memory growth is allowed before dropping data.
- `check_interval` controls how often memory usage is checked.

This prevents crashes during sudden bursts in telemetry volume.
Batch processing
Batching groups multiple telemetry items before sending them to a backend. This improves throughput but can increase latency if batches are too large.
```yaml
processors:
  batch:
    timeout: 5s
    send_batch_size: 2048
    send_batch_max_size: 4096
```
- `timeout` is how long the processor waits before sending a batch, even if it isn't full.
- `send_batch_size` is the target number of items per batch.
- `send_batch_max_size` is the absolute maximum batch size.
- Larger batches reduce network calls but require more memory and can delay exports.
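In practice these processors are combined in each pipeline, with the memory limiter first so it can apply backpressure before other components allocate memory. A minimal sketch of the typical ordering:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]  # memory_limiter first, batch last before export
      exporters: [otlp]
```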
Horizontal scaling
In high-volume environments, a single gateway may not be enough to handle all traffic. The common approach is to:
- Run multiple gateway instances behind a load balancer.
- Distribute telemetry traffic evenly across instances.
- Scale up or down based on CPU and memory usage.
This adds redundancy and ensures ingestion capacity grows with demand.
When the OTel Gateway is exporting to Last9, you can track ingestion rates, dropped spans, and export queue sizes in real time. These metrics make it easier to decide when to scale horizontally or adjust batch/memory settings.
Advanced Processing Capabilities
The OTel Gateway can transform telemetry data before sending it to backends. This allows you to reduce noise, improve data quality, and control ingestion costs.
Filtering unwanted data
You can drop metrics, traces, or logs that are not useful for analysis. For example, removing telemetry from test services helps keep production data clean:
```yaml
processors:
  filter:
    metrics:
      exclude:
        match_type: regexp
        resource_attributes:
          - key: service.name
            value: "test-.*"
```
- This configuration uses a regular expression to match any `service.name` starting with `test-`.
- Matching metrics are excluded before reaching the backend.
- Similar rules can be applied to traces or logs to remove other low-value data sources.
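For spans, a sketch of an equivalent rule using the filter processor's OTTL-style conditions (supported in newer Collector versions; the expression below is an illustrative assumption):

```yaml
processors:
  filter/spans:
    traces:
      span:
        # Drop spans whose resource service.name matches test-*
        - 'IsMatch(resource.attributes["service.name"], "test-.*")'
```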
Tail sampling
Tail sampling allows you to make sampling decisions after seeing the full trace, rather than at the moment a span is recorded. This enables selective retention of important traces.
```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    num_traces: 50000
    expected_new_traces_per_sec: 10
    policies:
      - name: error_traces
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: slow_traces
        type: latency
        latency:
          threshold_ms: 1000
```
- `decision_wait` is how long the processor waits to gather all spans for a trace before deciding whether to keep it.
- The `error_traces` policy keeps traces with error status codes.
- The `slow_traces` policy keeps traces where the latency is greater than 1 second.
- Traces that match no policy are dropped.
This approach ensures critical traces are retained while reducing the volume of routine, low-value data.
Keep the Gateway Healthy
The gateway is the single path for all telemetry in your infrastructure. Any slowdown or failure here can create gaps in observability. Tracking its health ensures that you can act before problems escalate.
Track operational metrics
Focus on metrics that show how the gateway receives, processes, and exports data:
- `otelcol_receiver_accepted_spans` – Spans received successfully.
- `otelcol_exporter_sent_spans` – Spans sent successfully to backends.
- `otelcol_processor_dropped_spans` – Spans dropped during processing due to limits or errors.
- `otelcol_receiver_refused_spans` – Spans rejected on receipt due to invalid data.
These values highlight ingestion slowdowns, exporter failures, and misconfigurations.
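These metrics come from the Collector's own internal telemetry. One common way to expose them in Prometheus format (the address is the conventional default; newer Collector versions configure this through telemetry readers instead):

```yaml
service:
  telemetry:
    metrics:
      level: detailed
      address: 0.0.0.0:8888  # scrape this endpoint to collect the gateway's own metrics
```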
Enable runtime health checks
Expose health and profiling endpoints to monitor the gateway’s status:
```yaml
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: 0.0.0.0:1777

service:
  extensions: [health_check, pprof]
```
- The health check endpoint integrates with Kubernetes liveness and readiness probes.
- The pprof endpoint provides CPU and memory profiles for performance analysis.
Deployment Patterns
Configure the OTel Gateway for Different Environments
The OTel Gateway can be adapted for different environments, reliability requirements, and security policies. Below are examples for development, production failover, and secure deployments.
Route all telemetry to local backends in a development environment
For local testing, you may want to send all telemetry to local backends for quick debugging:
```yaml
exporters:
  logging:
    loglevel: debug
  jaeger:
    endpoint: "http://localhost:14250"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, jaeger]
```
This setup logs trace data to the console and sends it to a local Jaeger instance.
Use primary and backup exporters in production to prevent data loss
To improve reliability, configure both primary and backup exporters:
```yaml
exporters:
  jaeger/primary:
    endpoint: "http://jaeger-primary:14250"
  jaeger/backup:
    endpoint: "http://jaeger-backup:14250"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger/primary, jaeger/backup]
```
Both exporters receive every trace, so if the primary Jaeger instance becomes unavailable, the backup still holds a complete copy and no data is lost.
Secure gateway connections with TLS and backend authentication
When running a centralized gateway, secure incoming and outgoing connections with TLS.
TLS configuration:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        tls:
          cert_file: /path/to/cert.pem
          key_file: /path/to/key.pem

exporters:
  otlp:
    endpoint: "https://secure-backend:4317"
    tls:
      ca_file: /path/to/ca.pem
```
Authentication:
```yaml
exporters:
  otlp:
    endpoint: "https://backend:4317"
    headers:
      Authorization: "Bearer ${API_TOKEN}"
```
Use TLS for encryption in transit and authentication headers or client certificates for backend authentication.
Push vs Pull Data Collection Patterns
The OpenTelemetry Collector can receive telemetry in two primary ways: by having applications send it directly, or by actively scraping it from sources. Choosing the right pattern—or combining them—depends on how your services are instrumented and how much control you need over collection timing.
Push-based collection
In this pattern, applications send telemetry to the gateway as soon as it’s generated. The Collector’s OTLP receiver listens on gRPC and HTTP endpoints for incoming data:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
```
OpenTelemetry SDKs in your applications handle the export, which means the gateway receives data at the rate and volume the applications produce. This works well for traces and logs, where low latency is important.
Pull-based collection
Here, the gateway takes the lead by scraping metrics from services that expose them, typically over HTTP. A Prometheus receiver in the Collector defines scrape targets and intervals:
```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'app-metrics'
          static_configs:
            - targets: ['app1:8080', 'app2:8080']
          scrape_interval: 30s
```
This approach is ideal for services already instrumented with Prometheus client libraries, or when you need centralized control over when and how often data is collected.
Hybrid collection
Many production setups use both methods: pushing traces and logs for immediacy, while pulling metrics on a fixed schedule. This example shows a gateway pulling metrics with Prometheus while accepting traces via OTLP:
```yaml
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```
The hybrid approach allows you to match the collection method to the type of telemetry, balancing timeliness, control, and resource efficiency.
Deployment Patterns in Kubernetes
In many environments, the OpenTelemetry Gateway runs inside Kubernetes clusters, acting as the central telemetry entry point for workloads spread across multiple namespaces and nodes.
Kubernetes offers several deployment options that affect how the gateway scales, how it’s managed, and how isolated it is from application workloads.
Choosing the right pattern depends on factors like operational complexity, performance requirements, and security policies.
Run the gateway as a standard Kubernetes Deployment
The simplest approach is to deploy the gateway as a Kubernetes Deployment and expose it via a Service:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: otel-gateway
  template:
    metadata:
      labels:
        app: otel-gateway
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          ports:
            - containerPort: 4317
            - containerPort: 4318
          volumeMounts:
            - name: config
              mountPath: /etc/otel-collector-config.yaml
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: otel-gateway-config
---
apiVersion: v1
kind: Service
metadata:
  name: otel-gateway
spec:
  selector:
    app: otel-gateway
  ports:
    - name: grpc
      port: 4317
      targetPort: 4317
    - name: http
      port: 4318
      targetPort: 4318
```
This creates a stable network endpoint that applications can send telemetry to. Replicas ensure availability, but you must manage updates and configuration changes yourself.
Deploy the gateway with Helm for easier configuration management
For production environments, Helm charts simplify deployment and configuration:
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

helm install otel-gateway open-telemetry/opentelemetry-collector \
  --set mode=deployment \
  --set config.receivers.otlp.protocols.grpc.endpoint="0.0.0.0:4317"
```
Helm automates service creation, manages ConfigMaps, and applies deployment best practices, making upgrades and rollbacks easier.
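Once you need more than a couple of overrides, a values file is easier to maintain than `--set` flags. A minimal sketch, assuming the chart's `mode` and `config` values (check the chart's documentation for the full schema):

```yaml
# values.yaml (sketch; keys follow the opentelemetry-collector chart)
mode: deployment
replicaCount: 3
config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  processors:
    batch: {}
```

Install it with `helm install otel-gateway open-telemetry/opentelemetry-collector -f values.yaml`.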
Isolate the gateway in a dedicated namespace
Running the gateway in its own namespace helps separate telemetry infrastructure from application workloads:
apiVersion: v1
kind: Namespace
metadata:
name: observability
labels:
name: observability
Namespace isolation makes it easier to apply RBAC rules, network policies, and resource quotas specifically for observability components.
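For example, a sketch of a network policy that only allows labeled application namespaces to reach the gateway's OTLP ports (the `telemetry: enabled` label is a hypothetical convention):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: otel-gateway-ingress
  namespace: observability
spec:
  podSelector:
    matchLabels:
      app: otel-gateway
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              telemetry: enabled  # hypothetical label applied to application namespaces
      ports:
        - protocol: TCP
          port: 4317
        - protocol: TCP
          port: 4318
```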
Deploy the gateway as a DaemonSet for node-level collection
In scenarios where you want each node to run its own instance of the Collector—similar to agent mode—you can use a DaemonSet. This pattern works well when collecting host-level metrics, container stats, or telemetry that should stay local until processed.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-gateway-daemon
spec:
  selector:
    matchLabels:
      app: otel-gateway-daemon
  template:
    metadata:
      labels:
        app: otel-gateway-daemon
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          ports:
            - containerPort: 4317
            - containerPort: 4318
          volumeMounts:
            - name: config
              mountPath: /etc/otel-collector-config.yaml
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: otel-gateway-config
```
A DaemonSet ensures every node in the cluster runs an instance of the gateway, allowing you to collect telemetry close to the workload while still forwarding it to a central backend or gateway tier.
Network Capacity and Scaling Considerations
Gateway performance depends on both how much telemetry your services generate and the available network capacity. Understanding data volume patterns helps you plan bandwidth and scaling strategies that prevent dropped data and slow exports.
Estimate network requirements from telemetry volume
Different telemetry types have different average sizes:
- Metrics: ~1 KB per metric point
- Traces: ~10–50 KB per trace (depending on the number of spans)
- Logs: ~500 B–2 KB per log entry
For example, a service generating 1,000 metric points, 100 traces, and 5,000 logs per minute would produce roughly 10 MB of telemetry per minute (about 1 MB of metrics, 1–5 MB of traces, and 2.5–10 MB of logs). Scaling up to dozens of services multiplies this quickly, so capacity planning is essential.
Scale gateways based on resource utilization
Instead of scaling by connection count, scale based on CPU and memory usage to ensure enough processing power is available:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: otel-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: otel-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
This ensures the gateway can handle traffic spikes without overprovisioning.
Distribute traffic evenly across gateway instances
Use a service mesh or ingress controller to balance telemetry load across multiple gateway instances:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: otel-gateway
spec:
  host: otel-gateway
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
```
A round-robin load balancing policy ensures no single gateway instance becomes a bottleneck. For more complex setups, consider load balancing based on source labels or traffic weight.
Optimize Telemetry Pipelines
High-volume telemetry can strain both your gateway and backends if not processed efficiently. These techniques help control data volume, preserve valuable information, and maintain reliability.
Retain the most important traces with tail sampling
Tail sampling evaluates traces after completion, allowing you to keep the ones that matter most and drop the rest:
```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    num_traces: 100000
    expected_new_traces_per_sec: 100
    policies:
      - name: errors_policy
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: latency_policy
        type: latency
        latency:
          threshold_ms: 5000
      - name: rate_limiting
        type: rate_limiting
        rate_limiting:
          spans_per_second: 500
```
- Keeps all traces with error status codes
- Retains traces longer than 5 seconds
- Limits all other traces to 500 spans per second
Improve delivery reliability with retries and queuing
Retries and in-memory queues prevent data loss when a backend is temporarily unavailable:
```yaml
exporters:
  otlp:
    endpoint: "https://backend:4317"
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000
```
This setup retries failed deliveries over a controlled interval and queues data for later sending, avoiding drops during short outages.
Limit memory use and optimize network throughput
The memory limiter and batch processor protect the gateway from over-consuming resources and help send data efficiently:
```yaml
processors:
  memory_limiter:
    limit_mib: 2000
    spike_limit_mib: 400
    check_interval: 5s
  batch:
    timeout: 5s
    send_batch_size: 1024
    send_batch_max_size: 2048
```
- Memory limiter enforces hard and spike limits to apply backpressure
- Batch processor groups data into efficient payloads, balancing throughput and latency
Switch to Last9 for Faster, Richer Telemetry Insights
Moving from direct service-to-backend connections to an OTel Gateway can be done in steps. Start the gateway alongside your existing setup, route a few non-critical services through it, and expand as you gain confidence.
The setups we’ve covered here—multi-backend routing, tail sampling, batching, Kubernetes deployments—work even better with Last9 as the backend:
- Test new exporters or run Prometheus, Jaeger, and Last9 side-by-side without changing every service config.
- Use Last9’s ingestion metrics to see how HPA scaling or load balancing changes affect CPU, memory, and dropped data.
- Keep detailed labels and attributes intact while applying filters or sampling, thanks to high-cardinality support.
- Control costs in real time when adjusting batch sizes, memory limits, or retention policies.
With this combination, you get a single point to manage telemetry and the flexibility to scale, filter, and route it, without losing the detail you rely on for debugging.
Start for free today — handle 100M events a month with ease!
FAQs
Q: What is the difference between the OpenTelemetry agent and gateway?
An agent runs on each host to collect local telemetry (system metrics, logs), while a gateway acts as a centralized hub that receives telemetry from multiple services and routes it to backends. Agents focus on local data collection; gateways focus on centralized processing and routing.
Q: What is OTel in monitoring?
OpenTelemetry (OTel) is an open-source observability framework that provides APIs, libraries, and tools to collect, process, and export telemetry data (metrics, logs, traces) from applications and infrastructure.
Q: What is an OTel receiver?
A receiver is a component that accepts telemetry data in the OpenTelemetry Collector. It defines how data enters the collector — whether via OTLP, Prometheus scraping, Jaeger, or other protocols.
Q: Is an OTel collector push or pull?
The collector supports both. In gateway mode, it typically receives pushed data from applications. It can also pull data using receivers such as the Prometheus receiver or the host metrics receiver.
Q: Why use an OpenTelemetry Collector?
The collector provides centralized telemetry processing, format translation, data enrichment, and backend routing. It reduces the complexity of configuring individual services and gives you control over data flow before it reaches your observability backends.
Q: Can the capacity and structure of your network support scaling your Collector instances?
Gateway scaling depends on your network bandwidth and backend capacity. Each gateway instance processes data independently, so you can scale horizontally. Monitor network utilization and backend connection limits when planning your deployment.
Q: How do I set up an OpenTelemetry Gateway for centralized telemetry data collection?
Deploy the OpenTelemetry Collector with receivers for your telemetry formats (typically OTLP), configure processors for any data transformation, and set up exporters for your backends. The gateway runs as a service that applications send data to.
Q: How do I set up an OpenTelemetry Gateway for my application?
Configure your application to send telemetry to the gateway endpoint instead of directly to backends. Update your application's OpenTelemetry configuration to point to the gateway's OTLP receiver (typically port 4317 for gRPC or 4318 for HTTP).
Q: How does the OpenTelemetry Gateway enhance observability data management?
The gateway provides centralized control over data routing, transformation, and export. You can apply consistent processing rules, filter unwanted data, add metadata, and route different data types to appropriate backends without modifying individual services.