
Jul 27th, '23 / Changelog

Enhancing Levitate Metric Gateway Performance: Introducing the Upgraded Cardinality Limiter

This release improves the metric gateway with a faster and more accurate Cardinality Limiter.

Key Features and Improvements

More Accurate Series Limiter

The new algorithm keeps a precise count of time series per metric. With the previous limiter, even already-ingested time series could be prevented from being written during a surge, a limitation of how probabilistic data structures behave and of their space requirements at our scale; this release fixes that.
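
To make the difference concrete, here is a minimal Go sketch of an exact per-metric series limiter: a series that was already ingested is never rejected, and only new series beyond the quota are blocked. The types, names, and hashing scheme are illustrative assumptions, not Levitate's internals.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// seriesID hashes a sorted label set into a stable 64-bit identity.
// Assumption for illustration: a series is identified by its full label set.
func seriesID(labels map[string]string) uint64 {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(labels[k]))
	}
	return h.Sum64()
}

// exactLimiter tracks the exact set of series seen per metric, so an
// already-ingested series is never rejected during a surge.
type exactLimiter struct {
	limit int
	seen  map[string]map[uint64]struct{} // metric name -> set of series IDs
}

// allow returns true if the sample may be written: either the series is
// already known, or the metric still has headroom under its limit.
func (l *exactLimiter) allow(metric string, labels map[string]string) bool {
	id := seriesID(labels)
	set, ok := l.seen[metric]
	if !ok {
		set = map[uint64]struct{}{}
		l.seen[metric] = set
	}
	if _, known := set[id]; known {
		return true // already ingested series are never blocked
	}
	if len(set) >= l.limit {
		return false // new series beyond the quota are rejected
	}
	set[id] = struct{}{}
	return true
}

func main() {
	l := &exactLimiter{limit: 2, seen: map[string]map[uint64]struct{}{}}
	fmt.Println(l.allow("http_requests_total", map[string]string{"path": "/a"})) // true
	fmt.Println(l.allow("http_requests_total", map[string]string{"path": "/b"})) // true
	fmt.Println(l.allow("http_requests_total", map[string]string{"path": "/c"})) // false, quota hit
	fmt.Println(l.allow("http_requests_total", map[string]string{"path": "/a"})) // true, already known
}
```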

Cardinality Metric Reporting Improvements

Constantly reporting cardinality numbers at our scale is expensive. The system now reports more timely information for each metric: when it hits its cardinality limit and when its quota usage resets to 0.
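
For illustration, here is a hedged Go sketch of event-driven reporting: rather than emitting cardinality numbers continuously, it surfaces an event when a metric hits its limit and when its quota resets to 0. The event kinds and channel-based API are assumptions made for this example, not Levitate's actual interface.

```go
package main

import "fmt"

// cardinalityEvent is a hypothetical reporting record emitted only at the
// moments that matter: limit hits and quota resets.
type cardinalityEvent struct {
	Metric string
	Kind   string // "limit_hit" or "quota_reset"
	Count  int
}

// reporter pushes events to a channel; a consumer could turn these into
// synthetic samples or alerts.
type reporter struct{ events chan cardinalityEvent }

func (r *reporter) limitHit(metric string, count int) {
	r.events <- cardinalityEvent{Metric: metric, Kind: "limit_hit", Count: count}
}

func (r *reporter) quotaReset(metric string) {
	r.events <- cardinalityEvent{Metric: metric, Kind: "quota_reset", Count: 0}
}

func main() {
	r := &reporter{events: make(chan cardinalityEvent, 8)}
	r.limitHit("http_requests_total", 100000)
	r.quotaReset("http_requests_total")
	close(r.events)
	for e := range r.events {
		fmt.Printf("%s %s count=%d\n", e.Metric, e.Kind, e.Count)
	}
}
```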

Resource Efficiency & Threshold Heuristics

Counting distinct values across a large stream is challenging and memory-hungry: maintaining 1 million hash values alone requires 400 MB of memory, and the cost compounds across clusters and tenants, slowing things down further. The new algorithm starts counting only when a metric crosses a heuristic threshold that indicates it may exceed its cardinality quota. This approach dramatically reduces the resources required.
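
The following Go sketch shows the threshold heuristic in miniature: a cheap per-metric counter runs for every metric, and the memory-heavy distinct-series tracking is materialized only once a metric looks likely to breach its quota. The threshold value, field names, and data structures are illustrative assumptions, not the shipped algorithm.

```go
package main

import "fmt"

// thresholdCounter keeps a cheap activity counter for every metric and
// allocates an exact distinct-series set only for metrics whose activity
// crosses a heuristic threshold.
type thresholdCounter struct {
	threshold int                            // activity level that triggers exact counting
	activity  map[string]int                 // cheap per-metric counter
	distinct  map[string]map[uint64]struct{} // allocated lazily, only for "hot" metrics
}

// observe records one incoming series hash for a metric. Exact distinct
// counting starts only after the cheap counter crosses the threshold,
// so cold metrics never pay the hash-set memory cost.
func (t *thresholdCounter) observe(metric string, seriesHash uint64) {
	t.activity[metric]++
	if t.activity[metric] < t.threshold {
		return // below the heuristic: no distinct tracking, near-zero memory
	}
	set, ok := t.distinct[metric]
	if !ok {
		set = map[uint64]struct{}{}
		t.distinct[metric] = set
	}
	set[seriesHash] = struct{}{}
}

func (t *thresholdCounter) distinctCount(metric string) int {
	return len(t.distinct[metric])
}

func main() {
	t := &thresholdCounter{
		threshold: 3,
		activity:  map[string]int{},
		distinct:  map[string]map[uint64]struct{}{},
	}
	for i := uint64(0); i < 5; i++ {
		t.observe("hot_metric", i)
	}
	t.observe("cold_metric", 1)
	fmt.Println(t.distinctCount("hot_metric"))  // counted only after the threshold is crossed
	fmt.Println(t.distinctCount("cold_metric")) // 0: never crossed the threshold
}
```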

Lower Reset Spikes in Resource Usage and Latency

The new cardinality limiter does not follow a stop-the-world design. It resets gracefully in the background and does not bring existing writes to a halt. This prevents P99 requests from timing out and avoids backpressure on remote-write clients sending data to Levitate.
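
Here is a minimal Go sketch of a non-stop-the-world reset, assuming the per-window counts can be published via an atomic pointer swap so in-flight writes are never halted. This is a simplification for illustration, not Levitate's implementation.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// window holds the per-metric series counts for the current quota window.
type window struct {
	counts map[string]int
}

// limiter publishes the active window behind an atomic pointer: a reset is
// a single pointer swap, not a global lock, so readers on the hot path keep
// using the old window until the swap completes.
type limiter struct {
	current atomic.Pointer[window]
}

func newLimiter() *limiter {
	l := &limiter{}
	l.current.Store(&window{counts: map[string]int{}})
	return l
}

// reset prepares the next window off the hot path and publishes it with one
// atomic store instead of halting ingestion.
func (l *limiter) reset() {
	l.current.Store(&window{counts: map[string]int{}})
}

func main() {
	l := newLimiter()
	l.current.Load().counts["http_requests_total"] = 42
	l.reset()
	fmt.Println(len(l.current.Load().counts)) // 0: fresh window after a lock-free swap
}
```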

Stop Ingesting Metric After Limit is Reached

When the cardinality limit for a specific metric is reached, a conditional switch halts the ingestion of that metric. This protects the accuracy and integrity of the metric data: partial ingestion produces partial histograms, which yield meaningless query results. It is better to halt ingestion entirely and steer users toward workflows such as relabeling and streaming aggregation that handle high-cardinality metrics meaningfully.
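
To make the behavior concrete, here is a small Go sketch of such a conditional switch: once a metric is flagged as over its limit, every sample for that metric is dropped rather than partially ingested. The gate structure and field names are assumptions for illustration only.

```go
package main

import "fmt"

// sample is a minimal remote-write style data point.
type sample struct {
	Metric string
	Value  float64
}

// gate drops every sample for a metric once that metric's flag is tripped,
// instead of letting a partial slice of its series through.
type gate struct {
	blocked map[string]bool // metric name -> over its cardinality limit
}

// ingest keeps only samples whose metric has not hit its cardinality limit.
// Dropping the whole metric avoids partial histograms that would make
// queries meaningless.
func (g *gate) ingest(batch []sample) []sample {
	kept := make([]sample, 0, len(batch))
	for _, s := range batch {
		if g.blocked[s.Metric] {
			continue // metric over its limit: halt ingestion entirely
		}
		kept = append(kept, s)
	}
	return kept
}

func main() {
	g := &gate{blocked: map[string]bool{"api_latency_bucket": true}}
	out := g.ingest([]sample{
		{Metric: "api_latency_bucket", Value: 1},
		{Metric: "http_requests_total", Value: 7},
	})
	fmt.Println(len(out)) // 1: only the metric under its limit is written
}
```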

Please read the blog post below to learn more about our design choices in achieving these results.

How to make high cardinality work in time series databases: Part 1 | Last9
Part 1 of a series of posts on the engineering design decisions that make high cardinality work in time-series databases.
