If you've been using Redis but haven't explored pipelining, you're missing out on significant performance benefits. Redis pipelining is something of a hidden gem: those who know about it can't imagine working without it.
In this guide, we'll break down why pipelining matters and how it can improve the efficiency of your applications.
What Is Redis Pipeline - A Command Batching Mechanism
Redis pipeline allows you to send multiple commands to the server without waiting for individual responses. Instead of this back-and-forth ping-pong of requests and responses, you bundle commands together, fire them off in one go, and then receive all responses simultaneously.
Imagine sending 20 separate text messages versus one single message with all your thoughts. The latter is much more efficient, right?
At a technical level, Redis pipeline works by buffering commands client-side, sending them in a single network packet when possible, and then reading all responses in one batch. This approach dramatically reduces the impact of network latency, especially in high-latency environments like cloud platforms or geographically distributed systems.
How Redis Pipeline Reduces Network Delays
Without pipelining, each Redis command costs a full network round trip:
- Client sends command
- Server processes command
- Server sends response
- Client reads response
This pattern compounds latency, the sneaky performance killer that adds up quickly, especially over networks.
With Redis pipeline, you're essentially saying: "Hey Redis, here's everything I need you to do. Let me know when you're done with all of it."
The technical magic happens because:
- TCP packets have overhead (headers, handshakes)
- Network latency affects each request individually
- Context switching between requests and responses adds CPU overhead
- System calls for each read/write operation take time
When you batch commands, you spread these costs across multiple operations, leading to significant performance improvements.
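A quick back-of-envelope calculation makes the effect concrete. The numbers below are assumptions for illustration, not measurements:

```python
# Back-of-envelope latency math with assumed, illustrative numbers.
rtt_ms = 1.0        # assumed client-server round-trip time
server_ms = 0.05    # assumed server processing time per command
n = 1000            # number of commands

sequential = n * (rtt_ms + server_ms)   # pay the RTT on every command
pipelined = rtt_ms + n * server_ms      # pay the RTT roughly once

print(f"sequential: {sequential:.0f} ms")  # ~1050 ms
print(f"pipelined:  {pipelined:.0f} ms")   # ~51 ms
```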
Performance Benchmarks: Quantifying the Redis Pipeline Advantage
Let's put some numbers on it. Here's a representative picture of what happens when you implement Redis pipeline:
| Approach | Commands per Second | Relative Speedup | Network Overhead per Command |
|---|---|---|---|
| Single commands | 5,000 | 1x (baseline) | ~1 ms |
| Basic pipelining (10 commands) | 50,000 | 10x | ~0.1 ms |
| Optimized pipeline batches (100 commands) | 100,000+ | 20x+ | ~0.05 ms |
| Pipeline with connection pooling | 150,000+ | 30x+ | ~0.03 ms |
These numbers will vary based on:
- Network latency between client and Redis server
- Command complexity and data size
- Client library implementation
- Server resource availability
In high-latency environments (like cross-region cloud deployments), the gains can be even more dramatic, often reaching 50-100x improvement.
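You don't have to take these figures on faith: the stock `redis-benchmark` tool can show the effect on your own setup. Its `-P` flag controls pipeline depth (the host, request count, and depth below are just examples):

```bash
# Baseline: no pipelining
redis-benchmark -h localhost -n 100000 -t set,get -P 1

# Same workload, 16 commands per pipeline
redis-benchmark -h localhost -n 100000 -t set,get -P 16
```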
Implementing Redis Pipeline: Language-Specific Patterns and Best Practices
Getting started with Redis pipeline isn't rocket science. Let's look at implementation patterns in several popular languages.
Python Implementation with redis-py
```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Start a pipeline
pipe = r.pipeline()

# Queue up commands (these don't execute yet)
pipe.set("user:1:name", "Alex")
pipe.set("user:1:email", "alex@example.com")
pipe.incr("user:1:visits")
pipe.expire("user:1:name", 3600)

# Execute all commands in a single roundtrip
results = pipe.execute()

# results is a list of responses in the same order as the commands
print(results)  # [True, True, 1, True] on a fresh database
```
Explanation: This code creates a pipeline object that acts as a command buffer. Each method call (`.set()`, `.incr()`, etc.) doesn't actually send anything to Redis yet; it just queues the command.
When `.execute()` is called, all commands are sent to Redis in a single network operation, and all results are returned together as a list. The order of responses matches the order of commands, so `results[0]` corresponds to the first command, `results[1]` to the second, and so on.
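As a side note, redis-py pipelines also work as context managers, which guarantees the command buffer is cleaned up even if an error occurs mid-batch. A minimal sketch:

```python
# The context manager resets the pipeline's state on exit.
with r.pipeline() as pipe:
    pipe.set("user:2:name", "Sam")
    pipe.incr("user:2:visits")
    results = pipe.execute()
```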
Node.js Implementation with ioredis
```javascript
const Redis = require('ioredis');

const redis = new Redis({
  host: 'localhost',
  port: 6379
});

// Create pipeline
const pipeline = redis.pipeline();

// Queue commands
pipeline.set('product:1234:views', 0);
pipeline.incr('product:1234:views');
pipeline.get('product:1234:views');

// Execute pipeline
pipeline.exec((err, results) => {
  if (err) {
    console.error('Pipeline failed:', err);
    return;
  }

  // Each result is [err, response]
  console.log(results); // [[null, 'OK'], [null, 1], [null, '1']]

  // Access individual results
  const viewCount = results[2][1];
  console.log(`Product view count: ${viewCount}`);
});
```
Explanation: In ioredis, the pipeline pattern is similar, but the response format differs slightly: each result in the array is itself a two-element array, `[error, response]`.
This allows for per-command error handling. If a command succeeds, its error value is `null`. The example also shows how to use the results, accessing the third command's response (`results[2][1]`) to get the view count.
Java Implementation with Jedis
```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import java.util.List;

public class RedisPipelineExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Create pipeline
            Pipeline pipeline = jedis.pipelined();

            // Queue commands
            pipeline.hset("user:100", "name", "John");
            pipeline.hset("user:100", "email", "john@example.com");
            pipeline.hincrBy("user:100", "login_count", 1);

            // Execute and get all responses
            List<Object> responses = pipeline.syncAndReturnAll();

            // Process responses
            System.out.println("Name set response: " + responses.get(0));
            System.out.println("Email set response: " + responses.get(1));
            System.out.println("New login count: " + responses.get(2));
        }
    }
}
```
Explanation: The Java implementation with Jedis follows the same pattern but uses `syncAndReturnAll()` to execute the pipeline and return responses.
Jedis returns the responses as a `List<Object>` where each response type depends on the command executed. For example, `hset` returns a `Long` representing the number of fields that were added, while `hincrBy` returns the new incremented value.
Optimizing Redis Pipelines: Batching Strategies for Better Performance
The simple pipeline examples above work well, but in production systems, you'll often need more sophisticated approaches.
Batch Size Optimization for Bulk Loading Operations
```python
import redis
import time

r = redis.Redis(host='localhost', port=6379, db=0)

def bulk_insert_with_pipeline(items, batch_size=1000):
    pipeline = r.pipeline(transaction=False)
    count = 0
    start_time = time.time()

    for key, value in items.items():
        pipeline.set(key, value)
        count += 1

        # Execute in optimized batches
        if count % batch_size == 0:
            pipeline.execute()
            pipeline = r.pipeline(transaction=False)

            # Optional progress reporting
            elapsed = time.time() - start_time
            rate = count / elapsed
            print(f"Processed {count} items. Rate: {rate:.2f} items/sec")

    # Don't forget the last (partial) batch
    if count % batch_size != 0:
        pipeline.execute()

    return count
```
Explanation: This function demonstrates batch processing for large datasets. The key optimization is the batch size: processing in chunks of 1,000 items balances the benefits of pipelining (reduced network overhead) against memory usage (storing too many pending commands).
The function also calculates and reports the processing rate, which is helpful for performance tuning. Note the `transaction=False` parameter, which skips the MULTI/EXEC wrapper and explicitly optimizes for performance over atomicity.
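Calling it is straightforward. Here's a hypothetical load of 100,000 small keys (the key names and batch size are arbitrary):

```python
# Hypothetical bulk load: 100k small keys in batches of 1000.
items = {f"import:key:{i}": f"value:{i}" for i in range(100_000)}
total = bulk_insert_with_pipeline(items, batch_size=1000)
print(f"Inserted {total} keys")
```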
Time-Based Batching for Event Processing Systems
```javascript
class RedisBatchProcessor {
  constructor(redisClient, options = {}) {
    this.redis = redisClient;
    this.pipeline = this.redis.pipeline();
    this.batchSize = options.batchSize || 100;
    this.flushIntervalMs = options.flushIntervalMs || 1000;
    this.commandCount = 0;
    this.lastFlushTime = Date.now();

    // Set up periodic flush
    this.timer = setInterval(() => this.timeBasedFlush(), this.flushIntervalMs);
  }

  addCommand(method, ...args) {
    this.pipeline[method](...args);
    this.commandCount++;

    // Flush if we hit the batch size
    if (this.commandCount >= this.batchSize) {
      this.flush();
    }
  }

  timeBasedFlush() {
    const now = Date.now();
    // Flush if there are pending commands and either the full interval has
    // elapsed, or a moderate backlog has waited at least half the interval
    if (this.commandCount > 0 &&
        (now - this.lastFlushTime >= this.flushIntervalMs ||
         (this.commandCount > 10 && now - this.lastFlushTime >= this.flushIntervalMs / 2))) {
      this.flush();
    }
  }

  flush() {
    if (this.commandCount === 0) return Promise.resolve([]);

    const executingPipeline = this.pipeline;
    const commandCount = this.commandCount;

    // Reset state
    this.pipeline = this.redis.pipeline();
    this.commandCount = 0;
    this.lastFlushTime = Date.now();

    console.log(`Flushing ${commandCount} commands`);
    return executingPipeline.exec();
  }

  shutdown() {
    clearInterval(this.timer);
    return this.flush();
  }
}
```
Explanation: This more sophisticated JavaScript class implements a hybrid batching strategy based on both batch size and time.
It automatically flushes commands when either:
1) the batch size limit is reached, or 2) the time interval has elapsed and there are pending commands.
This approach is ideal for systems that need to balance throughput with latency requirements. The adaptive flush logic also includes a heuristic to flush earlier if there's a moderate number of commands waiting for more than half the interval, which helps reduce average latency.
Redis Pipeline vs. Lua Scripts vs. Multi/Exec: Choosing the Right Tool
People often confuse Redis pipeline with other Redis features. Let's break down the differences:
| Feature | Redis Pipeline | Redis MULTI/EXEC | Redis Lua Scripts |
|---|---|---|---|
| Primary purpose | Performance optimization | Transaction-like atomicity | Atomicity + complex logic |
| Command execution | Batched, sequential | Atomic unit | Atomic unit |
| Conditional logic | No | No | Yes |
| Network roundtrips | One | One when pipelined; otherwise one per command | One |
| Performance | Fastest for simple batches | Medium | Fast for complex operations |
| Where logic runs | Client-side | Client-side (commands execute server-side as a unit) | Server-side |
| Error handling | Continues after errors | EXEC aborts on queueing errors or a changed WATCHed key | Script aborts on error |
| Can WATCH keys | No | Yes | Not needed (execution is atomic) |
Let's see a practical comparison:
Task: Increment a counter and expire it if it reaches a threshold
Pipeline Approach (requires two roundtrips):
```python
# First roundtrip to get the current value
current = r.get("counter")

# Logic runs on the client (guard against a missing key)
if current is not None and int(current) >= 10:
    # Second roundtrip with a pipeline
    p = r.pipeline()
    p.incr("counter")
    p.expire("counter", 60)
    p.execute()
else:
    r.incr("counter")
```
Lua Script Approach (one roundtrip):
script = """
local current = redis.call('incr', KEYS[1])
if current >= 10 then
redis.call('expire', KEYS[1], 60)
end
return current
"""
# Register script once
increment_script = r.register_script(script)
# Execute with one roundtrip
new_value = increment_script(keys=["counter"], args=[])
Explanation: This comparison shows when you might choose Lua scripts over pipelining. The pipeline approach requires two roundtrips because the conditional logic runs on the client.
The Lua script approach requires only a single roundtrip because the logic executes directly on the Redis server.
For operations that need conditional behavior or complex transformations based on the data, Lua scripts often offer better performance than pipelining alone.
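Note that these tools also compose. In redis-py, for example, a pipeline created with `transaction=True` wraps the batch in MULTI/EXEC, giving you one round trip plus atomic execution, though still no server-side conditional logic. A minimal sketch:

```python
# One round trip, executed atomically as MULTI ... EXEC.
pipe = r.pipeline(transaction=True)
pipe.incr("counter")
pipe.expire("counter", 60)
results = pipe.execute()
```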
Redis Pipeline Performance: Key Configurations and Client Tips
To get the most out of Redis pipeline, tune these key parameters:
Client-Side Settings
- Optimal batch size: Test different batch sizes (100-10,000 commands) to find the sweet spot for your workload.
- Connection pooling: Most Redis clients support connection pools, which work well with pipelining:
```javascript
// Node.js ioredis connection pool example
const Redis = require('ioredis');

const cluster = new Redis.Cluster([
  { port: 6379, host: 'redis-node1' },
  { port: 6379, host: 'redis-node2' }
], {
  // Connection pool settings
  maxConnections: 20,
  // Queue commands until a connection is available
  enableOfflineQueue: true,
  // May need to adjust based on workload
  commandTimeout: 5000
});

// Now use pipelines as normal
const pipeline = cluster.pipeline();
for (let i = 0; i < 1000; i++) {
  pipeline.set(`key:${i}`, `value:${i}`);
}
pipeline.exec();
```
Explanation: This code sets up a connection pool for a Redis Cluster. The `maxConnections` parameter limits the number of simultaneous connections, while `enableOfflineQueue` ensures commands are queued if all connections are busy.
With connection pooling, multiple pipelines can execute simultaneously across different connections, maximizing throughput. The `commandTimeout` parameter prevents commands from hanging indefinitely if there's a network issue.
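The same idea applies in Python. A minimal redis-py sketch, assuming a single-node server (the pool size is illustrative):

```python
import redis

# Connections are created lazily, up to max_connections.
pool = redis.ConnectionPool(host='localhost', port=6379, max_connections=20)
r = redis.Redis(connection_pool=pool)

# Each pipeline borrows one pooled connection for the whole batch,
# so several pipelines can run concurrently from different threads.
pipe = r.pipeline(transaction=False)
for i in range(1000):
    pipe.set(f"key:{i}", f"value:{i}")
pipe.execute()
```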
Server-Side Settings
In your `redis.conf`, these settings impact pipeline performance:
```
# Higher for pipeline-heavy workloads
tcp-backlog 511

# Adjust based on expected request size
client-query-buffer-limit 1mb

# Important for bulk operations
proto-max-bulk-len 512mb

# Higher for many concurrent clients using pipelines
maxclients 10000
```
Explanation: These Redis server settings help optimize for pipelined workloads. The `tcp-backlog` setting increases the queue for incoming connections. `client-query-buffer-limit` caps how much buffered input a single client may accumulate, which matters when a client sends a large pipeline. `proto-max-bulk-len` sets the maximum size of a single bulk element, which is important for large values. `maxclients` controls how many clients can connect simultaneously.
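If you don't control `redis.conf` directly (on managed services, for instance), you can at least inspect the live values from a client. A quick redis-py check, reusing the `r` client from the earlier examples:

```python
# CONFIG GET supports glob patterns; values come back as strings.
print(r.config_get("proto-max-bulk-len"))
print(r.config_get("client-query-buffer-limit"))
print(r.config_get("maxclients"))
```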
Advanced Error Handling Strategies for Redis Pipeline Operations
One challenge with pipelines is that commands continue executing even if earlier commands fail. Here's how to handle that:
```python
import redis

def safe_pipeline_execution(pipeline_commands):
    """Execute pipeline commands with comprehensive error handling."""
    r = redis.Redis(host='localhost', port=6379, db=0)
    pipe = r.pipeline()

    # Queue commands
    for cmd, args, kwargs in pipeline_commands:
        method = getattr(pipe, cmd)
        method(*args, **kwargs)

    # Execute pipeline; exceptions come back as results instead of raising
    results = pipe.execute(raise_on_error=False)

    # Process results with error handling
    processed_results = []
    for i, result in enumerate(results):
        cmd_name = pipeline_commands[i][0]

        if isinstance(result, Exception):
            # Categorize the error type
            if isinstance(result, redis.exceptions.ResponseError):
                if "WRONGTYPE" in str(result):
                    # Wrong data type for the target key
                    error = 'Data type mismatch'
                else:
                    # Other server-side response errors
                    error = 'Command error'
            else:
                # Connection errors and other unexpected exceptions
                error = 'Unexpected error'

            processed_results.append({
                'success': False,
                'command': cmd_name,
                'error': error,
                'details': str(result)
            })
        else:
            # Command succeeded
            processed_results.append({
                'success': True,
                'command': cmd_name,
                'result': result
            })

    return processed_results
```
Explanation: This function provides a structured approach to pipeline error handling. It takes a list of commands as input, executes them in a pipeline, and then processes each result individually. By setting `raise_on_error=False`, the pipeline returns exceptions as results rather than raising them.
The function then categorizes different types of errors (wrong data type, command errors, unexpected errors) and returns a structured result that makes it easy for calling code to determine which commands succeeded and which failed. This pattern is particularly useful for batch operations where partial success is acceptable.
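A hypothetical invocation, where the second command deliberately fails because INCR isn't valid on a plain string value:

```python
commands = [
    ("set", ("user:1:name", "Alex"), {}),
    ("incr", ("user:1:name",), {}),   # fails: value is not an integer
    ("get", ("user:1:name",), {}),
]

for outcome in safe_pipeline_execution(commands):
    status = "ok" if outcome["success"] else outcome["error"]
    print(outcome["command"], "->", status)
```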
Practical Redis Pipeline Examples
Example 1: High-Performance Leaderboard System with Pipeline
```python
def update_and_get_leaderboard(user_id, new_score, leaderboard_key="leaderboard:global"):
    """Update a user's score and get updated leaderboard info."""
    r = redis.Redis(host='localhost', port=6379, db=0)
    p = r.pipeline()

    # Get the user's previous score (queued before ZADD so it sees the old value)
    p.zscore(leaderboard_key, user_id)
    # Update the user's score. Note: ZADD returns the number of *new* members,
    # so it is 0 when an existing user's score is merely updated.
    p.zadd(leaderboard_key, {user_id: new_score})
    # Get the user's new rank (0-based)
    p.zrevrank(leaderboard_key, user_id)
    # Get the top 10 players
    p.zrevrange(leaderboard_key, 0, 9, withscores=True)

    # Execute all at once
    results = p.execute()
    previous_score, _, user_rank, top_players = results

    # If the user is ranked, fetch nearby players (5 above and 5 below).
    # This needs a second roundtrip because the range depends on the rank.
    if user_rank is not None:
        start_rank = max(0, user_rank - 5)
        end_rank = user_rank + 5
        nearby_players = r.zrevrange(leaderboard_key, start_rank, end_rank, withscores=True)
    else:
        nearby_players = []

    return {
        "user_rank": user_rank,  # 0-based rank
        "display_rank": None if user_rank is None else user_rank + 1,  # 1-based for display
        "previous_score": previous_score,
        "new_score": new_score,
        "score_change": None if previous_score is None else new_score - previous_score,
        "top_players": [(player.decode(), score) for player, score in top_players],
        "nearby_players": [(player.decode(), score) for player, score in nearby_players],
    }
```
Explanation: This function demonstrates a practical real-world use of Redis pipeline for a gaming leaderboard. It performs multiple operations: updating a user's score, getting their rank, retrieving their previous score for comparison, fetching the top players, and retrieving players with similar ranks.
All these operations are batched into a single network roundtrip (plus one follow-up call for the nearby-rank window, since that range depends on the user's computed rank), dramatically improving performance compared to individual calls. The function then processes the results into a structured response that can be directly used by a game client or API.
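A hypothetical call, with the user ID and score made up for illustration:

```python
info = update_and_get_leaderboard("player:42", 3150)
print(f"Rank: {info['display_rank']}, change: {info['score_change']}")
print("Top 10:", info["top_players"])
```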
Example 2: Rate Limiting with Redis Pipeline
```javascript
const Redis = require('ioredis');

async function checkRateLimit(userId, action, limit, windowSeconds) {
  const redis = new Redis(); // in production, reuse a shared client instead
  const now = Date.now();
  const key = `rate:${action}:${userId}`;

  // Create pipeline for the rate limit check
  const pipeline = redis.pipeline();

  // Add the current timestamp to the sorted set. The member gets a random
  // suffix so two requests in the same millisecond don't overwrite each other.
  pipeline.zadd(key, now, `${now}:${Math.random()}`);

  // Remove elements outside the time window
  const cutoff = now - (windowSeconds * 1000);
  pipeline.zremrangebyscore(key, 0, cutoff);

  // Count remaining elements (actions within the window)
  pipeline.zcard(key);

  // Set expiration on the key to auto-cleanup
  pipeline.expire(key, windowSeconds);

  // Execute all commands
  const results = await pipeline.exec();

  // ZCARD is the third command; each result is an [err, response] pair
  const count = results[2][1];

  // Check if the user exceeded the rate limit
  const allowed = count <= limit;

  return {
    allowed,
    current: count,
    limit,
    remaining: Math.max(0, limit - count),
    resetAt: new Date(now + windowSeconds * 1000)
  };
}

// Usage example
async function handleRequest(userId, action) {
  const rateLimitCheck = await checkRateLimit(userId, action, 100, 3600);
  if (!rateLimitCheck.allowed) {
    throw new Error(`Rate limit exceeded. Try again after ${rateLimitCheck.resetAt}`);
  }
  // Continue with the action...
}
```
Explanation: This example implements a sliding-window rate limiter using Redis sorted sets and pipelining. The function performs four Redis operations in one batch: adding the current timestamp to a sorted set, removing old timestamps outside the time window, counting the remaining elements, and setting an expiration on the key.
Using a pipeline allows these operations to complete in a single network roundtrip, making the rate limiting check efficient even under high load. The function returns detailed information about the rate limit status, which can be used for response headers or error messages.
When to Use Redis Pipeline (And When Not To)
Redis pipeline shines brightest when:
- High command volume: You need to execute multiple commands in sequence (10+ commands)
- Network-bound operations: Your Redis performance is limited by network latency rather than CPU or memory
- Batch processing jobs: You're performing ETL, data migrations, or other bulk operations
- Geographically distributed systems: Client and server are in different regions or data centers
- Micro-operations: You're performing many small operations that individually don't justify the network overhead
Situations where pipelining might not be the best choice:
- Single command operations: If you're only executing one command at a time
- Blocking operations: Commands like BLPOP or BRPOP that are designed to block
- Real-time requirements with complex logic: If you need intermediate results to make decisions
- When Lua scripts are more appropriate: For complex atomic operations with business logic
- Small data sets on low-latency networks: The overhead of creating a pipeline might exceed the benefits
Monitoring and Debugging Redis Pipeline Performance
To ensure your pipelines are performing optimally, integrate monitoring:
```python
import time
import redis

def benchmark_pipeline(commands_per_batch, num_batches=10):
    r = redis.Redis(host='localhost', port=6379, db=0)

    # Generate test data
    test_data = {f"benchmark:key:{i}": f"value:{i}" for i in range(commands_per_batch)}
    keys = list(test_data.keys())

    # Benchmark single commands
    start_time = time.time()
    for _ in range(num_batches):
        for key, value in test_data.items():
            r.set(key, value)
        for key in keys:
            r.get(key)
    single_command_elapsed = time.time() - start_time

    # Benchmark pipelined commands
    start_time = time.time()
    for _ in range(num_batches):
        # SET pipeline
        p = r.pipeline()
        for key, value in test_data.items():
            p.set(key, value)
        p.execute()

        # GET pipeline
        p = r.pipeline()
        for key in keys:
            p.get(key)
        p.execute()
    pipeline_elapsed = time.time() - start_time

    # Calculate operations per second
    total_operations = num_batches * commands_per_batch * 2  # SET + GET
    single_ops_per_sec = total_operations / single_command_elapsed
    pipeline_ops_per_sec = total_operations / pipeline_elapsed
    speedup = pipeline_ops_per_sec / single_ops_per_sec

    # Clean up benchmark keys
    r.delete(*keys)

    return {
        "commands_per_batch": commands_per_batch,
        "num_batches": num_batches,
        "total_operations": total_operations,
        "single_command_time": single_command_elapsed,
        "pipeline_time": pipeline_elapsed,
        "single_ops_per_sec": single_ops_per_sec,
        "pipeline_ops_per_sec": pipeline_ops_per_sec,
        "speedup_factor": speedup,
    }

# Run benchmarks with different batch sizes
results = []
for batch_size in [10, 100, 1000, 5000]:
    result = benchmark_pipeline(batch_size)
    results.append(result)
    print(f"Batch size: {batch_size}, Speedup: {result['speedup_factor']:.2f}x")
```
Explanation: This benchmark function provides a systematic way to measure Redis pipeline performance compared to individual commands. It performs the same operations (setting and then getting values) both with and without pipelining. The function tracks execution times and calculates operations per second and the speedup factor.
Running the benchmark with different batch sizes helps determine the optimal batch size for your specific environment. This testing is essential before implementing pipelining in production systems, as the ideal batch size can vary depending on network conditions, Redis server capacity, and data characteristics.
Conclusion
Redis pipeline is one of those rare optimizations that offers dramatic benefits with relatively little code complexity, making it a must-have tool in your Redis toolkit.
Reducing network overhead by batching commands can lead to significant throughput improvements with minimal code changes.
The key takeaways:
- Start with simple pipelines for immediate performance gains
- Find the optimal batch size for your specific workload
- Combine with other Redis features like Lua scripts when appropriate
- Implement proper error handling for production resilience
- Monitor and benchmark to ensure continued performance
FAQs
What is the difference between Redis pipeline and Redis transactions?
Redis pipeline focuses on performance by batching multiple commands to reduce network roundtrips. Redis transactions (MULTI/EXEC) focus on atomicity, ensuring commands execute as a unit without interruption.
While pipeline improves throughput, transactions ensure consistency. You can combine them by creating a pipeline with `transaction=True` to get both benefits.
Does Redis pipeline guarantee atomicity?
No. Redis pipeline by itself doesn't guarantee atomicity. Commands in a pipeline are executed sequentially, and other clients' commands might be interspersed between them. If you need atomicity, combine pipeline with MULTI/EXEC or use Lua scripts.
What's the optimal batch size for Redis pipeline?
It depends on your specific environment, but most applications see optimal performance with batch sizes between 100-1,000 commands. Beyond that, you might encounter diminishing returns or even performance degradation due to increased memory usage. Benchmark different batch sizes in your environment to find the sweet spot.
Can I use Redis pipeline with Redis Cluster?
Yes, but with limitations. In Redis Cluster, all keys in a pipeline must map to the same hash slot. If your commands target keys across different slots, you'll need to use hashtags or split your pipeline into multiple node-specific pipelines.
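For example, wrapping the shared part of a key name in braces forces all of those keys into one slot. A short redis-py sketch, assuming a cluster-aware client (the node address and key names are illustrative):

```python
from redis.cluster import RedisCluster

rc = RedisCluster(host="localhost", port=7000)  # any startup node

# All three keys hash on "user:100", so they map to the same slot and
# can safely travel in a single cluster pipeline.
pipe = rc.pipeline()
pipe.set("{user:100}:name", "John")
pipe.set("{user:100}:email", "john@example.com")
pipe.incr("{user:100}:login_count")
pipe.execute()
```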
Does Redis pipeline guarantee that all commands will succeed?
No. If a command fails in a pipeline, Redis continues processing subsequent commands. Each response in the results array corresponds to its command, and errors appear as exceptions in the response array. You must check each result individually to ensure all commands succeeded.
How does Redis pipeline affect memory usage?
Redis must buffer all the responses until the entire pipeline is processed, which increases memory usage on the server side. For very large batches (thousands of commands), monitor your Redis memory usage to avoid pressure on the server.
Can I use blocking commands in a pipeline?
Technically yes, but it's not recommended. Blocking commands (like BLPOP) will block the entire pipeline until they complete. This defeats the purpose of pipelining, which is to maximize throughput.
Is Redis pipeline thread-safe?
Pipeline objects in most Redis clients are not thread-safe. You should create, use, and dispose of pipeline objects within the same thread. For multi-threaded applications, create separate pipeline instances for each thread.
How do I handle errors in a Redis pipeline?
Most Redis clients provide options for error handling in pipelines. In redis-py, use `pipe.execute(raise_on_error=False)` to get exceptions as results rather than raising them. In ioredis, errors appear as the first element of each result pair. Always check each command's result before assuming success.
Can Redis pipeline be used with Pub/Sub?
Yes, you can use pipeline for publishing multiple messages, but not for subscribing. Subscription commands change the connection state and are incompatible with pipelining.
Does Redis pipeline work with Redis Sentinel or Redis Enterprise?
Yes, Redis pipeline works with any Redis deployment mode, including Sentinel, Enterprise, and managed cloud instances. The client interacts with the Redis protocol the same way regardless of deployment architecture.
How do I properly close/reset a pipeline?
Most client libraries automatically reset the pipeline after `execute()`. For explicit control, in redis-py you can call `pipeline.reset()`, and in ioredis you typically create a new pipeline instance for each batch.
Is there a limit to how many commands I can put in a pipeline?
There's no hard limit in Redis itself, but practical limits exist based on available memory and client configuration. Most Redis clients have query buffer limits that might constrain very large pipelines. Monitor memory usage and response times to determine your system's limits.
How can I tell if my application would benefit from Redis pipeline?
Run a basic benchmark comparing the performance of individual commands versus pipelined commands. If you see high latency between the client and Redis server (>1ms), or if you're executing many small operations in sequence, pipelining will likely provide significant benefits.
What's the performance impact of network latency on Redis pipeline?
The higher the network latency, the greater the benefit of pipelining. In local deployments (sub-millisecond latency), you might see 2-5x improvement. In cross-region cloud deployments with 50-100ms latency, pipelining can yield 50-100x performance improvements.