Benchmark tool
redis-benchmark ships with Redis and supports a wide variety of options.
Server on AWS (client on my local machine):
PS C:\Users\user> redis-benchmark -t set -n 100000 -h <MY_AWS_FREE_TIER_IP> -p 81
====== SET ======
100000 requests completed in 40.93 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.00% <= 14 milliseconds
0.22% <= 15 milliseconds
10.06% <= 16 milliseconds
24.73% <= 17 milliseconds
30.37% <= 18 milliseconds
32.67% <= 19 milliseconds
43.62% <= 20 milliseconds
63.83% <= 21 milliseconds
71.76% <= 22 milliseconds
74.68% <= 23 milliseconds
79.34% <= 24 milliseconds
89.36% <= 25 milliseconds
95.18% <= 26 milliseconds
97.42% <= 27 milliseconds
98.39% <= 28 milliseconds
98.87% <= 29 milliseconds
99.11% <= 30 milliseconds
99.36% <= 31 milliseconds
99.54% <= 32 milliseconds
99.65% <= 33 milliseconds
99.72% <= 34 milliseconds
99.80% <= 35 milliseconds
99.85% <= 36 milliseconds
99.89% <= 37 milliseconds
99.91% <= 38 milliseconds
99.93% <= 39 milliseconds
99.95% <= 40 milliseconds
99.96% <= 41 milliseconds
99.97% <= 42 milliseconds
99.98% <= 44 milliseconds
99.98% <= 45 milliseconds
100.00% <= 46 milliseconds
100.00% <= 50 milliseconds
2443.38 requests per second
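The WAN number is round-trip-time bound, not Redis bound: with keep-alive and no pipelining, each of the 50 clients completes roughly one command per RTT, so throughput tops out near clients / RTT. A back-of-envelope check (the ~20 ms RTT is read off the latency table above):

```python
# Sanity check: without pipelining, each client waits one round trip
# per command, so throughput is bounded by clients / RTT.
clients = 50
rtt_seconds = 0.020  # roughly the median latency from the table above

max_throughput = clients / rtt_seconds
print(max_throughput)  # 2500.0, close to the measured 2443.38 req/s
```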
Localhost (both client and server):
PS C:\Users\user> redis-benchmark -t set -n 100000
====== SET ======
100000 requests completed in 1.42 seconds
50 parallel clients
3 bytes payload
keep alive: 1
89.97% <= 1 milliseconds
99.88% <= 2 milliseconds
99.91% <= 3 milliseconds
99.93% <= 4 milliseconds
99.95% <= 8 milliseconds
99.96% <= 9 milliseconds
99.98% <= 10 milliseconds
99.99% <= 11 milliseconds
99.99% <= 12 milliseconds
99.99% <= 13 milliseconds
99.99% <= 14 milliseconds
99.99% <= 15 milliseconds
100.00% <= 16 milliseconds
100.00% <= 17 milliseconds
100.00% <= 18 milliseconds
100.00% <= 19 milliseconds
70521.86 requests per second
With pipelining (QPS is much higher, but so is latency at the 90th percentile):
PS C:\Users\user> redis-benchmark -t set -n 100000 -P 100
====== SET ======
100000 requests completed in 0.21 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.00% <= 1 milliseconds
0.60% <= 2 milliseconds
1.00% <= 3 milliseconds
1.10% <= 4 milliseconds
1.30% <= 5 milliseconds
1.40% <= 6 milliseconds
10.70% <= 7 milliseconds
35.80% <= 8 milliseconds
46.70% <= 9 milliseconds
57.00% <= 10 milliseconds
69.70% <= 11 milliseconds
79.10% <= 12 milliseconds
86.60% <= 13 milliseconds
91.70% <= 14 milliseconds
94.00% <= 15 milliseconds
97.00% <= 16 milliseconds
99.30% <= 17 milliseconds
100.00% <= 17 milliseconds
467289.72 requests per second
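Pipelining works by concatenating many commands into a single network write and reading all the replies afterwards, so one round trip is amortized over the whole batch. A minimal sketch of what goes over the wire, assuming a hypothetical `encode_command` helper (this is not any Redis client's API, just the RESP framing):

```python
def encode_command(*parts: str) -> bytes:
    """Encode one command in RESP (the Redis wire protocol):
    an array of bulk strings, e.g. *3\r\n$3\r\nSET\r\n..."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# A pipeline of 3 SETs is just the 3 encodings sent in one write();
# the client then reads 3 replies back to back instead of paying
# one round trip per command.
pipeline = b"".join(
    encode_command("SET", f"key:{i}", "val") for i in range(3)
)
print(pipeline)
```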
- Redis is single-threaded for serving commands; it forks a child process (not a thread) for persistence.
- If running in cluster mode, one node should host n/2 instances (master + slave pairs) where n = NUM_CORES, since each instance uses one process for serving commands and another for persistence.
- RDB (point-in-time snapshots) vs AOF (append-only log of write commands) persistence.
- A slave can be configured to be promoted to master if the master hasn't been reachable for a while.
- Total 16384 (~16K) hash slots.
- Resharding (redistribution of keys) is always manual, whether adding or deleting a node. Failover is automatic since the slave already has the same keys as the master.
- Pipelining increases throughput, but 90-95th-percentile latency gets much worse. In the runs above, the 100th-percentile latency with pipelining is actually slightly lower than without, but every other percentile is far higher (e.g., ~14 ms vs ~1 ms at the 90th).
- Others: Geo commands were added recently; Tile38 is a separate project for similar geospatial use cases.
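The hash-slot mapping above is simple enough to sketch: Redis Cluster assigns a key to slot CRC16(key) mod 16384, hashing only the text inside the first `{...}` hash tag if one is present (so related keys can be forced onto the same node). A self-contained sketch using the CRC-16/CCITT (XModem) variant the cluster spec calls for:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/CCITT (XModem), as used by Redis Cluster: poly 0x1021, init 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key; only the part inside the first non-empty
    {...} tag is hashed, so {user1}.name and {user1}.email co-locate."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # ignore empty tags like "{}"
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# The cluster spec's check value: CRC16 of "123456789" is 0x31C3.
print(hex(crc16_xmodem(b"123456789")))  # 0x31c3
```

Two keys sharing a hash tag always map to the same slot, which is what makes multi-key operations possible in cluster mode.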