articles/azure-cache-for-redis/cache-best-practices-performance.md
7 additions and 7 deletions (+7 −7)
@@ -5,7 +5,7 @@ description: Learn how to test the performance of Azure Cache for Redis.
author: flang-msft
ms.service: cache
ms.topic: conceptual
-ms.date: 06/19/2023
+ms.date: 07/01/2024
ms.author: franlanglois
---
@@ -17,7 +17,7 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
## How to use the redis-benchmark utility

-1. Install open source Redis server to a client VM you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.
+1. Install open source Redis server to a client virtual machine (VM) you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.

1. The client VM used for testing should be _in the same region_ as your Azure Cache for Redis instance.
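For illustration only, here's a minimal sketch of what the installation step above might look like on an Ubuntu client VM. The package manager commands are an assumption for Ubuntu; follow the linked Redis documentation for your distribution.

```bash
# On Ubuntu, installing the open source Redis server also pulls in redis-tools,
# which provides the redis-benchmark and redis-cli utilities.
sudo apt-get update
sudo apt-get install -y redis-server

# Confirm the benchmark utility is on the PATH before testing against your cache.
redis-benchmark --help
```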
@@ -49,7 +49,7 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
- On the Premium tier, scaling out (clustering) is typically recommended before scaling up. Clustering allows the Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.

-- On _C0_ and _C1_ caches, you might see short spikes in server load not caused by an increase in requests a couple times a day while internal defender scanning is running on the VMs. You see higher latency for requests while internal defender scans happen on these tiers. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
+- On _C0_ and _C1_ caches, while internal Defender scanning is running on the VMs, you might see short spikes in server load that aren't caused by an increase in cache requests. You see higher latency for requests while internal Defender scans are run on these tiers a couple of times a day. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal Defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.

## Redis-benchmark examples
@@ -90,7 +90,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
## Example performance benchmark data

-The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions may be lower.
+The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions might be lower.
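To make that methodology concrete, here's a hedged sketch of the kind of `redis-benchmark` invocation described above. The host name, access key, and parameter values are placeholders and assumptions, not the exact configuration used to produce these tables.

```bash
# Hypothetical GET-only run with 1-kB values, 50 clients, and pipelining,
# against the non-TLS port (6379) of a Basic, Standard, or Premium cache.
# Replace the host name and access key with your own cache's values.
redis-benchmark -h yourcache.redis.cache.windows.net -p 6379 -a YOUR_ACCESS_KEY \
  -t GET -d 1024 -n 1000000 -c 50 -P 10
```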

The following configuration was used to benchmark throughput for the Basic, Standard, and Premium tiers:
@@ -146,7 +146,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
#### Enterprise Cluster Policy

-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)|`GET` requests per second without SSL (1-kB value size) |`GET` requests per second with SSL (1-kB value size) |
|:---:| --- | ---:|---:| ---:| ---:|
| E10 | 12 GB | 4 | 4,000 | 300,000 | 207,000 |
| E20 | 25 GB | 4 | 4,000 | 680,000 | 480,000 |
@@ -158,7 +158,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
#### OSS Cluster Policy

-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)|`GET` requests per second without SSL (1-kB value size) |`GET` requests per second with SSL (1-kB value size) |
@@ -172,7 +172,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
In addition to scaling up by moving to a larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-out---enterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has a capacity of two, meaning a primary and a replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found on the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.

-The following tables show the GET requests per second at different capacities, using SSL and a 1-kB value size.
+The following tables show the `GET` requests per second at different capacities, using SSL and a 1-kB value size.
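For the SSL scenario these capacity tables describe, a hedged example invocation is sketched below. The host name, access key, and parameter values are placeholders; the `--tls` option requires a redis-benchmark build from Redis 6 or later with TLS support.

```bash
# Hypothetical GET-only run over TLS with 1-kB values against an Enterprise-tier
# cache, which listens on port 10000. Replace the host name and access key.
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a YOUR_ACCESS_KEY \
  --tls -t GET -d 1024 -n 1000000 -c 50 -P 10
```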