Commit cd40ac6

Update cache-best-practices-performance.md
1 parent adb643f commit cd40ac6

File tree

1 file changed: +7 -7 lines changed


articles/azure-cache-for-redis/cache-best-practices-performance.md

Lines changed: 7 additions & 7 deletions
@@ -5,7 +5,7 @@ description: Learn how to test the performance of Azure Cache for Redis.
 author: flang-msft
 ms.service: cache
 ms.topic: conceptual
-ms.date: 06/19/2023
+ms.date: 07/01/2024
 ms.author: franlanglois
 ---

@@ -17,7 +17,7 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m

 ## How to use the redis-benchmark utility

-1. Install open source Redis server to a client VM you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.
+1. Install open source Redis server to a client virtual machine (VM) you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.

 1. The client VM used for testing should be _in the same region_ as your Azure Cache for Redis instance.

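For readers following these steps, a minimal sketch of installing the utility on a Debian/Ubuntu client VM and running a first smoke test. The hostname and access key below are placeholders, and this direct connection assumes the non-TLS port 6379 is enabled on the cache; the linked Redis documentation covers installation on other platforms.

```bash
# Install the open source Redis tools, which include redis-benchmark (Debian/Ubuntu).
sudo apt-get update
sudo apt-get install -y redis-tools

# Quick smoke test against the cache (placeholder hostname and access key).
# Requires the non-TLS port 6379 to be enabled on the Azure Cache for Redis instance.
redis-benchmark -h yourcache.redis.cache.windows.net -p 6379 -a YourAccessKey \
  -t GET,SET -n 100000 -d 1024 -q
```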
@@ -49,7 +49,7 @@ Fortunately, several tools exist to make benchmarking Redis easier. Two of the m

 - On the Premium tier, scaling out, clustering, is typically recommended before scaling up. Clustering allows Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.

-- On _C0_ and _C1_ caches, you might see short spikes in server load not caused by an increase in requests a couple times a day while internal defender scanning is running on the VMs. You see higher latency for requests while internal defender scans happen on these tiers. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
+- On _C0_ and _C1_ caches, while internal Defender scanning is running on the VMs, you might see short spikes in server load that aren't caused by an increase in cache requests. You see higher latency for requests while internal Defender scans are run on these tiers a couple of times a day. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal Defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.

 ## Redis-benchmark examples

@@ -90,7 +90,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a

 ## Example performance benchmark data

-The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions may be lower.
+The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions might be lower.

 The following configuration was used to benchmark throughput for the Basic, Standard, and Premium tiers:

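As a rough illustration of the kind of GET-focused run these numbers come from, a hedged sketch follows. The exact configuration the article used is listed in the full file and isn't shown in this diff; the hostname, key, client count, and pipeline depth below are illustrative only.

```bash
# Illustrative GET-only throughput run with 1-kB values, pipelining, and multiple clients.
# Client count (-c) and pipeline depth (-P) are example values, not the article's exact settings.
redis-benchmark -h yourcache.redis.cache.windows.net -p 6379 -a YourAccessKey \
  -t GET -d 1024 -n 1000000 -c 50 -P 10 -q
```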
@@ -146,7 +146,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a

 #### Enterprise Cluster Policy

-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
 |:---:| --- | ---:|---:| ---:| ---:|
 | E10 | 12 GB | 4 | 4,000 | 300,000 | 207,000 |
 | E20 | 25 GB | 4 | 4,000 | 680,000 | 480,000 |
@@ -158,7 +158,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a

 #### OSS Cluster Policy

-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
 |:---:| --- | ---:|---:| ---:| ---:|
 | E10 | 12 GB | 4 | 4,000 | 1,400,000 | 1,000,000 |
 | E20 | 25 GB | 4 | 4,000 | 1,200,000 | 900,000 |
@@ -172,7 +172,7 @@ redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a

 In addition to scaling up by moving to larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-out---enterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has capacity of two--meaning a primary and replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found at the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.

-The following tables show the GET requests per second at different capacities, using SSL and a 1-kB value size.
+The following tables show the `GET` requests per second at different capacities, using SSL and a 1-kB value size.

 #### Scaling out - Enterprise cluster policy

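To sanity-check throughput after changing capacity, the same benchmark pattern the article uses for Enterprise caches (port 10000) applies. A hedged sketch, assuming a redis-benchmark build with TLS support (Redis 6 or later), since the capacity tables above were measured with SSL; the hostname and key are placeholders.

```bash
# Hedged sketch: re-run the GET benchmark against an Enterprise cache after scaling out.
# --tls requires redis-benchmark built with TLS support (Redis 6+); values shown are illustrative.
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 \
  -a YourAccessKey --tls -t GET -d 1024 -n 1000000 -c 50 -P 10 -q
```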