The following tables show the maximum throughput we observed while testing various sizes of Azure Managed Redis instances with a workload of all read commands and a 1-kB payload. The workload is the same across all SKUs, except for the connection count (that is, the thread and client counts passed to `memtier_benchmark`). The connection count is chosen per SKU to utilize the Azure Managed Redis instance optimally. We ran `memtier_benchmark` from an IaaS Azure VM against the Azure Managed Redis endpoint, using the commands shown in the [memtier_benchmark examples](best-practices-performance.md#memtier_benchmark-examples) section. The throughput numbers are for GET commands only; SET commands typically have lower throughput. Real-world performance varies based on Redis configuration and commands. These numbers are provided as a point of reference, not a guarantee.

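To make the methodology concrete, the following is a minimal sketch of a GET-only `memtier_benchmark` run, not the exact command used to produce these tables. The host name, port, access key, and thread/client counts are placeholders; tune `--threads` and `--clients` per SKU (threads x clients = total connections).

```bash
# Minimal sketch of a GET-only benchmark against an Azure Managed Redis endpoint (placeholder values).
# --ratio=0:1 issues only GET commands, --data-size=1024 uses a 1-kB payload, and TLS is enabled.
memtier_benchmark \
  --server=<your-cache-name>.<region>.redis.azure.net \
  --port=10000 \
  --authenticate=<access-key> \
  --tls --tls-skip-verify \
  --threads=6 \
  --clients=50 \
  --ratio=0:1 \
  --data-size=1024 \
  --key-pattern=R:R \
  --test-time=300 \
  --hide-histogram
```
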
>[!CAUTION]
>These values aren't guaranteed, and there's no SLA for these numbers. We strongly recommend that you [perform your own performance testing](best-practices-performance.md) to determine the right cache size for your application.
>Performance can vary for many reasons, such as different connection counts, payload sizes, and the commands that are executed.
>

>[!IMPORTANT]
>Microsoft periodically updates the underlying VMs used in cache instances. This can change the performance characteristics from cache to cache and from region to region. The example benchmarking values on this page reflect a particular generation of cache hardware in a single region. You might see different results in practice, especially with network bandwidth.
>

Azure Managed Redis offers a choice of two cluster policies: _Enterprise_ and _OSS_. The Enterprise cluster policy is a simpler configuration that doesn't require the client to support clustering. The OSS cluster policy, on the other hand, uses the [Redis cluster protocol](https://redis.io/docs/management/scaling) to support higher throughput. We recommend the OSS cluster policy in most cases, especially when you require high performance. For more information, see [Clustering](architecture.md#clustering).
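
When an instance uses the OSS cluster policy, the benchmarking client must speak the Redis cluster protocol. With `memtier_benchmark`, that means adding the `--cluster-mode` option; the rest of the command below is the same placeholder sketch shown earlier.

```bash
# Sketch of a GET-only benchmark against an instance that uses the OSS cluster policy (placeholder values).
# --cluster-mode tells memtier_benchmark to use the Redis cluster protocol (hash slots and MOVED redirects).
memtier_benchmark \
  --server=<your-cache-name>.<region>.redis.azure.net \
  --port=10000 \
  --authenticate=<access-key> \
  --tls --tls-skip-verify \
  --cluster-mode \
  --threads=6 \
  --clients=50 \
  --ratio=0:1 \
  --data-size=1024 \
  --test-time=300 \
  --hide-histogram
```
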
The following tables show throughput in `GET` operations per second with a 1-kB payload for Azure Managed Redis (AMR) instances, along with the number of client connections used for benchmarking. All numbers were measured for AMR instances over SSL connections, and network bandwidth is recorded in Mbps.

The following table lists the connection counts, in terms of the `memtier_benchmark` thread and client counts, that were used to produce the throughput numbers. As mentioned earlier, changing the connection count can result in different performance.

| Size in GB | Clients/Threads/Connection count for Memory Optimized | Clients/Threads/Connection count for Balanced | Clients/Threads/Connection count for Compute Optimized |
|:---:|---:|---:|---:|

### Memory Optimized Tier

#### OSS Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |
|:---:|:---|---:|---:|---:|---:|
| M10 | 12 GB | 2 | 2,000 | TBD | TBD |
| M20 | 24 GB | 4 | 4,000 | TBD | TBD |
| M50 | 60 GB | 8 | 8,000 | TBD | TBD |
| M100 | 120 GB | 16 | 10,000 | TBD | TBD |
| M150 | 180 GB | 24 | 24,000 | TBD | TBD |
| M250 | 240 GB | 32 | 16,000 | TBD | TBD |
| M350 | 360 GB | 48 | 24,000 | TBD | TBD |
| M500 | 480 GB | 64 | 32,000 | TBD | TBD |
| M700 | 720 GB | 96 | 32,000 | TBD | TBD |
| M1000 | 960 GB | 128 | 64,000 | TBD | TBD |
| M1500 | 1,440 GB | 192 | 96,000 | TBD | TBD |
| M2000 | 1,920 GB | 256 | 128,000 | TBD | TBD |
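
When you compare measured throughput against the expected network bandwidth column, keep in mind that a 1-kB payload implies roughly `ops_per_sec * 1024 * 8 / 1,000,000` Mbps of payload transfer before any protocol overhead. The following is a rough back-of-the-envelope conversion; the throughput value is hypothetical.

```bash
# Rough conversion from GET operations/sec (1-kB payload) to approximate payload bandwidth in Mbps.
# RESP framing, key names, and TCP/TLS overhead are ignored, so treat the result as a lower bound.
ops_per_sec=500000     # hypothetical measured throughput
payload_bytes=1024     # 1-kB value size used in these benchmarks
echo "$(( ops_per_sec * payload_bytes * 8 / 1000000 )) Mbps (approximate)"
```
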
#### Enterprise Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |
|:---:|:---|---:|---:|---:|---:|
| M10 | 12 GB | 2 | 2,000 | TBD | TBD |
| M20 | 24 GB | 4 | 4,000 | TBD | TBD |
| M50 | 60 GB | 8 | 8,000 | TBD | TBD |
| M100 | 120 GB | 16 | 10,000 | TBD | TBD |
| M150 | 180 GB | 24 | 8,000 | TBD | TBD |
| M250 | 240 GB | 32 | 16,000 | TBD | TBD |
| M350 | 360 GB | 48 | 24,000 | TBD | TBD |
| M500 | 480 GB | 64 | 32,000 | TBD | TBD |
| M700 | 720 GB | 96 | 32,000 | TBD | TBD |
| M1000 | 960 GB | 128 | 32,000 | TBD | TBD |
| M1500 | 1,440 GB | 192 | 32,000 | TBD | TBD |
| M2000 | 1,920 GB | 256 | 32,000 | TBD | TBD |

### Balanced (Compute + Memory) Tier

#### OSS Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |
|:---:|:---|---:|---:|---:|---:|
| B0 | 500 MB | 2 | 5,000 | TBD | TBD |
| B1 | 1 GB | 2 | 5,000 | TBD | TBD |
| B3 | 3 GB | 2 | 2,000 | TBD | TBD |
| B5 | 6 GB | 2 | 2,000 | TBD | TBD |
| B10 | 12 GB | 4 | 4,000 | TBD | TBD |
| B20 | 24 GB | 8 | 8,000 | TBD | TBD |
| B50 | 60 GB | 16 | 10,000 | TBD | TBD |
| B100 | 120 GB | 32 | 16,000 | TBD | TBD |
| B150 | 180 GB | 48 | 24,000 | TBD | TBD |
| B250 | 240 GB | 64 | 32,000 | TBD | TBD |
| B350 | 360 GB | 96 | 40,000 | TBD | TBD |
| B500 | 480 GB | 128 | 64,000 | TBD | TBD |
| B700 | 720 GB | 192 | 80,000 | TBD | TBD |
| B1000 | 960 GB | 256 | 128,000 | TBD | TBD |

#### Enterprise Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |
|:---:|:---|---:|---:|---:|---:|
| B0 | 500 MB | 2 | 5,000 | TBD | TBD |
| B1 | 1 GB | 2 | 5,000 | TBD | TBD |
| B3 | 3 GB | 2 | 2,000 | TBD | TBD |
| B5 | 6 GB | 2 | 2,000 | TBD | TBD |
| B10 | 12 GB | 4 | 4,000 | TBD | TBD |
| B20 | 24 GB | 8 | 8,000 | TBD | TBD |
| B50 | 60 GB | 16 | 10,000 | TBD | TBD |
| B100 | 120 GB | 32 | 16,000 | TBD | TBD |
| B150 | 180 GB | 48 | 24,000 | TBD | TBD |
| B250 | 240 GB | 64 | 32,000 | TBD | TBD |
| B350 | 360 GB | 96 | 40,000 | TBD | TBD |
| B500 | 480 GB | 128 | 32,000 | TBD | TBD |
| B700 | 720 GB | 192 | 40,000 | TBD | TBD |
| B1000 | 960 GB | 256 | 32,000 | TBD | TBD |

### Compute Optimized Tier

#### OSS Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |
|:---:|:---|---:|---:|---:|---:|
| X3 | 3 GB | 4 | 10,000 | TBD | TBD |
| X5 | 6 GB | 4 | 10,000 | TBD | TBD |
| X10 | 12 GB | 8 | 12,500 | TBD | TBD |
| X20 | 24 GB | 16 | 12,500 | TBD | TBD |
| X50 | 60 GB | 32 | 16,000 | TBD | TBD |
| X100 | 120 GB | 64 | 28,000 | TBD | TBD |
| X150 | 180 GB | 96 | 35,000 | TBD | TBD |
| X250 | 240 GB | 128 | 56,000 | TBD | TBD |
| X350 | 360 GB | 192 | 70,000 | TBD | TBD |
| X500 | 480 GB | 256 | 112,000 | TBD | TBD |
| X700 | 720 GB | 320 | 140,000 | TBD | TBD |

#### Enterprise Cluster Policy

| Instance | Size | vCPUs | Expected network bandwidth (Mbps) | `GET` requests per second without TLS (1-kB value size) | `GET` requests per second with TLS (1-kB value size) |