articles/azure-cache-for-redis/cache-management-faq.yml

sections:
- question: |
What are some production best practices?
answer: |
- [StackExchange.Redis best practices](#stackexchangeredis-best-practices)
- [Configuration and concepts](#configuration-and-concepts)
- [Performance testing](#performance-testing)
### StackExchange.Redis best practices
- Set `AbortConnect` to false, then let the `ConnectionMultiplexer` reconnect automatically.
- Use a single, long-lived `ConnectionMultiplexer` instance rather than creating a new connection for each request (see the sketch after this list).
- Redis works best with smaller values, so consider chopping up bigger data into multiple keys. In [this Redis discussion](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ), 100 KB is considered large. For more information, see [Best practices development](cache-best-practices-development.md#consider-more-keys-and-smaller-values).
- Configure your [ThreadPool settings](#important-details-about-threadpool-growth) to avoid timeouts.
- Use at least the default connectTimeout of 5 seconds. This interval gives StackExchange.Redis sufficient time to reestablish the connection if there's a network blip.
- Be aware of the performance costs of the operations you run. For instance, the `KEYS` command is an O(N) operation and should be avoided. The [redis.io site](https://redis.io/commands/) lists the time complexity of each command it supports. Select a command to see its complexity.
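To illustrate the first two bullets, here's a minimal sketch of a shared, lazily created connection. The host name and access key are placeholders, and the code is an illustration of the guidance above rather than an excerpt from this article:

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One long-lived multiplexer shared across the whole application.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
        {
            var options = ConfigurationOptions.Parse(
                "<yourcache>.redis.cache.windows.net:6380,password=<yourAccessKey>,ssl=True");
            options.AbortOnConnectFail = false; // abortConnect=false: reconnect in the background
            options.ConnectTimeout = 5000;      // keep at least the 5-second default
            return ConnectionMultiplexer.Connect(options);
        });

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}
```

Callers then reuse the same instance, for example `RedisConnection.Connection.GetDatabase().StringSet("greeting", "hello")`.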
### Configuration and concepts
- Use the Standard or Premium tier for production systems. The Basic tier is a single-node system with no data replication and no SLA. Also, use at least a C1 cache; C0 caches are intended for simple dev/test scenarios.
- Remember that Redis is an _in-memory_ data store. For more information, see [Troubleshoot data loss in Azure Cache for Redis](cache-troubleshoot-data-loss.md) so that you're aware of scenarios where data loss can occur.
- Develop your system so that it can handle connection blips caused by [patching and failover](cache-failover.md) (see the sketch after this list).
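For the last bullet, one hedged approach to handling transient connection blips is a small, bounded retry around each cache call. The helper name and retry values here are illustrative, not from this article:

```csharp
using System;
using System.Threading;
using StackExchange.Redis;

public static class CacheRetry
{
    // Illustrative helper: retry a cache call a few times when a transient
    // connection blip occurs, giving the multiplexer time to reconnect.
    public static T OnConnectionBlip<T>(Func<T> cacheCall, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return cacheCall();
            }
            catch (RedisConnectionException) when (attempt < maxAttempts)
            {
                // Back off briefly before trying again.
                Thread.Sleep(TimeSpan.FromMilliseconds(500 * attempt));
            }
        }
    }
}
```

Usage might look like `CacheRetry.OnConnectionBlip(() => RedisConnection.Connection.GetDatabase().StringGet("greeting"))`, reusing a shared multiplexer such as the one sketched earlier.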
### Performance testing
- Start by using `redis-benchmark.exe` to get a feel for possible throughput before writing your own performance tests. Because `redis-benchmark` doesn't support TLS, you must [enable the non-TLS port through the Azure portal](cache-configure.md#access-ports) before you run the test. For examples, see [How can I benchmark and test the performance of my cache?](#how-can-i-benchmark-and-test-the-performance-of-my-cache-)
- The client VM used for testing should be in the same region as your Azure Cache for Redis instance.
- We recommend using the Dv2 VM series for your client because it has better hardware and should give the best results.
- Make sure the client VM you choose has at least as much compute and bandwidth capability as the cache you're testing.
- Enable VRSS on the client machine if you're on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)).
- Premium-tier Redis instances have better network latency and throughput because they run on better hardware for both CPU and network.
- question: |
What are some of the considerations when using common Redis commands?
answer: |
- Avoid using Redis commands that take a long time to complete unless you fully understand their impact. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production. Depending on the number of keys, it could take a long time to return. Redis is a single-threaded server and processes commands one at a time, so any commands issued after KEYS aren't processed until Redis finishes the KEYS command. The [redis.io site](https://redis.io/commands/) lists the time complexity of each command it supports. Select a command to see its complexity.
- Key sizes: should you use small or large key/values? It depends on the scenario. If your scenario requires larger keys, you can increase the ConnectionTimeout and retry values and adjust your retry logic. From a Redis server perspective, smaller values give better performance.
- These considerations don't mean that you can't store larger values in Redis, but you must be aware of the trade-offs. Latencies are higher. If you have one set of data that is larger and one that is smaller, you can use multiple ConnectionMultiplexer instances, each configured with a different set of timeout and retry values, as described in the [What do the StackExchange.Redis configuration options do](cache-development-faq.yml#what-do-the-stackexchange-redis-configuration-options-do-) section and sketched after this list.
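Here's a minimal sketch of that multiple-multiplexer approach. The host name, access key, and timeout values are illustrative placeholders, not recommendations from this article:

```csharp
using StackExchange.Redis;

public static class RedisConnections
{
    // One multiplexer tuned for small, latency-sensitive values and another with
    // longer timeouts for occasional large payloads. <yourcache> and <yourAccessKey>
    // are placeholders.
    public static readonly ConnectionMultiplexer SmallValues =
        ConnectionMultiplexer.Connect(Configure(connectTimeout: 5000, syncTimeout: 5000));

    public static readonly ConnectionMultiplexer LargeValues =
        ConnectionMultiplexer.Connect(Configure(connectTimeout: 15000, syncTimeout: 15000));

    private static ConfigurationOptions Configure(int connectTimeout, int syncTimeout)
    {
        var options = ConfigurationOptions.Parse(
            "<yourcache>.redis.cache.windows.net:6380,password=<yourAccessKey>,ssl=True,abortConnect=False");
        options.ConnectTimeout = connectTimeout; // milliseconds
        options.SyncTimeout = syncTimeout;       // milliseconds
        return options;
    }
}
```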
- question: |
How can I benchmark and test the performance of my cache?
answer: |
- Enable cache diagnostics so that you can [monitor](../redis/monitor-cache.md) the health of your cache. You can view the metrics in the Azure portal, and you can also [download and review](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) them using the tools of your choice.
- You can use `redis-benchmark.exe` to load test your Redis server.
- Ensure that the load-testing client and the Azure Cache for Redis instance are in the same region.
- Use `redis-cli.exe` and monitor the cache using the `INFO` command.
- If your load is causing high memory fragmentation, scale up to a larger cache size.
- For instructions on downloading the Redis tools, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
Here are some examples of using `redis-benchmark.exe`. Run these commands from a VM in the same region as your cache for accurate results.
- Test pipelined SET requests using a 1-KB payload:

  `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t SET -n 1000000 -d 1024 -P 50`

- Test pipelined GET requests using a 1-KB payload:

  > [!NOTE]
  > Run the SET test shown above first to populate the cache.

  `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t GET -n 1000000 -d 1024 -P 50`
- question: |
Important details about ThreadPool growth
answer: |
The CLR ThreadPool has two types of threads: "Worker" and "I/O Completion Port" (IOCP) threads.
- Worker threads are used for things like processing `Task.Run(…)` or `ThreadPool.QueueUserWorkItem(…)` calls. These threads are also used by various components in the CLR when work needs to happen on a background thread.
- IOCP threads are used when asynchronous IO happens, such as when reading from the network.
The thread pool provides new worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.
Once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool throttles the rate at which it injects new threads to one thread per 500 milliseconds. Typically, if your system gets a burst of work needing an IOCP thread, it processes that work quickly. However, if the burst is larger than the configured "Minimum" setting, there's a delay before some of the work is processed, because the ThreadPool waits for one of two things to happen:
- An existing thread becomes free to process the work.
- No existing thread becomes free for 500 ms, and a new thread is created.
Basically, when the number of busy threads is greater than the minimum thread count, you're likely paying a 500-ms delay before network traffic is processed by the application. Also, when an existing thread stays idle for longer than 15 seconds, it's cleaned up, and this cycle of growth and shrinkage can repeat.
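To check whether you're in this situation, a small diagnostic sketch (not part of this article) can compare the busy thread counts against the configured minimums:

```csharp
using System;
using System.Threading;

public static class ThreadPoolDiagnostics
{
    // Logs busy versus minimum thread counts for both the worker and IOCP pools.
    // If "busy" exceeds "min" for either pool, new thread injection is being throttled.
    public static void LogPressure()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIocp);
        ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIocp);

        int busyWorker = maxWorker - freeWorker;
        int busyIocp = maxIocp - freeIocp;

        Console.WriteLine($"Worker: busy={busyWorker}, min={minWorker}; IOCP: busy={busyIocp}, min={minIocp}");
    }
}
```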
How to configure this setting:
- We recommend changing this setting programmatically by using the [ThreadPool.SetMinThreads(...)](/dotnet/api/system.threading.threadpool.setminthreads#System_Threading_ThreadPool_SetMinThreads_System_Int32_System_Int32_) method.
For example, in .NET Framework, you set it in `Global.asax.cs` in the `Application_Start` method:
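As a sketch of that approach (the class name is illustrative, and the values match the .NET Core example below):

```csharp
using System.Threading;
using System.Web;

public class MvcApplication : HttpApplication // class name is illustrative
{
    protected void Application_Start()
    {
        const int minThreads = 200;
        ThreadPool.SetMinThreads(minThreads, minThreads);

        // ...the rest of your usual Application_Start registration code.
    }
}
```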
If you use .NET Core, you set it in `Program.cs`, just before the call to `WebApplication.CreateBuilder()`:
```csharp
const int minThreads = 200;
ThreadPool.SetMinThreads(minThreads, minThreads);

// rest of application setup (WebApplication.CreateBuilder(args), and so on)
```
> [!NOTE]
> The value specified by this method is a global setting, affecting the whole AppDomain. For example, if you have a 4-core machine and want to set `minWorkerThreads` and `minIoThreads` to 50 per CPU at runtime, use `ThreadPool.SetMinThreads(200, 200)`.
- It's also possible to specify the minimum threads setting by using the `minIoThreads` or `minWorkerThreads` [configuration setting](/previous-versions/dotnet/netframework-4.0/7w2sway1(v=vs.100)) under the `<processModel>` configuration element in `Machine.config`. `Machine.config` is typically located at `%SystemRoot%\Microsoft.NET\Framework\[versionNumber]\CONFIG\`.

  Setting the number of minimum threads in this way isn't recommended because it's a system-wide setting. If you do set it this way, you must restart the application pool for the change to take effect.

  > [!NOTE]
  > The value specified in this configuration element is a _per-core_ setting. For example, if you have a 4-core machine and want your `minIoThreads` setting to be 200 at runtime, you would use `<processModel minIoThreads="50"/>`.
- question: |
Enable server GC to get more throughput on the client when using StackExchange.Redis
answer: |
Enabling server GC can optimize the client and provide better performance and throughput when using StackExchange.Redis. For more information on server GC and how to enable it, see the following articles:
- [To enable server GC](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
- [Fundamentals of Garbage Collection](/dotnet/standard/garbage-collection/fundamentals)
- [Garbage Collection and Performance](/dotnet/standard/garbage-collection/performance)
- question: |
Performance considerations around connections
additionalContent: |
## Related content

- Learn about other [Azure Cache for Redis FAQs](../redis/faq.yml)