articles/azure-cache-for-redis/cache-management-faq.yml (2 additions, 2 deletions)
@@ -126,7 +126,7 @@ sections:
### Recommendation

-Given this information, we strongly recommend that customers set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. We can't give one-size-fits-all guidance on what this value should be because the right value for one application will likely be too high or low for another application. This setting can also affect the performance of other parts of complicated applications, so each customer needs to fine-tune this setting to their specific needs. A good starting place is 200 or 300, then test and tweak as needed.
+We recommend that customers set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. We can't give one-size-fits-all guidance on this value because the right value for one application is likely to be too high or too low for another application. This setting can also affect the performance of other parts of complicated applications. Each customer needs to fine-tune this setting to their specific needs. A good starting place is 200 or 300, then test and tweak as needed.

How to configure this setting:
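For illustration, here is a minimal C# sketch of that configuration made at application startup with `ThreadPool.SetMinThreads`; the 200/200 values are only the starting point suggested above, not a recommendation for every workload.

```csharp
using System;
using System.Threading;

// Read the current minimums (they default to the processor count), then raise them.
// 200/200 mirrors the "200 or 300" starting point above — illustrative, not prescriptive.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
Console.WriteLine($"Current minimums: worker={minWorker}, IOCP={minIocp}");

if (!ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200))
{
    Console.WriteLine("SetMinThreads rejected the requested values.");
}
```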
@@ -165,7 +165,7 @@ sections:
Setting the number of minimum threads in this way isn't recommended because it's a System-wide setting. If you do set it this way, you must restart the application pool for the change to take effect.

> [!NOTE]
-> The value specified in this configuration element is a _per-core_ setting. For example, if you have a 4-core machine and want your `minIoThreads`` setting to be 200 at runtime, you would use `processModel minIoThreads="50"`.
+> The value specified in this configuration element is a _per-core_ setting. For example, if you have a 4-core machine and want your `minIoThreads` setting to be 200 at runtime, you would use `<processModel minIoThreads="50"/>`.
articles/redis/management-faq.yml (37 additions, 33 deletions)
@@ -27,55 +27,55 @@ sections:
- question: |
What are some production best practices?
answer: |
-* [StackExchange.Redis best practices](#stackexchangeredis-best-practices)
-* [Configuration and concepts](#configuration-and-concepts)
-* [Performance testing](#performance-testing)
+- [StackExchange.Redis best practices](#stackexchangeredis-best-practices)
+- [Configuration and concepts](#configuration-and-concepts)
+- [Performance testing](#performance-testing)

### StackExchange.Redis best practices

-* Set `AbortConnect` to false, then let the ConnectionMultiplexer reconnect automatically.
-* Use a single, long-lived `ConnectionMultiplexer` instance rather than creating a new connection for each request.
-* Redis works best with smaller values, so consider chopping up bigger data into multiple keys. In [the Redis discussion](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ), 100 kb is considered large. For more information, see [Best practices development](best-practices-development.md#consider-more-keys-and-smaller-values).
-* Configure your [ThreadPool settings](#important-details-about-threadpool-growth) to avoid timeouts.
-* Use at least the default connectTimeout of 5 seconds. This interval gives StackExchange.Redis sufficient time to re-establish the connection if there's a network blip.
-* Be aware of the performance costs associated with different operations you're running. For instance, the `KEYS` command is an O(n) operation and should be avoided. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
+- Set `AbortConnect` to false, then let the ConnectionMultiplexer reconnect automatically.
+- Use a single, long-lived `ConnectionMultiplexer` instance rather than creating a new connection for each request (see the sketch after this list).
+- Redis works best with smaller values, so consider chopping up bigger data into multiple keys. In [the Redis discussion](https://groups.google.com/forum/#!searchin/redis-db/size/redis-db/n7aa2A4DZDs/3OeEPHSQBAAJ), 100 kb is considered large. For more information, see [Best practices development](best-practices-development.md#consider-more-keys-and-smaller-values).
+- Configure your [ThreadPool settings](#important-details-about-threadpool-growth) to avoid timeouts.
+- Use at least the default connectTimeout of 5 seconds. This interval gives StackExchange.Redis sufficient time to re-establish the connection if there's a network blip.
+- Be aware of the performance costs associated with different operations you're running. For instance, the `KEYS` command is an O(n) operation and should be avoided. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
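As a minimal sketch of the first two bullets above, a single shared `ConnectionMultiplexer` held in a `Lazy<T>` with `abortConnect=False` and the default `connectTimeout` might look like the following; the host name and access key are placeholders.

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One long-lived multiplexer shared by the whole application.
    // The host name and access key are placeholders; substitute your own cache values.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection = new(() =>
        ConnectionMultiplexer.Connect(
            "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False,connectTimeout=5000"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}
```

Callers then reuse `RedisConnection.Connection.GetDatabase()` for every request instead of connecting per request.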
### Configuration and concepts
-* Remember that Redis is an **In-Memory** data store. For more information, see [Troubleshoot data loss in Azure Managed Redis](troubleshoot-data-loss.md) so that you're aware of scenarios where data loss can occur.
-* Develop your system such that it can handle connection blips caused by [patching and failover](failover.md).
+- Remember that Redis is an _In-Memory_ data store. For more information, see [Troubleshoot data loss in Azure Managed Redis](troubleshoot-data-loss.md) so that you're aware of scenarios where data loss can occur.
+- Develop your system such that it can handle connection blips caused by [patching and failover](failover.md) (see the sketch after this list).
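One possible sketch of tolerating such blips is a small retry wrapper around a read; the endpoint, key name, attempt count, and delay below are illustrative assumptions, and production code may prefer a dedicated retry library.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

// "localhost" and "greeting" are placeholders; tune attempts and backoff for your workload.
var muxer = await ConnectionMultiplexer.ConnectAsync("localhost,abortConnect=False");
IDatabase db = muxer.GetDatabase();

Console.WriteLine(await GetWithRetryAsync(db, "greeting", attempts: 3));

static async Task<RedisValue> GetWithRetryAsync(IDatabase db, string key, int attempts)
{
    for (int i = 1; ; i++)
    {
        try
        {
            return await db.StringGetAsync(key);
        }
        catch (Exception ex) when ((ex is RedisConnectionException || ex is RedisTimeoutException) && i < attempts)
        {
            await Task.Delay(TimeSpan.FromMilliseconds(250 * i)); // brief backoff before the next attempt
        }
    }
}
```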
### Performance testing
-* See [Performance testing with Azure Managed Redis](best-practices-performance.md) for example benchmarks and instructions for running your own performance tests on Azure Managed Redis.
+- See [Performance testing with Azure Managed Redis](best-practices-performance.md) for example benchmarks and instructions for running your own performance tests on Azure Managed Redis.
- question: |
What are some of the considerations when using common Redis commands?
answer: |
-* Avoid using certain Redis commands that take a long time to complete, unless you fully understand the result of these commands. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production. Depending on the number of keys, it could take a long time to return. Each Redis shard is a single-threaded, and it processes commands one at a time. If you have other commands issued after KEYS, they're not be processed until Redis processes the KEYS command. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
-* Key sizes - should I use small key/values or large key/values? It depends on the scenario. If your scenario requires larger keys, you can adjust the ConnectionTimeout, then retry values and adjust your retry logic. From a Redis server perspective, smaller values give better performance.
-* These considerations don't mean that you can't store larger values in Redis; you must be aware of the following considerations. Latencies are higher. If you have one set of data that is larger and one that is smaller, you can use multiple ConnectionMultiplexer instances. Configure each with a different set of timeout and retry values, as described in the previous [What do the StackExchange.Redis configuration options do](development-faq.yml#what-do-the-stackexchange-redis-configuration-options-do-) section.
+- Avoid using certain Redis commands that take a long time to complete, unless you fully understand the result of these commands. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production (see the sketch after this list). Depending on the number of keys, it could take a long time to return. Each Redis shard is single-threaded and processes commands one at a time. If other commands are issued after KEYS, they aren't processed until Redis finishes processing the KEYS command. The [redis.io site](https://redis.io/commands/) has details around the time complexity for each operation that it supports. Select each command to see the complexity for each operation.
+- Key sizes - should I use small key/values or large key/values? It depends on the scenario. If your scenario requires larger keys, you can adjust the ConnectionTimeout and retry values, and adjust your retry logic. From a Redis server perspective, smaller values give better performance.
+- These considerations don't mean that you can't store larger values in Redis, but you must be aware that latencies will be higher. If you have one set of data that is larger and one that is smaller, you can use multiple ConnectionMultiplexer instances. Configure each with a different set of timeout and retry values, as described in the previous [What do the StackExchange.Redis configuration options do](development-faq.yml#what-do-the-stackexchange-redis-configuration-options-do-) section.
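As a sketch of the paged alternative mentioned in the first bullet, `IServer.Keys` in StackExchange.Redis enumerates the keyspace incrementally (using SCAN where the server supports it); the endpoint, access key, and key pattern below are placeholders.

```csharp
using System;
using StackExchange.Redis;

// Enumerate keys in pages instead of issuing a blocking KEYS command.
// Endpoint, password, and the "orders:*" pattern are placeholders for illustration.
var muxer = ConnectionMultiplexer.Connect("contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True");
IServer server = muxer.GetServer(muxer.GetEndPoints()[0]);

foreach (RedisKey key in server.Keys(pattern: "orders:*", pageSize: 250))
{
    Console.WriteLine(key); // pages are fetched lazily, so other commands can interleave
}
```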
- question: |
How can I benchmark and test the performance of my cache?
answer: |
-* Enable cache diagnostics so you can [monitor](monitor-cache.md) the health of your cache. You can view the metrics in the Azure portal and you can also [download and review](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) them using the tools of your choice.
-* See [Performance testing with Azure Managed Redis](best-practices-performance.md) for example benchmarks and instructions for running your own performance tests on Azure Managed Redis.
+- Enable cache diagnostics so you can [monitor](monitor-cache.md) the health of your cache. You can view the metrics in the Azure portal and you can also [download and review](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) them using the tools of your choice.
+- See [Performance testing with Azure Managed Redis](best-practices-performance.md) for example benchmarks and instructions for running your own performance tests on Azure Managed Redis.
- question: |
Important details about ThreadPool growth
answer: |
The CLR ThreadPool has two types of threads - "Worker" and "I/O Completion Port" (IOCP) threads.

-* Worker threads are used for things like processing the `Task.Run(…)`, or `ThreadPool.QueueUserWorkItem(…)` methods. These threads are also used by various components in the CLR when work needs to happen on a background thread.
-* IOCP threads are used when asynchronous IO happens, such as when reading from the network.
+- Worker threads are used for things like processing the `Task.Run(…)` or `ThreadPool.QueueUserWorkItem(…)` methods. These threads are also used by various components in the CLR when work needs to happen on a background thread.
+- IOCP threads are used when asynchronous IO happens, such as when reading from the network.
The thread pool provides new worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.
Once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool throttles the rate at which it injects new threads to one thread per 500 milliseconds. Typically, if your system gets a burst of work needing an IOCP thread, it processes that work quickly. However, if the burst is more than the configured "Minimum" setting, there's some delay in processing some of the work as the ThreadPool waits for one of two possibilities:

-* An existing thread becomes free to process the work.
-* No existing thread becomes free for 500 ms and a new thread is created.
+- An existing thread becomes free to process the work.
+- No existing thread becomes free for 500 ms and a new thread is created.

Basically, when the number of Busy threads is greater than Min threads, you're likely paying a 500-ms delay before network traffic is processed by the application. Also, when an existing thread stays idle for longer than 15 seconds, it gets cleaned up and this cycle of growth and shrinkage can repeat.
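The following sketch snapshots those counters; the busy figure is derived as max minus available, which is roughly how the WORKER and IOCP numbers reported in StackExchange.Redis timeout messages are computed.

```csharp
using System;
using System.Threading;

// Snapshot the ThreadPool state described above. When the busy count for worker or IOCP
// threads exceeds the configured minimum, new threads are injected at roughly one per
// 500 ms, which is where the processing delay comes from.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
ThreadPool.GetMaxThreads(out int maxWorker, out int maxIocp);
ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIocp);

Console.WriteLine($"Worker: busy={maxWorker - freeWorker}, min={minWorker}, max={maxWorker}");
Console.WriteLine($"IOCP:   busy={maxIocp - freeIocp}, min={minIocp}, max={maxIocp}");
```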
@@ -95,11 +95,11 @@ sections:
### Recommendation

-Given this information, we strongly recommend that customers set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. We can't give one-size-fits-all guidance on what this value should be because the right value for one application can be too high or low for another application. This setting can also affect the performance of other parts of complicated applications, so each customer needs to fine-tune this setting to their specific needs. A good starting place is 200 or 300, then test and tweak as needed.
+We strongly recommend that customers set the minimum configuration value for IOCP and WORKER threads to something larger than the default value. We can't give one-size-fits-all guidance on this value because the right value for one application can be too high or too low for another application. This setting can also affect the performance of other parts of complicated applications. Each customer needs to fine-tune this setting to their specific needs. A good starting place is 200 or 300, then test and tweak as needed.

How to configure this setting:

-* We recommend changing this setting programmatically by using the [ThreadPool.SetMinThreads (...)](/dotnet/api/system.threading.threadpool.setminthreads#System_Threading_ThreadPool_SetMinThreads_System_Int32_System_Int32_) method in .NET Framework and .NET Core applications.
+- We recommend changing this setting programmatically by using the [ThreadPool.SetMinThreads (...)](/dotnet/api/system.threading.threadpool.setminthreads#System_Threading_ThreadPool_SetMinThreads_System_Int32_System_Int32_) method in .NET Framework and .NET Core applications.

For example, in .NET Framework, you set it in `Global.asax.cs` in the `Application_Start` method:
@@ -125,32 +125,36 @@ sections:
var builder = WebApplication.CreateBuilder(args);
// rest of application setup
```
-> [!NOTE]
-> The value specified by this method is a global setting, affecting the whole AppDomain. For example, if you have a machine with four cores and want to set *minWorkerThreads* and *minIoThreads* to 50 per CPU during run-time, use **ThreadPool.SetMinThreads(200, 200)**.
+> [!NOTE]
+> The value specified by this method is a global setting, affecting the whole AppDomain. For example, if you have a machine with four cores and want to set `minWorkerThreads` and `minIoThreads` to 50 per CPU at runtime, use `ThreadPool.SetMinThreads(200, 200)`.
+>

-* It's also possible to specify the minimum threads setting by using the [*minIoThreads* or *minWorkerThreads* configuration setting](/previous-versions/dotnet/netframework-4.0/7w2sway1(v=vs.100)) under the `<processModel>` configuration element in `Machine.config`. `Machine.config` is typically located at `%SystemRoot%\Microsoft.NET\Framework\[versionNumber]\CONFIG\`. **Setting the number of minimum threads in this way isn't recommended because it's a System-wide setting.**
+- It's also possible to specify the minimum threads setting by using the `minIoThreads` or `minWorkerThreads` [configuration setting](/previous-versions/dotnet/netframework-4.0/7w2sway1(v=vs.100)) under the `<processModel>` configuration element in `Machine.config`. `Machine.config` is typically located at `%SystemRoot%\Microsoft.NET\Framework\[versionNumber]\CONFIG\`.
+
+Setting the number of minimum threads in this way isn't recommended because it's a System-wide setting.
> [!NOTE]
-> The value specified in this configuration element is a *per-core* setting. For example, if you have a machine with four cores and want your *minIoThreads* setting to be 200 at runtime, you would use `<processModel minIoThreads="50"/>`.
+> The value specified in this configuration element is a _per-core_ setting. For example, if you have a machine with four cores and want your `minIoThreads` setting to be 200 at runtime, you would use `<processModel minIoThreads="50"/>`.
>
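Because the notes above mix a machine-wide API with a per-core configuration value, this small sketch spells out the arithmetic; the 200-thread target is illustrative.

```csharp
using System;

// <processModel> values are per-core, while ThreadPool.SetMinThreads takes machine-wide totals.
// For an illustrative target of 200 minimum IOCP threads in total:
int desiredTotal = 200;
int perCoreValue = desiredTotal / Environment.ProcessorCount; // 50 on a 4-core machine

Console.WriteLine($"Machine.config: minIoThreads=\"{perCoreValue}\"  <=>  ThreadPool.SetMinThreads(..., {desiredTotal})");
```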
- question: |
Enable server GC to get more throughput on the client when using StackExchange.Redis
answer: |
Enabling server GC can optimize the client and provide better performance and throughput when using StackExchange.Redis. For more information on server GC and how to enable it, see the following articles:

-* [To enable server GC](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
-* [Fundamentals of Garbage Collection](/dotnet/standard/garbage-collection/fundamentals)
-* [Garbage Collection and Performance](/dotnet/standard/garbage-collection/performance)
+- [To enable server GC](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
+- [Fundamentals of Garbage Collection](/dotnet/standard/garbage-collection/fundamentals)
+- [Garbage Collection and Performance](/dotnet/standard/garbage-collection/performance)
- question: |
Performance considerations around connections
answer: |
-Different SKUs might have different limits for client connections, memory, and bandwidth. While each size of cache allows *up to* some number of connections, each connection to Redis has overhead associated with it. An example of such overhead would be CPU and memory usage because of TLS/SSL encryption. The maximum connection limit for a given cache size assumes a lightly loaded cache. If load from connection overhead plus load from client operations exceeds capacity for the system, the cache can experience capacity issues even if you don't exceed the connection limit for the current cache size.
+Different SKUs might have different limits for client connections, memory, and bandwidth. While each size of cache allows up to some number of connections, each connection to Redis has overhead associated with it. An example of such overhead would be CPU and memory usage because of TLS/SSL encryption. The maximum connection limit for a given cache size assumes a lightly loaded cache. If load from connection overhead plus load from client operations exceeds capacity for the system, the cache can experience capacity issues even if you don't exceed the connection limit for the current cache size.

For more information about the different connection limits for each tier, see [Azure Managed Redis pricing](https://azure.microsoft.com/pricing/details/cache/). For more information about connections and other default configurations, see [Default Redis server configuration](configure.md#default-redis-server-configuration).
additionalContent: |
-## Next steps
-Learn about other [Azure Managed Redis FAQs](faq.yml).
+## Related content
+
+- Learn about other [Azure Managed Redis FAQs](faq.yml).