
Commit ff408a2 ("touchups")
1 parent bc2ece2

File tree: 1 file changed (+24, -28 lines)


articles/azure-cache-for-redis/cache-management-faq.yml

Lines changed: 24 additions & 28 deletions
@@ -17,39 +17,35 @@ sections:
   - question: |
       How can I benchmark and test the performance of my cache?
     answer: |
-      - Use `redis-benchmark.exe` to load test your Redis server.
-      - Start by using `redis-benchmark.exe` to get a feel for possible throughput before writing your own performance tests.
-      - If you use Transport Layer Security/Secure Socket Layer (TLS/SSL) on your cache instance, add the `--tls` parameter to your `redis-benchmark` command or use a proxy like `stunnel` to enable TLS/SSL.
-      - `Redis-benchmark` uses port `6379` by default. Use the `-p` parameter to override this setting if your cache uses the SSL/TLS port `6380` or the Enterprise tier port `10000`.
-      - If necessary, you can [enable the non-TLS port through the Azure portal](cache-configure.md#access-ports) before you run the test.
-      - The client virtual machine (VM) you use for testing should be in the same region as your Azure Cache for Redis instance.
-      - Use D-series and E-series VMs for your clients for best results.
-      - Make sure your client VM has at least as much computing and bandwidth capability as the cache you're testing.
-      - Enable Virtual Receive-side Scaling (VRSS) on the client machine if you're on Windows. For more information, see [Virtual Receive-side Scaling in Windows Server 2012 R2](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)).
+      - Use `redis-benchmark.exe` to load test your Redis server and to get a feel for possible throughput before writing your own performance tests.
+      - Use `redis-cli` to monitor the cache using the `INFO` command. For instructions on downloading the Redis tools, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
+      - If you use Transport Layer Security/Secure Socket Layer (TLS/SSL) on your cache instance, add the `--tls` parameter to your Redis tools commands, or use a proxy like `stunnel` to enable TLS/SSL.
+      - `Redis-benchmark` uses port `6379` by default. Use the `-p` parameter to override this setting if your cache uses the SSL/TLS port `6380` or the Enterprise tier port `10000`.
+      - If necessary, you can [enable the non-TLS port through the Azure portal](cache-configure.md#access-ports) before you run the load test.
+      - Make sure the client virtual machine (VM) you use for testing is in the same region as your Azure Cache for Redis instance.
+      - Ensure that your client VM has at least as much computing and bandwidth capability as the cache you're testing. For best results, use D-series and E-series VMs for your clients.
+      - If you're on Windows, enable Virtual Receive-side Scaling (VRSS) on the client machine. For more information, see [Virtual Receive-side Scaling in Windows Server 2012 R2](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)).
       - Enable cache diagnostics so you can [monitor](../redis/monitor-cache.md) the health of your cache. You can view the metrics in the Azure portal, and you can also [download and review your metrics](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) using the tools of your choice.
-      - Ensure that the load testing client and the Azure Redis cache are in the same region.
-      - Use `redis-cli` to monitor the cache using the `INFO` command.
       - If your load is causing high memory fragmentation, scale up to a larger cache size.
-      - For instructions on downloading the Redis tools, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)

-      Here are some examples of using `redis-benchmark.exe`. Run these commands from a VM in the same region as your cache for accurate results.
+      The following examples show how to use `redis-benchmark.exe`. Run these commands from a VM in the same region as your cache for accurate results.

-      - First, test pipelined `SET` requests using a 1k payload:
+      First, test pipelined `SET` requests using a 1k payload:

-        `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t SET -n 1000000 -d 1024 -P 50`
+      `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t SET -n 1000000 -d 1024 -P 50`

-      - After you run the `SET` test, run pipelined `GET` requests using a 1k payload:
+      After you run the `SET` test, run pipelined `GET` requests using a 1k payload:

-        `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t GET -n 1000000 -d 1024 -P 50`
+      `redis-benchmark.exe -h <yourcache>.redis.cache.windows.net -a <yourAccesskey> -t GET -n 1000000 -d 1024 -P 50`
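The benchmark options the FAQ describes (`-h`, `-a`, `-t`, `-n`, `-d`, `-P`, plus `-p` and `--tls` for the TLS port `6380`) compose into a single command line. A minimal sketch of how they fit together; the helper function is hypothetical, not part of the docs or of redis-benchmark:

```python
# Sketch: assemble a redis-benchmark command line from the options the FAQ
# describes (-h host, -a key, -t test, -n requests, -d payload, -P pipeline,
# -p port, --tls). The builder function is a hypothetical helper.
def build_benchmark_cmd(host, access_key, test="SET", port=6380, tls=True,
                        requests=1_000_000, payload=1024, pipeline=50):
    cmd = ["redis-benchmark.exe", "-h", host, "-a", access_key,
           "-t", test, "-n", str(requests), "-d", str(payload),
           "-P", str(pipeline), "-p", str(port)]
    if tls:
        cmd.append("--tls")  # needed when the cache accepts only TLS/SSL
    return cmd

# Mirrors the pipelined SET example, but against the TLS port 6380.
cmd = build_benchmark_cmd("<yourcache>.redis.cache.windows.net", "<yourAccesskey>")
```

For the Enterprise tier, passing `port=10000` produces the equivalent `-p 10000` command.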
   - question: |
       How can I enable server GC to get more throughput on the client when using StackExchange.Redis?
     answer: |
       Enabling server garbage collection (GC) can optimize the client and provide better performance and throughput when you use StackExchange.Redis. For more information on server GC and how to enable it, see the following articles:
-      - [Enable server GC](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
-      - [Fundamentals of Garbage Collection](/dotnet/standard/garbage-collection/fundamentals)
-      - [Garbage Collection and Performance](/dotnet/standard/garbage-collection/performance)
+      - [The \<gcServer> element](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element)
+      - [Fundamentals of garbage collection](/dotnet/standard/garbage-collection/fundamentals)
+      - [Garbage collection and performance](/dotnet/standard/garbage-collection/performance)
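For reference, the `<gcServer>` element linked above is a standard .NET Framework runtime setting in the application's config file; a minimal sketch of an app.config that turns it on:

```xml
<configuration>
  <runtime>
    <!-- Enables server garbage collection for this application -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```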

   - question: |
       Should I enable the non-TLS/SSL port for connecting to Redis?
@@ -67,7 +63,7 @@ sections:
   - question: |
       What are some considerations for using common Redis commands?
     answer: |
-      - Avoid using certain Redis commands that take a long time to complete, unless you fully understand the result of these commands. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production. Depending on the number of keys, it could take a long time to return. Redis is a single-threaded server that processes commands one at a time. Redis doesn't process commands issued after `KEYS` until it finishes processing the `KEYS` command.
+      - Avoid using certain Redis commands that take a long time to complete, unless you fully understand the result of these commands. For example, don't run the [KEYS](https://redis.io/commands/keys) command in production. Depending on the number of keys, it could take a long time to return. Redis is a single-threaded server that processes commands one at a time. If you issue the `KEYS` command, Redis doesn't process subsequent commands until it finishes processing the `KEYS` command.

       The [redis.io](https://redis.io/commands/) site has time complexity details for each operation it supports. Select each command to see the complexity for each operation.
       - What size keys to use depends on the scenario. If your scenario requires larger keys, you can adjust the `ConnectionTimeout`, then retry values and adjust your retry logic. From a Redis server perspective, smaller key values give better performance.
@@ -104,14 +100,14 @@ sections:
     answer: |
       The Common Language Runtime (CLR) ThreadPool has two types of threads, Worker and I/O Completion Port (IOCP).
-      - Worker threads are used for things like processing the `Task.Run(…)` or `ThreadPool.QueueUserWorkItem(…)` methods. Various components in the CLR also use these threads when work needs to happen on a background thread.
-      - IOCP threads are used for asynchronous I/O, such as when reading from the network.
+      - `WORKER` threads are used for things like processing the `Task.Run(…)` or `ThreadPool.QueueUserWorkItem(…)` methods. Various components in the CLR also use these threads when work needs to happen on a background thread.
+      - `IOCP` threads are used for asynchronous I/O, such as when reading from the network.

       The thread pool provides new worker threads or I/O completion threads on demand without any throttling until it reaches the `minimum` setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.

       Once the number of existing busy threads hits the `minimum` threads number, the ThreadPool throttles the rate at which it injects new threads to one thread per 500 milliseconds.

-      Typically, if your system gets a burst of work that needs an IOCP thread, it processes that work quickly. However, if the burst is more than the configured `minimum` setting, there's some delay in processing some of the work as the ThreadPool waits for one of two possibilities:
+      Typically, if your system gets a burst of work that needs an `IOCP` thread, it processes that work quickly. However, if the burst is more than the configured `minimum` setting, there's some delay in processing some of the work as the ThreadPool waits for one of two possibilities:

       - An existing thread becomes free to process the work.
       - No existing thread becomes free for 500 ms, so a new thread is created.
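The 500 ms injection throttle makes the worst-case delay easy to estimate: each busy thread beyond the configured minimum costs roughly one 500 ms injection interval. A minimal sketch of that arithmetic; the function name is hypothetical:

```python
# Sketch: rough worst-case thread-injection delay once a burst exceeds the
# ThreadPool minimum. New threads are injected about one per 500 ms.
def estimated_injection_delay_ms(busy_threads, min_threads, interval_ms=500):
    extra = max(0, busy_threads - min_threads)
    return extra * interval_ms

# Matches the FAQ's timeout example: Busy=6, Min=4, so two 500-ms delays.
delay = estimated_injection_delay_ms(busy_threads=6, min_threads=4)  # → 1000
```

This is only a back-of-the-envelope model; actual delays also depend on when existing threads free up.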
@@ -122,12 +118,12 @@ sections:

       ```output
       System.TimeoutException: Timeout performing GET MyKey, inst: 2, mgr: Inactive,
-      `queue: 6, qu: 0, qs: 6, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
-      `IOCP: (Busy=6,Free=994,Min=4,Max=1000),
-      `WORKER: (Busy=3,Free=997,Min=4,Max=1000)
+      queue: 6, qu: 0, qs: 6, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
+      IOCP: (Busy=6,Free=994,Min=4,Max=1000),
+      WORKER: (Busy=3,Free=997,Min=4,Max=1000)
       ```

-      The example shows that for the IOCP thread, there are six busy threads and the system is configured to allow four minimum threads. In this case, the client is likely to see two 500-ms delays, because 6 > 4.
+      The example shows that for the `IOCP` thread, there are six busy threads and the system is configured to allow four minimum threads. In this case, the client is likely to see two 500-ms delays, because 6 > 4.

       > [!NOTE]
       > StackExchange.Redis can hit timeouts if growth of either `IOCP` or `WORKER` threads is throttled.
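When triaging such timeouts, the values to compare are `Busy` and `Min` in each pool's stats. A minimal sketch (a hypothetical helper, not part of StackExchange.Redis) that pulls those fields out of the error text and flags likely throttling:

```python
import re

# Sketch: extract the IOCP/WORKER stats embedded in a StackExchange.Redis
# timeout message and flag pools where Busy exceeds Min (likely throttled).
def thread_stats(message):
    stats = {}
    pattern = r"(IOCP|WORKER): \(Busy=(\d+),Free=(\d+),Min=(\d+),Max=(\d+)\)"
    for pool, busy, free, mn, mx in re.findall(pattern, message):
        stats[pool] = {"busy": int(busy), "free": int(free),
                       "min": int(mn), "max": int(mx),
                       "throttled": int(busy) > int(mn)}
    return stats

msg = ("System.TimeoutException: Timeout performing GET MyKey, "
       "IOCP: (Busy=6,Free=994,Min=4,Max=1000), "
       "WORKER: (Busy=3,Free=997,Min=4,Max=1000)")
stats = thread_stats(msg)  # IOCP is flagged here, because 6 > 4
```

The exact message format can vary between StackExchange.Redis versions, so treat the regex as illustrative rather than a stable contract.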
