author: joncole
ms.service: cache
ms.topic: conceptual
ms.date: 01/06/2020
ms.author: joncole
---

By following these best practices, you can help maximize the performance and cost-effective use of your cache.

## Configuration and concepts
* **Use Standard or Premium tier for production systems.** The Basic tier is a single-node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are meant for simple dev/test scenarios since they have a shared CPU core, little memory, and are prone to "noisy neighbor" issues.
* **Remember that Redis is an in-memory data store.** [This article](cache-troubleshoot-data-loss.md) outlines some scenarios where data loss can occur.
* **Develop your system such that it can handle connection blips** [because of patching and failover](cache-failover.md).
* **Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness** under memory pressure conditions. A sufficient reservation setting is especially important for write-heavy workloads or if you're storing larger values (100 KB or more) in Redis. You should start with 10% of the size of your cache and increase this percentage if you have write-heavy loads.
* **Redis works best with smaller values**, so consider chopping up bigger data into multiple keys. In [this Redis discussion](https://stackoverflow.com/questions/55517224/what-is-the-ideal-value-size-range-for-redis-is-100kb-too-large/), some considerations are listed that you should weigh carefully. Read [this article](cache-troubleshoot-client.md#large-request-or-response-size) for an example problem that can be caused by large values.
* **Locate your cache instance and your application in the same region.** Connecting to a cache in a different region can significantly increase latency and reduce reliability. While you can connect from outside of Azure, it isn't recommended, *especially when using Redis as a cache*. If you're using Redis as just a key/value store, latency may not be the primary concern.
* **Reuse connections.** Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).
* **Configure your client library to use a *connect timeout* of at least 15 seconds**, giving the system time to connect even under higher CPU conditions. A small connection timeout value doesn't guarantee that the connection is established in that time frame. If something goes wrong (high client CPU, high server CPU, and so on), then a short connection timeout value will cause the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop. We generally recommend that you leave your connection timeout at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially. A minimal connection sketch illustrating these settings follows the note below.
> [!NOTE]
> This guidance is specific to the *connection attempt* and not related to the time you're willing to wait for an *operation* like GET or SET to complete.
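
The connection guidance above can be illustrated with a brief sketch. The example below uses the redis-py client for Python purely as an illustration; the host name, access key, and retry policy are placeholder assumptions, and the same ideas (a single long-lived client, a 15-second connect timeout, and tolerance for connection blips) apply to whichever client library you use.

```python
import time

import redis

# One shared client per process: redis-py keeps an internal connection pool,
# so reusing this object avoids repeated TCP/TLS handshakes on every call.
# The host name and access key below are placeholders.
cache = redis.Redis(
    host="yourcache.redis.cache.windows.net",
    port=6380,                      # SSL port
    password="yourAccesskey",
    ssl=True,
    socket_connect_timeout=15,      # connect timeout of at least 15 seconds
)

def get_with_retry(key, attempts=3):
    """Tolerate brief connection blips (for example, during patching or failover)."""
    for attempt in range(attempts):
        try:
            return cache.get(key)
        except redis.exceptions.ConnectionError:
            if attempt == attempts - 1:
                raise
            # Back off briefly and reuse the same client; it reconnects on demand.
            time.sleep(1 + attempt)
```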

## Memory management

There are several things related to memory usage within your Redis server instance:

* **Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application.** The default policy for Azure Redis is *volatile-lru*, which means that only keys that have a TTL value set will be eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted when under memory pressure, then you may want to consider the *allkeys-lru* policy.
* **Set an expiration value on your keys.** An expiration will remove keys proactively instead of waiting until there's memory pressure. When eviction does kick in because of memory pressure, it can cause additional load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands, and the short example after this list.
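
As a sketch of the expiration guidance, the following redis-py snippet sets a TTL when a key is written and adds one to a key that already exists. The client choice, host name, and key names are illustrative assumptions rather than anything this article prescribes.

```python
import redis

cache = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
                    password="yourAccesskey", ssl=True)

# Set the value and its expiration in one call (SET with the EX option).
cache.set("session:42", "cached-payload", ex=3600)   # expires in one hour

# Add an expiration to a key that was written earlier (EXPIRE).
cache.expire("report:daily", 24 * 60 * 60)
```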

Unfortunately, there's no easy answer to whether a failed operation should be retried. Each application needs to decide which operations can be retried and which can't. Consider the following:

* You can get client-side errors even though Redis successfully ran the command you asked it to run. For example:
    - Timeouts are a client-side concept. If the operation reached the server, the server will run the command even if the client gives up waiting.
    - When an error occurs on the socket connection, it's not possible to know if the operation actually ran on the server. For example, the connection error can happen after the server processed the request but before the client receives the response.
* How does my application react if I accidentally run the same operation twice? For instance, what if I increment an integer twice instead of once? Is my application writing to the same key from multiple places? What if my retry logic overwrites a value set by some other part of my app? One way to guard against double execution is sketched after this list.
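
The double-execution question can be made concrete with a small sketch. The example below, again using redis-py, records a unique request ID with SET NX before applying an increment, so retrying the same logical request doesn't increment the counter twice. The request-ID scheme, key names, and client are illustrative assumptions, not something this article prescribes.

```python
import redis

cache = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
                    password="yourAccesskey", ssl=True)

def increment_once(counter_key, request_id):
    """Apply an increment at most once per logical request."""
    # SET NX succeeds only for the first caller to claim this request ID, so a
    # client-side retry of the same request skips the second INCR. The two
    # commands aren't atomic; a Lua script or MULTI/EXEC transaction would
    # close that gap in a production implementation.
    claimed = cache.set(f"processed:{request_id}", 1, nx=True, ex=24 * 60 * 60)
    if claimed:
        return cache.incr(counter_key)   # returns the new counter value
    return None                          # duplicate request; nothing applied
```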

If you would like to test how your code works under error conditions, consider using the [Reboot feature](cache-administration.md#reboot). Rebooting allows you to see how connection blips affect your application.

## Performance testing
* **Start by using `redis-benchmark.exe`** to get a feel for possible throughput and latency before writing your own perf tests. Redis-benchmark documentation can be [found here](https://redis.io/topics/benchmarks). Note that redis-benchmark doesn't support SSL, so you'll have to [enable the Non-SSL port through the Portal](cache-configure.md#access-ports) before you run the test. [A Windows-compatible version of redis-benchmark.exe can be found here](https://github.com/MSOpenTech/redis/releases).
### Redis-Benchmark examples

**Pre-test setup**:
Prepare the cache instance with data required for the latency and throughput testing commands listed below.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t SET -n 10 -d 1024
**To test latency**:
Test GET requests using a 1k payload.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -d 1024 -P 50 -c 4
**To test throughput:**
Test pipelined GET requests with a 1k payload.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50