
Commit ec7d024

Merge pull request #100192 from asasine/user/asasine/redis/update-gist-links
Updated links from Jon's gists to doc articles
2 parents 7bf3388 + a8f6d55 commit ec7d024

File tree

1 file changed: +13 -13 lines changed


articles/azure-cache-for-redis/cache-best-practices.md

Lines changed: 13 additions & 13 deletions
@@ -5,7 +5,7 @@ author: joncole
ms.service: cache
ms.topic: conceptual
- ms.date: 06/21/2019
+ ms.date: 01/06/2020
ms.author: joncole
---

@@ -15,19 +15,19 @@ By following these best practices, you can help maximize the performance and cos
## Configuration and concepts
* **Use Standard or Premium tier for production systems.** The Basic tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are meant for simple dev/test scenarios since they have a shared CPU core, little memory, and are prone to "noisy neighbor" issues.

- * **Remember that Redis is an in-memory data store.** [This article](https://gist.github.com/JonCole/b6354d92a2d51c141490f10142884ea4#file-whathappenedtomydatainredis-md) outlines some scenarios where data loss can occur.
+ * **Remember that Redis is an in-memory data store.** [This article](cache-troubleshoot-data-loss.md) outlines some scenarios where data loss can occur.

- * **Develop your system such that it can handle connection blips** [because of patching and failover](https://gist.github.com/JonCole/317fe03805d5802e31cfa37e646e419d#file-azureredis-patchingexplained-md).
+ * **Develop your system such that it can handle connection blips** [because of patching and failover](cache-failover.md).

- * **Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness** under memory pressure conditions. This setting is especially important for write-heavy workloads or if you're storing larger values (100 KB or more) in Redis. It's recommended that you start with 10% of the size of your cache and then increase the percentage if you have write-heavy loads.
+ * **Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness** under memory pressure conditions. A sufficient reservation setting is especially important for write-heavy workloads or if you're storing larger values (100 KB or more) in Redis. You should start with 10% of the size of your cache and increase this percentage if you have write-heavy loads.

* **Redis works best with smaller values**, so consider chopping up bigger data into multiple keys. In [this Redis discussion](https://stackoverflow.com/questions/55517224/what-is-the-ideal-value-size-range-for-redis-is-100kb-too-large/), some considerations are listed that are worth weighing carefully. Read [this article](cache-troubleshoot-client.md#large-request-or-response-size) for an example problem that can be caused by large values.

* **Locate your cache instance and your application in the same region.** Connecting to a cache in a different region can significantly increase latency and reduce reliability. While you can connect from outside of Azure, it's not recommended *especially when using Redis as a cache*. If you're using Redis as just a key/value store, latency may not be the primary concern.

- * **Reuse connections** - Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).
+ * **Reuse connections.** Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).

- * **Configure your client library to use a *connect timeout* of at least 15 seconds**, giving the system time to connect even under higher CPU conditions. A small connection timeout value doesn't guarantee that the connection is established in that time frame. If something goes wrong (high client CPU, high server CPU, and so on), then a short connection timeout value will cause the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop. We generally recommend that you leave your connection Timeout at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than it is to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially.
+ * **Configure your client library to use a *connect timeout* of at least 15 seconds**, giving the system time to connect even under higher CPU conditions. A small connection timeout value doesn't guarantee that the connection is established in that time frame. If something goes wrong (high client CPU, high server CPU, and so on), then a short connection timeout value will cause the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop. We generally recommend that you leave your connection timeout at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially.
> [!NOTE]
> This guidance is specific to the *connection attempt* and not related to the time you're willing to wait for an *operation* like GET or SET to complete.
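The connection-reuse and connect-timeout guidance in the hunk above lends itself to a short illustration. The article doesn't show client code at this point, so the following is a minimal sketch assuming the redis-py client (not a library the article prescribes); the host name and access key are the same placeholders the article's redis-benchmark examples use.

```python
# Minimal sketch, assuming the redis-py package; host and key are placeholders
# borrowed from the article's redis-benchmark examples.
import redis

# Create ONE client for the lifetime of the process and reuse it everywhere.
# redis-py keeps an internal connection pool, so repeated calls reuse sockets
# instead of paying the TCP/TLS handshake cost on every operation.
cache = redis.Redis(
    host="yourcache.redis.cache.windows.net",
    port=6380,                  # SSL port for Azure Cache for Redis
    password="yourAccesskey",
    ssl=True,
    socket_connect_timeout=15,  # give the connection attempt at least 15 seconds
)

def get_value(key: str):
    # Reuses the shared client rather than opening a new connection per call.
    return cache.get(key)
```

The same idea applies to StackExchange.Redis and the other clients linked later in the article: hold on to one shared client or multiplexer and give it a connect timeout of 15 seconds or more.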
@@ -40,7 +40,7 @@ There are several things related to memory usage within your Redis server instan

* **Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application.** The default policy for Azure Redis is *volatile-lru*, which means that only keys that have a TTL value set will be eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted when under memory pressure, then you may want to consider the *allkeys-lru* policy.

- * **Set an expiration value on your keys.** This will remove keys proactively instead of waiting until there is memory pressure. When eviction does kick in because of memory pressure, it can cause additional load on your server. For more information, see the documentation for the [Expire](https://redis.io/commands/expire) and [ExpireAt](https://redis.io/commands/expireat) commands.
+ * **Set an expiration value on your keys.** An expiration will remove keys proactively instead of waiting until there's memory pressure. When eviction does kick in because of memory pressure, it can cause additional load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.

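To make the expiration guidance above concrete, here is a small sketch, again assuming redis-py (the key names and TTL values are arbitrary examples, not from the article), showing the EXPIRE and EXPIREAT semantics the bullet links to.

```python
# Illustrative sketch, assuming redis-py; connection details, key names, and TTLs
# are placeholders.
import datetime
import redis

cache = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
                    password="yourAccesskey", ssl=True)

# Set a TTL at write time so the key is removed proactively instead of waiting
# for memory-pressure eviction.
cache.set("session:1234", "some-value", ex=3600)   # expires in one hour

# Add or shorten a TTL on an existing key (EXPIRE).
cache.expire("session:1234", 1800)                 # now expires in 30 minutes

# Expire at an absolute point in time (EXPIREAT).
cache.expireat("session:1234", datetime.datetime(2020, 1, 31, 0, 0, 0))
```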
## Client library specific guidance
* [StackExchange.Redis (.NET)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-stackexchange-redis-md)
@@ -57,10 +57,10 @@ Unfortunately, there's no easy answer. Each application needs to decide what op

* You can get client-side errors even though Redis successfully ran the command you asked it to run. For example:
- Timeouts are a client-side concept. If the operation reached the server, the server will run the command even if the client gives up waiting.
- - When an error occurs on the socket connection, it's not possible to know if the operation actually ran on the server. For example, the connection error can happen after the request is processed by the server but before the response is received by the client.
- * How does my application react if I accidentally run the same operation twice? For instance, what if I increment an integer twice instead of just once? Is my application writing to the same key from multiple places? What if my retry logic overwrites a value set by some other part of my app?
+ - When an error occurs on the socket connection, it's not possible to know if the operation actually ran on the server. For example, the connection error can happen after the server processed the request but before the client receives the response.
+ * How does my application react if I accidentally run the same operation twice? For instance, what if I increment an integer twice instead of once? Is my application writing to the same key from multiple places? What if my retry logic overwrites a value set by some other part of my app?

- If you would like to test how your code works under error conditions, consider using the [Reboot Feature](cache-administration.md#reboot). This allows you to see how connection blips affect your application.
+ If you would like to test how your code works under error conditions, consider using the [Reboot feature](cache-administration.md#reboot). Rebooting allows you to see how connection blips affect your application.

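The retry and idempotency questions above can be made concrete with a short sketch. This is not from the article; it assumes redis-py and simply contrasts an operation that is safe to retry blindly (SET) with one that isn't (INCR), since a timeout never tells the client whether the server already ran the command.

```python
# Hedged sketch, assuming redis-py; names and timeouts are placeholders.
import redis

cache = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
                    password="yourAccesskey", ssl=True,
                    socket_timeout=5, socket_connect_timeout=15)

def set_with_retry(key, value, attempts=3):
    # SET is idempotent: running it twice leaves the same final state,
    # so retrying after a timeout or connection error is safe.
    for attempt in range(attempts):
        try:
            return cache.set(key, value)
        except (redis.ConnectionError, redis.TimeoutError):
            if attempt == attempts - 1:
                raise

def increment_counter(key):
    # INCR is NOT idempotent: if the first attempt timed out after the server
    # applied it, a blind retry would double-count. Decide deliberately whether
    # a duplicate (or missed) increment is acceptable before adding retries here.
    return cache.incr(key)
```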
## Performance testing
* **Start by using `redis-benchmark.exe`** to get a feel for possible throughput/latency before writing your own perf tests. Redis-benchmark documentation can be [found here](https://redis.io/topics/benchmarks). Note that redis-benchmark doesn't support SSL, so you'll have to [enable the Non-SSL port through the Portal](cache-configure.md#access-ports) before you run the test. [A Windows-compatible version of redis-benchmark.exe can be found here](https://github.com/MSOpenTech/redis/releases).
@@ -77,13 +77,13 @@ If you would like to test how your code works under error conditions, consider u
### Redis-Benchmark examples
**Pre-test setup**:
- This will prepare the cache instance with data required for the latency and throughput testing commands listed below.
+ Prepare the cache instance with data required for the latency and throughput testing commands listed below.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t SET -n 10 -d 1024
**To test latency**:
- This will test GET requests using a 1k payload.
+ Test GET requests using a 1k payload.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -d 1024 -P 50 -c 4
**To test throughput:**
- This uses Pipelined GET requests with 1k payload.
+ Pipelined GET requests with 1k payload.
> redis-benchmark.exe -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
