articles/azure-cache-for-redis/cache-best-practices-memory-management.md
description: Learn how to manage your Azure Cache for Redis memory effectively.
ms.topic: conceptual
ms.custom:
  - ignite-2024
ms.date: 04/14/2025
appliesto:
  - ✅ Azure Cache for Redis
---
# Memory management
This article describes best practices for memory management in Azure Cache for Redis.
## Choose the right eviction policy
Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Cache for Redis is `volatile-lru`, which means that only keys that have a time to live (TTL) value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, the system doesn't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, consider the `allkeys-lru` policy.
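
For example, the following redis-py sketch (the host name and access key are placeholders) stores one key with a TTL and one without. Under the default `volatile-lru` policy, only the first key is a candidate for eviction; under `allkeys-lru`, both keys are.

```python
import redis

# Placeholder connection details for an Azure Cache for Redis instance.
r = redis.StrictRedis(
    host="contoso.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True)

# This key has a TTL, so volatile-lru can evict it under memory pressure.
r.set("session:1001", "cached-session-data", ex=3600)

# This key has no TTL. volatile-lru never evicts it; only an allkeys-* policy
# such as allkeys-lru considers it for eviction.
r.set("catalog:all", "large-reference-data")

print(r.ttl("session:1001"))  # ~3600 seconds remaining
print(r.ttl("catalog:all"))   # -1, meaning no expiration is set
```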
## Set a key expiration
Eviction due to memory pressure can cause more load on your server. Set an expiration value on your keys to remove keys proactively instead of waiting until there's memory pressure. For more information, see the documentation for the Redis [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.
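
For example, the following redis-py calls (connection details are placeholders) set expirations proactively rather than relying on eviction:

```python
import datetime
import redis

r = redis.StrictRedis(
    host="contoso.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True)

# Set a value and its TTL in one call (SET with the EX option).
r.set("token:42", "abc123", ex=1800)        # expires in 30 minutes

# Add a TTL to an existing key (EXPIRE).
r.set("report:latest", "serialized-report")
r.expire("report:latest", 86400)            # expires in 24 hours

# Expire at an absolute time (EXPIREAT), for example midnight UTC tomorrow.
r.incr("daily:counter")
midnight_utc = datetime.datetime.combine(
    datetime.date.today() + datetime.timedelta(days=1),
    datetime.time.min, tzinfo=datetime.timezone.utc)
r.expireat("daily:counter", int(midnight_utc.timestamp()))
```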
## Minimize memory fragmentation
Large key values can leave memory fragmented on eviction and might lead to high memory usage and server load.
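
One way to limit the effect of large values, shown here as a minimal sketch (the chunk size and key names are illustrative), is to split them into smaller pieces so that individual allocations and evictions stay small:

```python
import redis

r = redis.StrictRedis(
    host="contoso.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True)

CHUNK_SIZE = 16 * 1024  # 16 KB per chunk; tune for your workload

def set_chunked(key: str, payload: bytes, ttl_seconds: int = 3600) -> None:
    """Store a large payload as a hash of fixed-size chunks instead of one big string."""
    chunks = {f"chunk:{offset}": payload[offset:offset + CHUNK_SIZE]
              for offset in range(0, len(payload), CHUNK_SIZE)}
    pipe = r.pipeline()
    pipe.delete(key)
    pipe.hset(key, mapping=chunks)
    pipe.expire(key, ttl_seconds)
    pipe.execute()

def get_chunked(key: str) -> bytes:
    """Reassemble the payload by ordering chunks by their byte offset."""
    fields = r.hgetall(key)
    ordered = sorted(fields.items(), key=lambda kv: int(kv[0].split(b":")[1]))
    return b"".join(value for _, value in ordered)

set_chunked("report:2025-04", b"x" * 500_000)            # ~500 KB in ~31 chunks
assert len(get_chunked("report:2025-04")) == 500_000
```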
## Monitor memory usage
[Monitor memory usage](/azure/redis/monitor-cache#view-cache-metrics) to ensure that you don't run out of memory. [Create alerts](/azure/redis/monitor-cache#create-alerts) to give you a chance to scale your cache before issues occur.
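
Azure Monitor metrics and alerts are the primary tools, but you can also spot-check the same counters from a client. The following redis-py sketch (connection details are placeholders) reads the `used_memory` and `used_memory_rss` values discussed later in this article:

```python
import redis

r = redis.StrictRedis(
    host="contoso.redis.cache.windows.net", port=6380,
    password="<access-key>", ssl=True)

info = r.info("memory")  # runs INFO MEMORY on the server

used = info["used_memory"]      # bytes tracked by the Redis allocator
rss = info["used_memory_rss"]   # physical memory reported by the operating system
print(f"used_memory:     {used / 2**20:.1f} MB")
print(f"used_memory_rss: {rss / 2**20:.1f} MB")
print(f"fragmentation:   {rss / used:.2f}")  # roughly the mem_fragmentation_ratio
```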
## Configure your maxmemory-reserved setting
Configure your [maxmemory-reserved settings](cache-configure.md#memory-policies) to maximize system responsiveness. Sufficient reservation settings are especially important for write-heavy workloads, or if you're storing values of 100 KB or more in your cache.
- The `maxmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, reserved for noncache operations such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies.
- The `maxfragmentationmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, reserved to accommodate memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high.
When memory is reserved for these operations, it's unavailable for storing cached data. By default when you create a cache, approximately 10% of the available memory is reserved for `maxmemory-reserved`, and another 10% is reserved for `maxfragmentationmemory-reserved`. You can increase the amounts reserved if you have write-heavy loads.
The allowed ranges for `maxmemory-reserved` and for `maxfragmentationmemory-reserved` are 10%-60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they're reevaluated and set to the 10% minimum and 60% maximum.
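
As an illustration only (the service applies this rule for you, and its exact rounding might differ), the clamping behaves like the following helper:

```python
def clamp_reserved(requested_mb: int, maxmemory_mb: int) -> int:
    """Clamp a reservation value to 10%-60% of maxmemory, mirroring the rule above."""
    minimum = int(maxmemory_mb * 0.10)
    maximum = int(maxmemory_mb * 0.60)
    return min(max(requested_mb, minimum), maximum)

# Example for a 6-GB (6,144-MB) cache: out-of-range values snap to the boundaries.
print(clamp_reserved(100, 6144))    # 614  (below 10%, raised to the minimum)
print(clamp_reserved(1024, 6144))   # 1024 (within range, kept as requested)
print(clamp_reserved(5000, 6144))   # 3686 (above 60%, lowered to the maximum)
```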
When you scale a cache up or down, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to a 12-GB cache, the setting automatically updates to 6 GB during scaling. If you scale down, the reverse happens.
Consider how changing a `maxmemory-reserved` or `maxfragmentationmemory-reserved` memory reservation value might affect a running cache that already holds a large amount of data. For instance, if you have a 53-GB cache with the reserved values set to the 10% minimums, the maximum available memory for the system is approximately 42 GB. If either your current `used_memory` or `used_memory_rss` values are higher than 42 GB, the system must evict data until both `used_memory` and `used_memory_rss` are below 42 GB.
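
Expressed as a quick calculation (the figures and rounding are illustrative):

```python
cache_size_gb = 53
maxmemory_reserved_gb = 0.10 * cache_size_gb               # ~5.3 GB at the 10% minimum
maxfragmentationmemory_reserved_gb = 0.10 * cache_size_gb  # ~5.3 GB at the 10% minimum

available_gb = (cache_size_gb
                - maxmemory_reserved_gb
                - maxfragmentationmemory_reserved_gb)
print(f"Memory available for cached data: {available_gb:.1f} GB")  # ~42.4 GB

# If used_memory or used_memory_rss rises above this limit, the system must
# evict data until both values drop back below it.
```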
Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Create your own metrics](/azure/redis/monitor-cache#create-your-own-metrics).
> [!NOTE]
> When you scale a cache up or down programmatically by using Azure PowerShell, Azure CLI, or REST API, any included `maxmemory-reserved` or `maxfragmentationmemory-reserved` settings are ignored as part of the update request. Only your scaling change is honored. You can update the memory settings after the scaling operation completes.