content/develop/reference/eviction/index.md
title: Key eviction
weight: 6
---
Redis is commonly used as a cache to speed up read accesses to a slower server
or database. Since cache entries are copies of persistently-stored data, it
is usually safe to evict them when the cache runs out of RAM (they can be
cached again in the future if necessary).

Redis lets you specify an eviction policy to evict keys automatically
when the size of the cache exceeds a set memory limit. The sections below
explain how to [configure this limit](#maxmem) and also describe the available
[eviction policies](#eviction-policies) and when to use them.

## `Maxmemory` configuration directive {#maxmem}

The `maxmemory` configuration directive configures Redis
to use a specified amount of memory for the data set. You can
set `maxmemory` with the `redis.conf` file at startup time, or
with the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command at runtime.

For example, to configure a memory limit of 100 megabytes, you can use the
following directive inside the `redis.conf` file:

```
maxmemory 100mb
```
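
The same limit can also be applied at runtime with `CONFIG SET`. A sketch using
`redis-cli`, assuming a server is running locally on the default port:

```shell
# Set a 100 MB limit on a running server (no restart needed)
redis-cli CONFIG SET maxmemory 100mb

# Read the limit back; Redis reports the value in bytes
redis-cli CONFIG GET maxmemory
```

Note that a limit set this way is not persisted across restarts unless you also
update `redis.conf` (or use `CONFIG REWRITE`).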

Set `maxmemory` to zero to specify that you don't want to limit the memory
for the dataset. This is the default behavior for 64-bit systems, while 32-bit
systems use an implicit memory limit of 3GB.

When the size of your cache reaches the limit set by `maxmemory`, Redis will
enforce your chosen [eviction policy](#eviction-policies) to prevent any
further growth of the cache.

## Eviction policies

Use the `maxmemory-policy` configuration directive to choose the eviction
policy to use when the limit set by `maxmemory` is reached.

The following policies are available:

- `noeviction`: Keys are not evicted but the server won't execute any commands
  that add new data to the cache. If your database uses replication then this
  condition only applies to the primary database.
- `allkeys-lru`: Evict the least recently used (LRU) keys.
- `allkeys-lfu`: Evict the least frequently used (LFU) keys.
- `allkeys-random`: Evict keys at random.
- `volatile-lru`: Evict the least recently used keys that have the `expire` field
  set to `true`.
- `volatile-lfu`: Evict the least frequently used keys that have the `expire` field
  set to `true`.
- `volatile-random`: Evict keys at random only if they have the `expire` field set
  to `true`.
- `volatile-ttl`: Evict keys with the `expire` field set to `true` that have the
  shortest remaining time-to-live (TTL) value.

The `volatile-xxx` policies behave like `noeviction` if no keys have the `expire`
field set to `true`, or if no keys have a time-to-live value set in the case of
`volatile-ttl`.

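For example, you can combine the two directives in `redis.conf` (with
`allkeys-lru` shown here as one possible policy choice):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```
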
You should choose an eviction policy that fits the way your app accesses keys.
You may be able to predict the access pattern in advance, but you can also
reconfigure the policy at runtime while the application is running, using the
Redis [`INFO`]({{< relref "/commands/info" >}}) output to tune your setup.

In general as a rule of thumb:

- Use `allkeys-lru` when you expect that a subset of elements will be accessed far
  more often than the rest. This is a very common case according to the
  [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), so
  `allkeys-lru` is a good default option if you have no reason to prefer any others.
- Use `allkeys-random` when you expect all keys to be accessed with roughly equal
  frequency. An example of this is when your app reads data items in a repeating cycle.
- Use `volatile-ttl` if you want to give Redis hints about good candidates
  for expiration by using different TTL values when you create your cache objects.

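To illustrate the idea behind `allkeys-lru`, here is a minimal, hypothetical
cache in Python that evicts the least recently used entry when full. This is
only a sketch of the concept: Redis does not track an exact access order like
this, but instead uses an approximation of LRU based on sampling keys.

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache that evicts the least recently used key when full.

    Illustrative only: Redis approximates LRU by sampling a few keys
    rather than maintaining an exact access order like this class does.
    """

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()  # least recently used first, most recent last

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used key

cache = LRUCache(maxsize=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" becomes the most recently used key
cache.set("c", 3)  # cache is full, so "b" (least recently used) is evicted
```

After the last line, `cache.get("b")` returns `None` because `"b"` was the
least recently used key when `"c"` was inserted.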
The `volatile-lru` and `volatile-random` policies are mainly useful when you want to use a single instance for both caching and for a set of persistent keys. However, it is usually a better idea to run two separate Redis instances to solve such a problem.