Commit 58a675a

DOC-4380 further rewriting

1 parent 6a81c3e commit 58a675a

1 file changed: content/develop/reference/eviction/index.md (+64 −42 lines)
@@ -21,14 +21,23 @@

is usually safe to evict them when the cache runs out of RAM (they can be
cached again in the future if necessary).

Redis lets you specify an eviction policy to evict keys automatically
when the size of the cache exceeds a set memory limit. Whenever a client
runs a new command that adds more data to the cache, Redis checks the memory usage.
If it is greater than the limit, Redis evicts keys according to the chosen
eviction policy until the used memory is back below the limit.

Note that when a command adds a lot of data to the cache (for example, a big set
intersection stored into a new key), this might temporarily exceed the limit by
a long way.

The sections below explain how to [configure the memory limit](#maxmem) for the cache
and also describe the available [eviction policies](#eviction-policies) and when to
use them.

## `Maxmemory` configuration directive {#maxmem}

The `maxmemory` configuration directive specifies
the maximum amount of memory to use for the cache data. You can
set `maxmemory` with the `redis.conf` file at startup time, or
with the [`CONFIG SET`]({{< relref "/commands/config-set" >}}) command at runtime.
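
For example, a minimal `redis.conf` fragment might cap the cache at 100 MB (the value is purely illustrative):

```
maxmemory 100mb
```

You can apply the same limit to a running instance with `CONFIG SET maxmemory 100mb`.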

@@ -43,22 +52,42 @@

Set `maxmemory` to zero to specify that you don't want to limit the memory
for the dataset. This is the default behavior for 64-bit systems, while 32-bit
systems use an implicit memory limit of 3GB.

When the size of your cache exceeds the limit set by `maxmemory`, Redis will
enforce your chosen [eviction policy](#eviction-policies) to prevent any
further growth of the cache.

### Setting `maxmemory` for a replicated instance

If you are using replication for an instance, Redis will use some
RAM as a buffer to store the set of updates that must be written to the replicas.
The memory used by this buffer is not included in the used memory total that
is compared to `maxmemory` to see if eviction is required.

This is because the key evictions themselves generate updates that must be added
to the buffer to send to the replicas. If the updates were counted among the used
memory then, in some circumstances, the memory saved by
evicting keys would be immediately used up by the new data added to the buffer.
This, in turn, would trigger even more evictions and the resulting feedback loop
could evict many items from the cache unnecessarily.

If you are using replication, we recommend that you set `maxmemory` to leave a
little RAM free to store the replication buffers, unless you are also using the
`noeviction` policy (see [the section below](#eviction-policies) for more
information about eviction policies).
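
As a purely illustrative example (the numbers are hypothetical, not a recommendation), a replicated instance on a machine with 4 GB of RAM might be capped below the total so the replication buffers have headroom:

```
# Leave RAM free for the replica output buffers rather than
# letting the cache data use the whole machine.
maxmemory 3gb
```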

## Eviction policies

Use the `maxmemory-policy` configuration directive to select the eviction
policy you want to use when the limit set by `maxmemory` is reached.

The following policies are available:

- `noeviction`: Keys are not evicted but the server won't execute any commands
  that add new data to the cache. If your database uses replication then this
  condition only applies to the primary database. Note that commands that only
  read data still work as normal.
- `allkeys-lru`: Evict the [least recently used](#apx-lru) (LRU) keys.
- `allkeys-lfu`: Evict the [least frequently used](#lfu-eviction) (LFU) keys.
- `allkeys-random`: Evict keys at random.
- `volatile-lru`: Evict the least recently used keys that have the `expire` field
  set to `true`.

@@ -82,50 +111,43 @@

of your application; however, you can reconfigure the policy at runtime while
the application is running, and monitor the number of cache misses and hits
using the Redis [`INFO`]({{< relref "/commands/info" >}}) output to tune your setup.

As a rule of thumb:

- Use `allkeys-lru` when you expect that a subset of elements will be accessed far
  more often than the rest. This is a very common case according to the
  [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), so
  `allkeys-lru` is a good default option if you have no reason to prefer any others.
- Use `allkeys-random` when you expect all keys to be accessed with roughly equal
  frequency. An example of this is when your app reads data items in a repeating cycle.
- Use `volatile-ttl` if you can estimate which keys are good candidates for eviction
  from your code and assign short TTLs to them. Note also that if you make good use of
  key expiration, then you are less likely to run into the cache memory limit in the
  first place.

The `volatile-lru` and `volatile-random` policies are mainly useful when you want to use
a single Redis instance for both caching and for a set of persistent keys. However,
you should consider running two separate Redis instances in a case like this, if possible.

Also note that setting an `expire` value for a key costs memory, so a
policy like `allkeys-lru` is more memory efficient since it doesn't need an
`expire` value to operate.
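
For example, to apply the common default suggested above, you could set the policy in `redis.conf` (or at runtime with `CONFIG SET maxmemory-policy allkeys-lru`):

```
maxmemory-policy allkeys-lru
```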

## Approximated LRU algorithm {#apx-lru}

The Redis LRU algorithm uses an approximation of the least recently used
keys rather than calculating them exactly. It samples a small number of keys
at random and then evicts the ones with the longest time since last access.

From Redis 3.0 onwards, the algorithm also tracks a pool of good
candidates for eviction. This improves the performance of the algorithm, making
it a close approximation to a true LRU algorithm.

You can tune the performance of the algorithm by changing the number of samples to check
before every eviction with the `maxmemory-samples` configuration directive:

```
maxmemory-samples 5
```
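
The sampling idea can be sketched in a few lines of Python. This is an illustrative model only, not the actual Redis implementation, which tracks per-key idle time internally and keeps the pool of eviction candidates mentioned above:

```python
import random

# Hypothetical sketch of sampled ("approximated") LRU eviction. A plain dict
# mapping key -> last-access timestamp stands in for the Redis keyspace.
def evict_one(last_access, samples=5):
    """Sample up to `samples` random keys and evict the least recently used one."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])  # oldest access wins
    del last_access[victim]
    return victim

cache = {"a": 1.0, "b": 2.0, "c": 3.0}  # key -> last access time
print(evict_one(cache, samples=3))      # samples the whole keyspace here, so "a" is evicted
```

With a sample size as large as the keyspace this degenerates to exact LRU; the point of a small sample is to bound the work done per eviction.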

The reason Redis does not use a true LRU implementation is because it
costs more memory. However, the approximation is virtually equivalent for an

@@ -159,7 +181,7 @@

difference in your cache misses rate.

It is very simple to experiment in production with different values for the
sample size using the `CONFIG SET maxmemory-samples <count>` command.
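
For example, you might try a larger sample size from `redis-cli` and watch the hit rate afterwards (the value 10 here is just an illustration):

```
127.0.0.1:6379> CONFIG SET maxmemory-samples 10
OK
```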

## LFU eviction

Starting with Redis 4.0, the [Least Frequently Used eviction mode](http://antirez.com/news/109) is available. This mode may work better (provide a better
hits/misses ratio) in certain cases. In LFU mode, Redis will try to track
