Redis is commonly used as a cache to speed up read accesses to a slower server
or database. Since cache entries are copies of persistently-stored data, it
is usually safe to evict them when the cache runs out of memory (they can be
cached again in the future if necessary).

Redis lets you specify an eviction policy to evict keys automatically
when the size of the cache exceeds a set memory limit. Whenever a client
runs a new command that adds more data to the cache, Redis checks the memory usage.
If it is greater than the limit, Redis evicts keys according to the chosen
eviction policy until the total memory used is back below the limit.

Note that when a command adds a lot of data to the cache (for example, a big set
intersection stored into a new key), this might temporarily exceed the limit by
a large amount.
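
The check-and-evict cycle described above can be sketched in Python (this is an
illustrative model, not Redis internals; `add_entry` and `pick_victim` are made-up names):

```python
def add_entry(cache, key, value, maxmemory, pick_victim):
    """Add an entry, then evict keys until usage is back under the limit."""
    cache[key] = value
    # The write happens first, so usage can briefly exceed the limit.
    used = sum(len(v) for v in cache.values())
    while used > maxmemory and len(cache) > 1:
        victim = pick_victim(cache)  # stands in for the configured eviction policy
        used -= len(cache.pop(victim))
    return cache
```

Here `pick_victim` models the eviction policy; for example, `lambda c: next(iter(c))`
evicts the oldest-inserted key, loosely imitating LRU on insertion order.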

The sections below explain how to [configure the memory limit](#maxmem) for the cache
and also describe the available [eviction policies](#eviction-policies) and when to
use them.

## Using the `maxmemory` configuration directive {#maxmem}

The `maxmemory` configuration directive specifies
the maximum amount of memory to use for the cache data. You can
set `maxmemory` with the [`redis.conf`](https://github.com/redis/redis/blob/7.4.0/redis.conf)
file at startup time. For example, to configure a memory limit of 100 megabytes,
you can use the following directive inside `redis.conf`:

```
maxmemory 100mb
```

You can also use [`CONFIG SET`]({{< relref "/commands/config-set" >}}) to
set `maxmemory` at runtime using [`redis-cli`]({{< relref "/develop/connect/cli" >}}):

```bash
> CONFIG SET maxmemory 100mb
```

Set `maxmemory` to zero to specify that you don't want to limit the memory
for the dataset. This is the default behavior for 64-bit systems, while 32-bit
systems use an implicit memory limit of 3GB.

When the size of your cache exceeds the limit set by `maxmemory`, Redis will
enforce your chosen [eviction policy](#eviction-policies) to prevent any
further growth of the cache.

### Setting `maxmemory` for a replicated or persisted instance

If you are using replication
or [persistence]({{< relref "/operate/rs/databases/configure/database-persistence" >}})
for a server, Redis will use some RAM as a buffer to store the set of updates waiting
to be written to the replicas or AOF files.
The memory used by this buffer is not included in the total that
is compared to `maxmemory` to see if eviction is required.

This is because the key evictions themselves generate updates that must be added
to the buffer. If the updates were counted among the used
memory then in some circumstances, the memory saved by
evicting keys would be immediately used up by the update data added to the buffer.
This, in turn, would trigger even more evictions and the resulting feedback loop
could evict many items from the cache unnecessarily.

If you are using replication or persistence, we recommend that you set
`maxmemory` to leave a little RAM free to store the buffers. Note that this is not
necessary for the `noeviction` policy (see [the section below](#eviction-policies)
for more information about eviction policies).

The [`INFO`]({{< relref "/commands/info" >}}) command returns a
`mem_not_counted_for_evict` value in the `memory` section (you can use
the `INFO memory` option to see just this section). This is the amount of
memory currently used by the buffers. Although the exact amount will vary,
you can use it to estimate how much to subtract from the total available RAM
before setting `maxmemory`.
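
Although the exact numbers depend on your workload, the estimate might look like
this (a sketch with made-up figures; only `mem_not_counted_for_evict` itself comes
from Redis):

```python
# All values in megabytes; the buffer figure would be read from INFO memory
total_ram = 4096        # RAM you want Redis to use overall (assumption)
buffer_usage = 256      # observed mem_not_counted_for_evict, rounded up
safety_margin = 64      # extra headroom (assumption)

maxmemory = total_ram - buffer_usage - safety_margin
print(f"maxmemory {maxmemory}mb")  # maxmemory 3776mb
```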

## Eviction policies

Use the `maxmemory-policy` configuration directive to select the eviction
policy you want to use when the limit set by `maxmemory` is reached.
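
For example, to select the `allkeys-lru` policy, add the following line to `redis.conf`
(you can also set it at runtime with `CONFIG SET maxmemory-policy allkeys-lru`):

```
maxmemory-policy allkeys-lru
```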

The following policies are available:

- `noeviction`: Keys are not evicted but the server will return an error
when you try to execute commands that cache new data. If your database uses replication
then this condition only applies to the primary database. Note that commands that only
read existing data still work as normal.
- `allkeys-lru`: Evict the [least recently used](#apx-lru) (LRU) keys.
- `allkeys-lfu`: Evict the [least frequently used](#lfu-eviction) (LFU) keys.
- `allkeys-random`: Evict keys at random.
- `volatile-lru`: Evict the least recently used keys that have the `expire` field
set to `true`.
- `volatile-lfu`: Evict the least frequently used keys that have the `expire` field
set to `true`.
- `volatile-random`: Evict keys at random only if they have the `expire` field set
to `true`.
- `volatile-ttl`: Evict keys with the `expire` field set to `true` that have the
shortest remaining time-to-live (TTL) value.

The `volatile-xxx` policies behave like `noeviction` if no keys have the `expire`
field set to true, or for `volatile-ttl`, if no keys have a time-to-live value set.

You should choose an eviction policy that fits the way your app
accesses keys. You may be able to predict the access pattern in advance
but you can also use information from the `INFO` command at runtime to
check or improve your choice of policy (see
[Using the `INFO` command](#using-the-info-command) below for more information).

As a rule of thumb:

- Use `allkeys-lru` when you expect that a subset of elements will be accessed far
more often than the rest. This is a very common case according to the
[Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), so
`allkeys-lru` is a good default option if you have no reason to prefer any others.
- Use `allkeys-random` when you expect all keys to be accessed with roughly equal
frequency. An example of this is when your app reads data items in a repeating cycle.
- Use `volatile-ttl` if your code can estimate which keys are good candidates for eviction
and assign short TTLs to them. Note also that if you make good use of
key expiration, then you are less likely to run into the cache memory limit because keys
will often expire before they need to be evicted.

The `volatile-lru` and `volatile-random` policies are mainly useful when you want to use
a single Redis instance for both caching and for a set of persistent keys. However,
you should consider running two separate Redis instances in a case like this, if possible.

Also note that setting an `expire` value for a key costs memory, so a
policy like `allkeys-lru` is more memory efficient since it doesn't need an
`expire` value to operate.

### Using the `INFO` command

The [`INFO`]({{< relref "/commands/info" >}}) command provides several pieces
of data that are useful for checking the performance of your cache. In particular,
the `INFO stats` section includes two important entries, `keyspace_hits` (the number of
times keys were successfully found in the cache) and `keyspace_misses` (the number
of times a key was requested but was not in the cache). The calculation below gives
the percentage of attempted accesses that were satisfied from the cache:
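
For example, with hypothetical counter values taken from `INFO stats`:

```python
# Hypothetical values reported by INFO stats
keyspace_hits = 7500
keyspace_misses = 2500

# Percentage of accesses that were satisfied from the cache
hit_rate = 100 * keyspace_hits / (keyspace_hits + keyspace_misses)
print(hit_rate)  # 75.0
```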

Check that this is roughly equal to what you would expect for your app
(naturally, a higher percentage indicates better cache performance).

{{< note >}} When the [`EXISTS`]({{< relref "/commands/exists" >}})
command reports that a key is absent then this is counted as a keyspace miss.
{{< /note >}}

If the percentage of hits is lower than expected, then this might
mean you are not using the best eviction policy. For example, if
you believe that a small subset of "hot" data (that will easily fit into the
cache) should account for about 75% of accesses, you could reasonably
expect the percentage of keyspace hits to be around 75%. If the actual
percentage is lower, check the value of `evicted_keys` (also returned by
`INFO stats`). A high proportion of evictions would suggest that the
wrong keys are being evicted too often by your chosen policy
(so `allkeys-lru` might be a good option here). If the
value of `evicted_keys` is low and you are using key expiration, check
`expired_keys` to see how many keys have expired. If this number is high,
you might be using a TTL that is too low or you are choosing the wrong
keys to expire and this is causing keys to disappear from the cache
before they should.
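
The triage steps above could be sketched as follows (illustrative only; the
`diagnose` helper and its thresholds are invented, and `stats` is assumed to be
a dict of `INFO stats` counters):

```python
def diagnose(stats, expected_hit_rate=75.0):
    """Rough triage of cache behaviour from INFO stats counters (illustrative)."""
    total = stats["keyspace_hits"] + stats["keyspace_misses"]
    hit_rate = 100 * stats["keyspace_hits"] / total
    if hit_rate >= expected_hit_rate:
        return "ok"
    if stats["evicted_keys"] > stats["keyspace_misses"] // 2:  # arbitrary threshold
        return "frequent evictions: consider allkeys-lru"
    if stats["expired_keys"] > stats["keyspace_misses"] // 2:  # arbitrary threshold
        return "keys expiring early: review your TTL values"
    return "low hit rate: review the eviction policy and access pattern"
```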

Other useful pieces of information returned by `INFO` include:

- `used_memory_dataset`: (`memory` section) The amount of memory used for
cached data. If this is greater than `maxmemory`, then the difference
is the amount by which `maxmemory` has been exceeded.
- `current_eviction_exceeded_time`: (`stats` section) The time since
the cache last started to exceed `maxmemory`.
- `commandstats` section: Among other things, this reports the number of
times each command issued to the server has been rejected. If you are
using `noeviction` or one of the `volatile-xxx` policies, you can use
this to find which commands are being stopped by the `maxmemory` limit
and how often it is happening.

## Approximated LRU algorithm {#apx-lru}

The Redis LRU algorithm uses an approximation of the least recently used
keys rather than calculating them exactly. It samples a small number of keys
at random and then evicts the ones with the longest time since last access.

From Redis 3.0 onwards, the algorithm also tracks a pool of good
candidates for eviction. This improves the performance of the algorithm, making
it a close approximation to a true LRU algorithm.

You can tune the performance of the algorithm by changing the number of samples to check
before every eviction with the `maxmemory-samples` configuration directive:

```
maxmemory-samples 5
```

The reason Redis does not use a true LRU implementation is because it
costs more memory. However, the approximation is virtually equivalent for an
application using Redis.

It is very simple to experiment in production with different values for the
sample size by using the `CONFIG SET maxmemory-samples <count>` command.

## LFU eviction

Starting with Redis 4.0, the [Least Frequently Used eviction mode](http://antirez.com/news/109) is available. This mode may work better (provide a better
hits/misses ratio) in certain cases. In LFU mode, Redis will try to track