@@ -28,7 +28,7 @@ eviction policy until the total memory used is back below the limit.

Note that when a command adds a lot of data to the cache (for example, a big set
intersection stored into a new key), this might temporarily exceed the limit by
- a long way.
+ a large amount.

The sections below explain how to [configure the memory limit](#maxmem) for the cache
and also describe the available [eviction policies](#eviction-policies) and when to
@@ -61,25 +61,35 @@ When the size of your cache exceeds the limit set by `maxmemory`, Redis will
enforce your chosen [eviction policy](#eviction-policies) to prevent any
further growth of the cache.

- ### Setting `maxmemory` for a replicated instance
+ ### Setting `maxmemory` for a replicated or persisted instance

- If you are using replication for an instance, Redis will use some
- RAM as a buffer to store the set of updates waiting to be written to the replicas.
+ If you are using
+ [replication]({{< relref "/operate/rs/databases/durability-ha/replication" >}})
+ or [persistence]({{< relref "/operate/rs/databases/configure/database-persistence" >}})
+ for a server, Redis will use some RAM as a buffer to store the set of updates waiting
+ to be written to the replicas or AOF files.
The memory used by this buffer is not included in the total that
is compared to `maxmemory` to see if eviction is required.

This is because the key evictions themselves generate updates that must be added
- to the buffer to send to the replicas. If the updates were counted among the used
+ to the buffer. If the updates were counted among the used
memory then in some circumstances, the memory saved by
evicting keys would be immediately used up by the update data added to the buffer.
This, in turn, would trigger even more evictions and the resulting feedback loop
could evict many items from the cache unnecessarily.

- If you are using replication, we recommend that you set `maxmemory` to leave a
- little RAM free to store the replication buffers. Note that this is not
+ If you are using replication or persistence, we recommend that you set
+ `maxmemory` to leave a little RAM free to store the buffers. Note that this is not
necessary for the `noeviction` policy (see [the section below](#eviction-policies)
for more information about eviction policies).

+ The [`INFO`]({{< relref "/commands/info" >}}) command returns a
+ `mem_not_counted_for_evict` data item in the `memory` section (you can use
+ the `INFO memory` option to see just this section). This is the amount of
+ memory currently used by the buffers. Although the exact amount will vary,
+ you can use it to estimate how much to subtract from the total available RAM
+ before setting `maxmemory`.
+
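The adjustment described above amounts to a simple calculation. The sketch below is illustrative Python, not part of Redis; the function name and the safety margin are our own assumptions:

```python
def suggest_maxmemory(available_ram_bytes, buffer_bytes, headroom_pct=10):
    """Suggest a maxmemory value that leaves room for the replication/AOF
    buffers (the observed mem_not_counted_for_evict value) plus a safety
    margin of headroom_pct percent. Integer arithmetic keeps the result exact."""
    usable = available_ram_bytes - buffer_bytes
    return usable * (100 - headroom_pct) // 100

# Example: 1 GiB of RAM with 64 MiB currently used by buffers.
print(suggest_maxmemory(1024**3, 64 * 1024**2))  # 905969664
```

The headroom accounts for the fact that the buffer size varies over time, so a snapshot of `mem_not_counted_for_evict` is only an estimate.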
## Eviction policies

Use the `maxmemory-policy` configuration directive to select the eviction
@@ -90,7 +100,7 @@ The following policies are available:
- `noeviction`: Keys are not evicted but the server will return an error
  when you try to execute commands that cache new data. If your database uses replication
  then this condition only applies to the primary database. Note that commands that only
- read data still work as normal.
+ read existing data still work as normal.
- `allkeys-lru`: Evict the [least recently used](#apx-lru) (LRU) keys.
- `allkeys-lfu`: Evict the [least frequently used](#lfu-eviction) (LFU) keys.
- `allkeys-random`: Evict keys at random.
@@ -108,12 +118,8 @@ field set to true, or for `volatile-ttl`, if no keys have a time-to-live value s

You should choose an eviction policy that fits the way your app
accesses keys. You may be able to predict the access pattern in advance
- but you can also
-
- Picking the right eviction policy is important depending on the access pattern
- of your application, however you can reconfigure the policy at runtime while
- the application is running, and monitor the number of cache misses and hits
- using the Redis [`INFO`]({{< relref "/commands/info" >}}) output to tune your setup.
+ but you can also use information from the [`INFO`](#using-the-info-command)
+ command at runtime to check or improve your choice of policy.

As a rule of thumb:

@@ -122,11 +128,11 @@ As a rule of thumb:
  [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), so
  `allkeys-lru` is a good default option if you have no reason to prefer any others.
- Use `allkeys-random` when you expect all keys to be accessed with roughly equal
- frequency. An examples of this is when your app reads data items in a repeating cycle.
- - Use `volatile-ttl` if you can estimate which keys are good candidates for eviction
- from your code and assign short TTLs to them. Note also that if you make good use of
- key expiration, then you are less likely to run into the cache memory limit in the
- first place.
+ frequency. An example of this is when your app reads data items in a repeating cycle.
+ - Use `volatile-ttl` if your code can estimate which keys are good candidates for eviction
+ and assign short TTLs to them. Note also that if you make good use of
+ key expiration, then you are less likely to run into the cache memory limit because keys
+ will often expire before they need to be evicted.

The `volatile-lru` and `volatile-random` policies are mainly useful when you want to use
a single Redis instance for both caching and for a set of persistent keys. However,
@@ -136,6 +142,54 @@ Also note that setting an `expire` value for a key costs memory, so a
policy like `allkeys-lru` is more memory efficient since it doesn't need an
`expire` value to operate.

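For reference, the two directives discussed above go together in `redis.conf` (and can also be changed at runtime with `CONFIG SET`); the values below are only illustrative:

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```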
+ ### Using the `INFO` command
+
+ The [`INFO`]({{< relref "/commands/info" >}}) command provides several pieces
+ of data that are useful for checking the performance of your cache. In particular,
+ the `INFO stats` section includes two important entries, `keyspace_hits` (the number of
+ times keys were successfully found in the cache) and `keyspace_misses` (the number
+ of times a key was requested but was not in the cache). The calculation below gives
+ the percentage of attempted accesses that were satisfied from the cache:
+
+ ```
+ keyspace_hits / (keyspace_hits + keyspace_misses) * 100
+ ```
+
+ Check that this is roughly equal to what you would expect for your app
+ (naturally, a higher percentage indicates better cache performance).
+
+ {{< note >}} When the [`EXISTS`]({{< relref "/commands/exists" >}})
+ command reports that a key is absent then this is counted as a keyspace miss.
+ {{< /note >}}
+
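As a quick sketch, the calculation above can be expressed in Python (the function name and the sample counter values are ours, not from Redis):

```python
def cache_hit_percentage(keyspace_hits, keyspace_misses):
    """Percentage of key lookups satisfied from the cache, based on the
    keyspace_hits and keyspace_misses counters from INFO stats."""
    total = keyspace_hits + keyspace_misses
    if total == 0:
        return 0.0  # no lookups yet, so no meaningful hit rate
    return keyspace_hits / total * 100

print(cache_hit_percentage(7500, 2500))  # 75.0
```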
+ If the percentage of hits is lower than expected, then this might
+ mean you are not using the best eviction policy. For example, if
+ you believe that a small subset of "hot" data (that will easily fit into the
+ cache) should account for about 75% of accesses, you could reasonably
+ expect the percentage of keyspace hits to be around 75%. If the actual
+ percentage is lower, check the value of `evicted_keys` (also returned by
+ `INFO stats`). A high number of evictions would suggest that the
+ wrong keys are being evicted too often by your chosen policy
+ (so `allkeys-lru` might be a good option here). If the
+ value of `evicted_keys` is low and you are using key expiration, check
+ `expired_keys` to see how many keys have expired. If this number is high,
+ you might be using a TTL that is too low or you are choosing the wrong
+ keys to expire, and this is causing keys to disappear from the cache
+ before they should.
+
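The triage steps above could be sketched as a small helper that interprets a few `INFO stats` counters. The function name and the thresholds here are illustrative assumptions, not Redis behavior:

```python
def diagnose_low_hit_rate(evicted_keys, expired_keys, total_lookups):
    """Rough triage of a lower-than-expected hit rate, following the
    reasoning in the text: many evictions point at the eviction policy,
    many expirations point at TTL choices. Thresholds are arbitrary."""
    if total_lookups == 0:
        return "no traffic yet"
    if evicted_keys / total_lookups > 0.1:
        return "high evictions: consider a different eviction policy"
    if expired_keys / total_lookups > 0.1:
        return "high expirations: review TTL values and which keys expire"
    return "evictions and expirations look low: look elsewhere"

print(diagnose_low_hit_rate(5000, 100, 10000))
```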
+ Other useful pieces of information returned by `INFO` include:
+
+ - `used_memory_dataset`: (`memory` section) The amount of memory used for
+ cached data. If this is greater than `maxmemory`, then the difference
+ is the amount by which `maxmemory` has been exceeded.
+ - `current_eviction_exceeded_time`: (`stats` section) The time since
+ the cache last started to exceed `maxmemory`.
+ - `commandstats` section: Among other things, this reports the number of
+ times each command issued to the server has been rejected. If you are
+ using `noeviction` or one of the `volatile-xxx` policies, you can use
+ this to find which commands are being stopped by the `maxmemory` limit
+ and how often it is happening.
+
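`INFO` output is a series of `name:value` lines grouped under `# section` headers, so the entries listed above can be pulled out with a few lines of Python. This parser is a minimal sketch with made-up sample values (real `INFO` output has many more fields, and client libraries such as redis-py already return a parsed version):

```python
def parse_info(raw):
    """Parse INFO-style 'name:value' lines into a dict, skipping
    '# section' headers and blank lines. Values are kept as strings."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(":")
        fields[name] = value
    return fields

sample = """# Memory
used_memory_dataset:52428800
# Stats
evicted_keys:1234
current_eviction_exceeded_time:0
"""
info = parse_info(sample)
print(info["evicted_keys"])  # 1234
```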
## Approximated LRU algorithm {#apx-lru}

The Redis LRU algorithm uses an approximation of the least recently used