src/content/docs/kv/api/read-key-value-pairs.mdx
### Requesting more keys per Worker invocation with bulk requests
Workers are limited to 1000 operations to external services per invocation. This applies to Workers KV, as documented in [Workers KV limits](/kv/platform/limits/).
To read more than 1000 keys per operation, you can use the bulk read operations to read multiple keys in a single operation. These count as a single operation against the 1000 operation limit.
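A bulk read can be sketched as follows. This is a hedged example: the in-memory `stubKV` object stands in for a real KV binding (for example, `env.NAMESPACE` in a Worker) and mimics only the array-of-keys form of `get` described here.

```javascript
// Sketch of a bulk read, assuming a stub in place of a real KV binding.
// In a Worker, passing an array of keys to the binding's get() performs
// one bulk operation, which counts as a single operation against the
// 1000-operations-per-invocation limit.
const stubKV = {
  store: new Map([
    ["key1", "value1"],
    ["key2", "value2"],
  ]),
  async get(keys) {
    if (Array.isArray(keys)) {
      // Bulk form: resolves to a Map of key -> value (null if missing).
      return new Map(keys.map((k) => [k, this.store.get(k) ?? null]));
    }
    // Single-key form: resolves to the value or null.
    return this.store.get(keys) ?? null;
  },
};

async function bulkRead() {
  return stubKV.get(["key1", "key2", "missing"]);
}
```

In a real Worker you would call the binding directly; the stub exists only to make the snippet self-contained.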
### Reducing cardinality by coalescing keys
If you have a set of related key-value pairs with a mixed usage pattern (some hot keys and some cold keys), consider coalescing them so that a single cached fetch retrieves all of the values even if you only need one. This is useful because the cooler keys share access patterns with the hotter keys, and are therefore more likely to be present in the cache. Some approaches to accomplishing this are described below.
#### Merging into a "super" KV entry
One coalescing technique is to make all the keys and values part of a super key-value object. An example is shown below.
```
key1: value1
key2: value2
key3: value3
```
becomes
```
coalesced: {
  key1: value1,
  key2: value2,
  key3: value3,
}
```
By coalescing the values, the cold keys benefit from being kept alive in the cache because of access patterns of the warmer keys.
This works best if you do not expect to update the values independently of each other, since independent updates can introduce race conditions unless you synchronize writers carefully.
- **Advantage**: Infrequently accessed keys are kept in the cache.
- **Disadvantage**: The size of the coalesced value can push your Worker out of its memory limits. Safely updating the value requires a [locking mechanism](#concurrent-writers) of some kind.
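Reading a single value out of a coalesced entry can be sketched as follows. This is a hedged example: `stubKV` is an in-memory stand-in for a real KV binding, and `coalesced` matches the illustrative key name used in this section.

```javascript
// Sketch: fetching one field from a coalesced "super" entry, assuming a
// stub in place of a real KV binding (env.NAMESPACE in a Worker).
const stubKV = {
  store: new Map([
    [
      "coalesced",
      JSON.stringify({ key1: "value1", key2: "value2", key3: "value3" }),
    ],
  ]),
  async get(key, options) {
    const raw = this.store.get(key) ?? null;
    // { type: "json" } mirrors the KV option that parses the body as JSON.
    if (raw !== null && options && options.type === "json") {
      return JSON.parse(raw);
    }
    return raw;
  },
};

async function readKey2() {
  // A single cached fetch retrieves all three values, even though the
  // caller only needs key2; the colder keys ride along in the cache.
  const all = await stubKV.get("coalesced", { type: "json" });
  return all ? all.key2 : null;
}
```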
#### Storing in metadata and shared prefix
If you do not want to merge the values into a single KV entry as described above, and your associated values fit within the [metadata limit](/workers/platform/limits/#kv-limits), you can store the values in the metadata instead of the body. If you then name the keys with a shared, unique prefix, the list operation will return the values, letting you bulk read multiple keys at once through a single, cacheable list operation.
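A sketch of this pattern, using a hypothetical `user:` prefix and a hypothetical `plan` metadata field (the `stubKV` object is an in-memory stand-in that mimics only the `prefix` option of a KV binding's `list()`):

```javascript
// Sketch: values stored in metadata under a shared prefix, so a single
// cacheable list() call returns all of them. stubKV stands in for a
// real KV binding; "user:" and "plan" are illustrative names only.
const stubKV = {
  entries: [
    { name: "user:alice", metadata: { plan: "free" } },
    { name: "user:bob", metadata: { plan: "pro" } },
    { name: "unrelated", metadata: { plan: "n/a" } },
  ],
  async list(options) {
    const prefix = (options && options.prefix) || "";
    // Each returned key carries its metadata, so no per-key get() calls
    // are needed to recover the values.
    return { keys: this.entries.filter((e) => e.name.startsWith(prefix)) };
  },
};

async function plansByUser() {
  const { keys } = await stubKV.list({ prefix: "user:" });
  return Object.fromEntries(keys.map((k) => [k.name, k.metadata.plan]));
}
```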
:::note[List performance]
List operations are not "write aware". This means that while they are subject to tiering, they only stay cached for up to one minute past when they were last read, even at upper tiers.
By comparison, `get` operations are cached at the upper tiers for a service-managed duration that is always longer than your `cacheTtl`. Additionally, the `cacheTtl` option lets you extend the duration of a single key lookup at the data center closest to the request.
:::
Since list operations are not "write aware" as described above, they are only ever cached for one minute. They are still subject to [tiered caching](https://blog.cloudflare.com/faster-workers-kv-architecture#a-new-horizontally-scaled-tiered-cache) as described in our blog post, so requests within the region and globally are amortized to keep the asset closer to your request. However, you still need to read the value about once every 30 seconds to make sure it is always present within Cloudflare's caches.