src/content/docs/kv/api/read-key-value-pairs.mdx
### Reducing cardinality by coalescing keys
If you have a set of related key-value pairs with a mixed usage pattern (some hot keys and some cold keys), consider coalescing them. By coalescing cold keys with hot keys, the cold keys are cached alongside the hot keys, which can provide faster reads than if they were uncached as individual keys.
#### Merging into a "super" KV entry
```
coalesced: {
  ...
}
```
By coalescing the values, the cold keys benefit from being kept warm in the cache by the access patterns of the hotter keys.
This works best if you do not need to update the values independently of each other, since independent updates to the same entry can pose race conditions.
- **Advantage**: Infrequently accessed keys are kept in the cache.
- **Disadvantage**: The size of the resultant value can push your Worker out of its memory limits. Safely updating the value requires a locking mechanism of some kind.
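The "super" KV entry pattern described above can be sketched as follows. This is a minimal illustration, not the documentation's own example: the key and field names are invented, and a plain `Map` stands in for a KV namespace so the snippet runs outside a Worker (in a Worker you would call `get()`/`put()` on your namespace binding instead).

```javascript
// A Map stands in for a KV namespace so this sketch is self-contained;
// the key and field names below are illustrative only.
const kv = new Map();

// Instead of writing profile/settings/auditLog as three separate keys,
// store one coalesced "super" entry:
kv.set(
  "user:42",
  JSON.stringify({
    profile: { name: "Ada" },     // hot: read on every request
    settings: { theme: "dark" },  // warm
    auditLog: ["created"],        // cold: kept warm by the hot reads
  })
);

// A single (cacheable) fetch retrieves every value, even when the
// caller only needs one of them.
function readField(superKey, field) {
  const raw = kv.get(superKey);
  return raw === undefined ? null : JSON.parse(raw)[field];
}

console.log(readField("user:42", "settings").theme); // → dark
```

Because every read fetches the whole entry, even a request that only needs the cold `auditLog` value refreshes the same cached object the hot reads use.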
#### Storing in metadata and shared prefix
If you do not want to merge into a single KV entry as described above, and your associated values fit within the [metadata limit](/workers/platform/limits/#kv-limits), then you can store the values within the metadata instead of the body. If you then name the keys with a shared unique prefix, your list operation will return the values, letting you bulk read multiple keys at once through a single, cacheable list operation.
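A sketch of this metadata-with-shared-prefix pattern is below. The key names and metadata shape are invented for illustration, and an in-memory array stands in for the namespace so the snippet is self-contained; in a Worker you would use `put(key, value, { metadata })` and `list({ prefix })` on your namespace binding.

```javascript
// An in-memory array stands in for a KV namespace so this sketch is
// self-contained; key names and metadata fields are illustrative only.
const store = [];

// Store each small value in metadata, under a shared unique prefix.
// (List results include each key's metadata, but not its value body.)
function put(name, metadata) {
  store.push({ name, metadata });
}

put("config:v1:featureA", { enabled: true });
put("config:v1:featureB", { enabled: false });

// One cacheable list operation returns every value via metadata,
// with no per-key get() calls.
function listByPrefix(prefix) {
  return store.filter((entry) => entry.name.startsWith(prefix));
}

const entries = listByPrefix("config:v1:");
console.log(entries.map((e) => e.metadata.enabled)); // → [ true, false ]
```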
:::note[List performance]
List operations are not "write aware". This means that while they are subject to tiering, a list result only stays cached for up to one minute past when it was last read, even at upper tiers.
By comparison, `get` operations are cached at the upper tiers for a service-managed duration that is always longer than your `cacheTtl`. Additionally, `cacheTtl` lets you extend the duration of a single key lookup at the data center closest to the request.
:::
Since list operations are not "write aware" as described above, they are only ever cached for 1 minute. They are still subject to [tiered caching](https://blog.cloudflare.com/faster-workers-kv-architecture#a-new-horizontally-scaled-tiered-cache) as described in our blog post, so requests within the region and globally are amortized to keep the asset closer to your request. However, you still need to be reading the value about once every 30 seconds to make sure it is always present within Cloudflare's caches.
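The `cacheTtl` option mentioned above is passed to `get()`. A minimal sketch, assuming a KV binding named `MY_KV` and a key named `coalesced` (both invented for illustration); the stub `env` lets the snippet run outside a Worker, whereas in a deployed Worker `env.MY_KV` is a real KV namespace binding:

```javascript
// Binding and key names are assumptions, not from the original doc.
const worker = {
  async fetch(request, env) {
    // cacheTtl (in seconds, minimum 60) extends how long the data center
    // closest to the request keeps this key cached.
    const value = await env.MY_KV.get("coalesced", { cacheTtl: 300 });
    return new Response(value ?? "not found");
  },
};

// Stub namespace for local experimentation outside a Worker runtime.
const env = {
  MY_KV: {
    async get(key, options) {
      return key === "coalesced" ? "hello" : null;
    },
  },
};

const res = await worker.fetch(undefined, env);
console.log(await res.text()); // → hello
```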