From 715d9acf7241675050ce2d9ec1e348278326b95b Mon Sep 17 00:00:00 2001
From: Jun Lee
Date: Fri, 16 May 2025 15:27:30 +0100
Subject: [PATCH 1/4] Including coalescing keys guidance

---
 .../docs/kv/api/read-key-value-pairs.mdx | 47 ++++++++++++++++++-
 1 file changed, 45 insertions(+), 2 deletions(-)

diff --git a/src/content/docs/kv/api/read-key-value-pairs.mdx b/src/content/docs/kv/api/read-key-value-pairs.mdx
index 2d6ae78ee627942..305008e5397f0f9 100644
--- a/src/content/docs/kv/api/read-key-value-pairs.mdx
+++ b/src/content/docs/kv/api/read-key-value-pairs.mdx
@@ -255,9 +255,52 @@ The effective `cacheTtl` of an already cached item can be reduced by getting it
 
 ### Requesting more keys per Worker invocation with bulk requests
 
-Workers are limited to 1000 operations to external services per invocation. This applies to Workers KV, as documented in [Workers KV limits](/kv/platform/limits/).
+Workers are limited to 1000 operations to external services per invocation. This applies to Workers KV, as documented in [Workers KV limits](/kv/platform/limits/).
 
-To read more than 1000 keys per operation, you can use the bulk read operations to read multiple keys in a single operation. These count as a single operation against the 1000 operation limit.
+To read more than 1000 keys per operation, you can use the bulk read operations to read multiple keys in a single operation. These count as a single operation against the 1000 operation limit.
+
+### Reducing cardinality by coalescing keys
+
+If you have a set of related key-value pairs that have a mixed usage pattern (some hot keys and some cold keys), consider coalescing the need to fetch them, so that a single cached fetch retrieves all the values even if you only need one of the values. This is useful because the cooler keys share access patterns with the hotter keys, and are therefore more likely to be present in the cache. Some approaches to accomplishing this are described below.
+
+#### Merging into a "super" KV entry
+
+One coalescing technique is to make all the keys and values part of a super key-value object. An example is shown below.
+
+```
+key1: value1
+key2: value2
+key3: value3
+```
+
+becomes
+
+```
+coalesced: {
+  key1: value1,
+  key2: value2,
+  key3: value3,
+}
+```
+
+By coalescing the values, the cold keys benefit from being kept alive in the cache because of access patterns of the warmer keys.
+
+This works best if you are not expecting the need to update the values independently of each other, which can pose race conditions unless you are careful about how you synchronize.
+
+- **Advantage**: Infrequently accessed keys are kept in the cache.
+- **Disadvantage**: Size of the resultant value can push your worker out of its memory limits. Safely updating the value requires a [locking mechanism](#concurrent-writers) of some kind.
+
+#### Storing in metadata and shared prefix
+
+If you do not want to merge into a single KV entry as described above, and your associated values fit within the [metadata limit](/workers/platform/limits/#kv-limits), then you can store the values within the metadata instead of the body. If you then name the keys with a shared unique prefix, your list operation will contain the value, letting you bulk read multiple keys at once through a single, cacheable list operation.
+
+:::note[List performance]
+List operations are not "write aware". This means that while they are subject to tiering, they only stay cached for up to one minute past when it was last read, even at upper tiers.
+
+By comparison, `get` operations are cached at the upper tiers for a service managed duration that is always longer than your cacheTtl. Additionally, the cacheTtl lets you extend the duration of a single key lookup at the data center closest to the request.
+:::
+
+Since list operations are not "write aware" as described above, they are only ever cached for 1 minute. They are still subject to [tiered caching](https://blog.cloudflare.com/faster-workers-kv-architecture#a-new-horizontally-scaled-tiered-cache) as described in our blog post, so requests within the region and globally are amortized to keep the asset closer to your request. However, you still need to be reading the value about once every 30 seconds to make sure it is always present within Cloudflare's caches.
 
 ## Other methods to access KV

From 1d2a9d45cf7770f30b3924be87bac0e35fe5d019 Mon Sep 17 00:00:00 2001
From: Jun Lee
Date: Fri, 16 May 2025 16:09:06 +0100
Subject: [PATCH 2/4] Apply suggestions from code review

Co-authored-by: Thomas Gauvin <35609369+thomasgauvin@users.noreply.github.com>
---
 .../docs/kv/api/read-key-value-pairs.mdx | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/src/content/docs/kv/api/read-key-value-pairs.mdx b/src/content/docs/kv/api/read-key-value-pairs.mdx
index 305008e5397f0f9..b5446f615968358 100644
--- a/src/content/docs/kv/api/read-key-value-pairs.mdx
+++ b/src/content/docs/kv/api/read-key-value-pairs.mdx
@@ -261,7 +261,7 @@ To read more than 1000 keys per operation, you can use the bulk read operations
 
 ### Reducing cardinality by coalescing keys
 
-If you have a set of related key-value pairs that have a mixed usage pattern (some hot keys and some cold keys), consider coalescing the need to fetch them, so that a single cached fetch retrieves all the values even if you only need one of the values. This is useful because the cooler keys share access patterns with the hotter keys, and are therefore more likely to be present in the cache. Some approaches to accomplishing this are described below.
+If you have a set of related key-value pairs that have a mixed usage pattern (some hot keys and some cold keys), consider coalescing them. By coalescing cold keys with hot keys, cold keys will be cached alongside hot keys, which can provide faster reads than if they were uncached as individual keys.
 
 #### Merging into a "super" KV entry
@@ -283,24 +283,13 @@ coalesced: {
 }
 ```
 
-By coalescing the values, the cold keys benefit from being kept alive in the cache because of access patterns of the warmer keys.
+By coalescing the values, the cold keys benefit from being kept warm in the cache because of access patterns of the warmer keys.
 
-This works best if you are not expecting the need to update the values independently of each other, which can pose race conditions unless you are careful about how you synchronize.
+This works best if you are not expecting the need to update the values independently of each other, which can pose race conditions.
 
 - **Advantage**: Infrequently accessed keys are kept in the cache.
-- **Disadvantage**: Size of the resultant value can push your worker out of its memory limits. Safely updating the value requires a [locking mechanism](#concurrent-writers) of some kind.
+- **Disadvantage**: Size of the resultant value can push your worker out of its memory limits. Safely updating the value requires a locking mechanism of some kind.
-
-#### Storing in metadata and shared prefix
-
-If you do not want to merge into a single KV entry as described above, and your associated values fit within the [metadata limit](/workers/platform/limits/#kv-limits), then you can store the values within the metadata instead of the body. If you then name the keys with a shared unique prefix, your list operation will contain the value, letting you bulk read multiple keys at once through a single, cacheable list operation.
-
-:::note[List performance]
-List operations are not "write aware". This means that while they are subject to tiering, they only stay cached for up to one minute past when it was last read, even at upper tiers.
-
-By comparison, `get` operations are cached at the upper tiers for a service managed duration that is always longer than your cacheTtl. Additionally, the cacheTtl lets you extend the duration of a single key lookup at the data center closest to the request.
-:::
-
-Since list operations are not "write aware" as described above, they are only ever cached for 1 minute. They are still subject to [tiered caching](https://blog.cloudflare.com/faster-workers-kv-architecture#a-new-horizontally-scaled-tiered-cache) as described in our blog post, so requests within the region and globally are amortized to keep the asset closer to your request. However, you still need to be reading the value about once every 30 seconds to make sure it is always present within Cloudflare's caches.
 
 ## Other methods to access KV

From bafc2df7a0b5b1c220139fb16ff9ef255e03b120 Mon Sep 17 00:00:00 2001
From: Jun Lee
Date: Mon, 19 May 2025 14:07:12 +0100
Subject: [PATCH 3/4] Updating internal link.

---
 src/content/docs/kv/api/read-key-value-pairs.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/kv/api/read-key-value-pairs.mdx b/src/content/docs/kv/api/read-key-value-pairs.mdx
index 305008e5397f0f9..5aba58e94438ea3 100644
--- a/src/content/docs/kv/api/read-key-value-pairs.mdx
+++ b/src/content/docs/kv/api/read-key-value-pairs.mdx
@@ -288,7 +288,7 @@ By coalescing the values, the cold keys benefit from being kept alive in the cac
 
 This works best if you are not expecting the need to update the values independently of each other, which can pose race conditions unless you are careful about how you synchronize.
 
 - **Advantage**: Infrequently accessed keys are kept in the cache.
-- **Disadvantage**: Size of the resultant value can push your worker out of its memory limits. Safely updating the value requires a [locking mechanism](#concurrent-writers) of some kind.
+- **Disadvantage**: Size of the resultant value can push your worker out of its memory limits. Safely updating the value requires a [locking mechanism](/kv/api/write-key-value-pairs/#concurrent-writes-to-the-same-key) of some kind.
 
 #### Storing in metadata and shared prefix

From 7c8ac3bf6581ccf9f8f2557c90f4376319ac2348 Mon Sep 17 00:00:00 2001
From: Jun Lee
Date: Mon, 19 May 2025 14:18:23 +0100
Subject: [PATCH 4/4] Apply suggestions from code review

Co-authored-by: marciocloudflare <83226960+marciocloudflare@users.noreply.github.com>
---
 src/content/docs/kv/api/read-key-value-pairs.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/content/docs/kv/api/read-key-value-pairs.mdx b/src/content/docs/kv/api/read-key-value-pairs.mdx
index 5bed9a87f2c2210..d2feb112498f0ea 100644
--- a/src/content/docs/kv/api/read-key-value-pairs.mdx
+++ b/src/content/docs/kv/api/read-key-value-pairs.mdx
@@ -255,9 +255,9 @@ The effective `cacheTtl` of an already cached item can be reduced by getting it
 
 ### Requesting more keys per Worker invocation with bulk requests
 
-Workers are limited to 1000 operations to external services per invocation. This applies to Workers KV, as documented in [Workers KV limits](/kv/platform/limits/).
+Workers are limited to 1,000 operations to external services per invocation. This applies to Workers KV, as documented in [Workers KV limits](/kv/platform/limits/).
 
-To read more than 1000 keys per operation, you can use the bulk read operations to read multiple keys in a single operation. These count as a single operation against the 1000 operation limit.
+To read more than 1,000 keys per operation, you can use the bulk read operations to read multiple keys in a single operation. These count as a single operation against the 1,000 operation limit.
 
 ### Reducing cardinality by coalescing keys
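For reviewers, the "super" KV entry pattern that PATCH 1/4 introduces can be sketched in a few lines. This is a hedged illustration, not code from the docs: a plain `Map` stands in for a Workers KV namespace binding (a real Worker would call something like `env.MY_KV.get("coalesced", { cacheTtl: 300 })`), and the helper names (`getCoalesced`, `COALESCED_KEY`) are invented for the example.

```javascript
// Sketch of the "super" KV entry coalescing pattern from PATCH 1/4.
// A Map simulates the KV namespace so the sketch runs anywhere.
const COALESCED_KEY = "coalesced";
const kv = new Map();

// Instead of writing key1/key2/key3 as three separate KV entries,
// write one JSON object containing all of them.
kv.set(
  COALESCED_KEY,
  JSON.stringify({ key1: "value1", key2: "value2", key3: "value3" })
);

// Every read fetches the whole object (in real Workers KV, a single
// cacheable read), so cold keys like key3 stay warm thanks to reads
// of hot keys like key1.
function getCoalesced(store, key) {
  const blob = store.get(COALESCED_KEY);
  return blob === undefined ? undefined : JSON.parse(blob)[key];
}

console.log(getCoalesced(kv, "key1")); // hot key -> "value1"
console.log(getCoalesced(kv, "key3")); // cold key, same single fetch -> "value3"
```

The disadvantage called out in the patch applies here too: updating one field means rewriting the whole coalesced object, so concurrent writers need the locking approach the patch links to.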