
Commit 6632396

docs: move caching to getting started

1 parent 2344e01 commit 6632396


2 files changed: +25 -23 lines


docs/utilities/idempotency.md

Lines changed: 22 additions & 21 deletions
@@ -245,6 +245,28 @@ The output serializer supports any JSON serializable data, **Python Dataclasses**
 2. This function does the following <br><br>**1**. Receives the dictionary saved into the persistent storage <br>**2**. Serializes to `OrderOutput` before `@idempotent` returns back to the caller.
 3. This serializer receives both functions so it knows which to call when serializing to and from a dictionary.
 
+### Using in-memory cache
+
+!!! note "In-memory cache is local to each Lambda execution environment."
+
+You can enable caching with the `use_local_cache` parameter in `IdempotencyConfig`. When enabled, you can adjust cache capacity _(default: 256)_ with `local_cache_max_items`.
+
+By default, caching is disabled since we don't know how big your response could be in relation to your configured memory size.
+
+=== "Enabling cache"
+
+    ```python hl_lines="12"
+    --8<-- "examples/idempotency/src/working_with_local_cache.py"
+    ```
+
+    1. You can adjust cache capacity with the [`local_cache_max_items`](#customizing-the-default-behavior) parameter.
+
+=== "Sample event"
+
+    ```json
+    --8<-- "examples/idempotency/src/working_with_local_cache_payload.json"
+    ```
+
 ### Choosing a payload subset for idempotency
 
 ???+ tip "Tip: Dealing with always changing payloads"
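The comment in the added example describes the cache as holding at most 256 items in a Least-Recently-Used manner. That behaviour can be sketched in plain Python; this is an illustration of the LRU idea only, not the Powertools implementation:

```python
from collections import OrderedDict


class LRUCacheSketch:
    """Illustrative bounded cache: keep at most `max_items` entries,
    evicting the least-recently-used one first (256 is the default
    capacity mentioned in the docs, adjustable via `local_cache_max_items`)."""

    def __init__(self, max_items: int = 256) -> None:
        self.max_items = max_items
        self._items: OrderedDict = OrderedDict()

    def get(self, key: str):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key: str, value) -> None:
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.max_items:
            self._items.popitem(last=False)  # evict least recently used
```

A hit refreshes an entry's position, so repeated "retry" invocations with the same payload keep that record warm while colder entries age out.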
@@ -743,27 +765,6 @@ This utility will raise an **`IdempotencyAlreadyInProgressError`** exception if
 
 This is a locking mechanism for correctness. Since we don't know the result from the first invocation yet, we can't safely allow another concurrent execution.
 
-### Using in-memory cache
-
-**By default, in-memory local caching is disabled**, since we don't know how much memory you consume per invocation compared to the maximum configured in your Lambda function.
-
-???+ note "Note: This in-memory cache is local to each Lambda execution environment"
-    This means it will be effective in cases where your function's concurrency is low in comparison to the number of "retry" invocations with the same payload, because the cache might be empty.
-
-You can enable in-memory caching with the **`use_local_cache`** parameter:
-
-=== "Caching idempotent transactions in-memory to prevent multiple calls to storage"
-
-    ```python hl_lines="11"
-    --8<-- "examples/idempotency/src/working_with_local_cache.py"
-    ```
-
-=== "Sample event"
-
-    ```json
-    --8<-- "examples/idempotency/src/working_with_local_cache_payload.json"
-    ```
-
 When enabled, the default is to cache a maximum of 256 records in each Lambda execution environment - You can change it with the **`local_cache_max_items`** parameter.
 
 ### Expiring idempotency records
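The locking paragraph kept in this hunk can be illustrated with a minimal sketch: record an INPROGRESS status with a write that only succeeds when no such record exists yet, and raise if a second caller races the first. The `begin`/`complete` helpers and the in-memory `store` dict are hypothetical stand-ins for the persistence layer, not the utility's actual code:

```python
class IdempotencyAlreadyInProgressError(Exception):
    """Raised when a concurrent invocation holds the lock for this key."""


def begin(store: dict, key: str) -> None:
    """Sketch of the 'lock' step: a conditional write that fails if an
    INPROGRESS record for the same idempotency key already exists."""
    record = store.get(key)
    if record is not None and record["status"] == "INPROGRESS":
        raise IdempotencyAlreadyInProgressError(key)
    store[key] = {"status": "INPROGRESS", "result": None}


def complete(store: dict, key: str, result) -> None:
    """Sketch of the 'unlock' step: persist the result so later retries
    can return it instead of re-running the function."""
    store[key] = {"status": "COMPLETED", "result": result}
```

Because the first invocation's result is unknown while its record is INPROGRESS, the only safe choice is to reject the concurrent caller, which is exactly what the exception above signals.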

examples/idempotency/src/working_with_local_cache.py

Lines changed: 3 additions & 2 deletions
@@ -7,8 +7,9 @@
 
 persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")
 config = IdempotencyConfig(
-    event_key_jmespath="body",
-    use_local_cache=True,
+    event_key_jmespath="powertools_json(body)",
+    # by default, it holds 256 items in a Least-Recently-Used (LRU) manner
+    use_local_cache=True,  # (1)!
 )
 
 