docs/features/batch.md
Lines changed: 71 additions & 47 deletions
@@ -28,7 +28,7 @@ If your function fails to process any message from the batch, the entire batch r
This behavior changes when you enable the [ReportBatchItemFailures feature](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-batchfailurereporting) in your Lambda function event source configuration:
* [**SQS queues**](#sqs-standard). Only messages reported as failure will return to the queue for a retry, while successful ones will be deleted.
* [**Kinesis data streams**](#kinesis-and-dynamodb-streams) and [**DynamoDB streams**](#kinesis-and-dynamodb-streams). Single reported failure will use its sequence number as the stream checkpoint. Multiple reported failures will use the lowest sequence number as checkpoint.
<!-- HTML tags are required in admonition content thus increasing line length beyond our limits -->
<!-- markdownlint-disable MD013 -->
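To make the checkpointing behavior above concrete, here is a minimal sketch of a handler wired up with `processPartialResponse`, which builds the `batchItemFailures` response Lambda expects when `ReportBatchItemFailures` is enabled; the record handler body is a placeholder:

```typescript
import { BatchProcessor, EventType, processPartialResponse } from '@aws-lambda-powertools/batch';
import type { SQSHandler, SQSRecord } from 'aws-lambda';

const processor = new BatchProcessor(EventType.SQS);

// Throwing inside the record handler marks only that record as failed
const recordHandler = async (record: SQSRecord): Promise<void> => {
  JSON.parse(record.body); // e.g. a malformed body fails just this record
};

// Returns { batchItemFailures: [{ itemIdentifier: '...' }] } so that only
// the failed messages return to the queue for a retry
export const handler: SQSHandler = async (event, context) =>
  processPartialResponse(event, recordHandler, processor, { context });
```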
@@ -213,7 +213,7 @@ By default, we catch any exception raised by your record handler function. This
1. Any exception works here. See the [extending `BatchProcessor`](#extending-batchprocessor) section if you want to override this behavior.
- 2. Exceptions raised in `recordHandler` will propagate to `process_partial_response`. <br/><br/> We catch them and include each failed batch item identifier in the response dictionary (see `Sample response` tab).
+ 2. Exceptions raised in `recordHandler` will propagate to `processPartialResponse`. <br/><br/> We catch them and include each failed batch item identifier in the response dictionary (see `Sample response` tab).
=== "Sample response"
@@ -296,81 +296,105 @@ The behavior changes slightly when there are multiple item failures. Stream chec
### Parser integration
- Thanks to the [Parser utility](./parser.md) integration, you can pass a [Standard Schema](https://standardschema.dev){target="_blank"}-compatible schema when instantiating the `BatchProcessor` and we will use it to validate each item in the batch before passing it to your record handler.
+ The Batch Processing utility integrates with the [Parser utility](./parser.md) to automatically validate and parse each batch record before processing. This ensures your record handler receives properly typed and validated data, eliminating the need for manual parsing and validation.
- Since this is an opt-in feature, you will need to import the `parser` function from `@aws-lambda-powertools/batch/parser`; this allows us to keep the parsing logic separate from the main processing logic and avoid increasing the bundle size.
+ To enable parser integration, import the `parser` function from `@aws-lambda-powertools/batch/parser` and pass it along with a schema when instantiating the `BatchProcessor`. You can provide the schema in one of two ways (see the sketch after this list):
1. **Item schema only** (`innerSchema`) - Focus on your payload schema; we handle extending the base event structure
2. **Full event schema** (`schema`) - Validate the entire event record structure with complete control
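A minimal sketch of the first approach, using the names this diff mentions (`parser`, `innerSchema`, `schema`); the payload schema is hypothetical and the exact constructor option shape is assumed, so it may differ from the released API:

```typescript
import { BatchProcessor, EventType } from '@aws-lambda-powertools/batch';
import { parser } from '@aws-lambda-powertools/batch/parser';
import { z } from 'zod';

// Hypothetical payload schema, for illustration only
const mySchema = z.object({ name: z.string(), age: z.number() });

const processor = new BatchProcessor(EventType.SQS, {
  parser, // opt-in: imported from its own subpath to keep bundles lean
  innerSchema: mySchema, // approach 1; pass `schema` instead for approach 2
});
```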
#### Benefits of parser integration
Parser integration eliminates runtime errors from malformed data and provides compile-time type safety, making your code more reliable and easier to maintain. Invalid records are automatically marked as failed and won't reach your handler, reducing defensive coding.
```typescript
type ParsedRecord = { body: { name: string } }; // placeholder for the schema-inferred type
const recordHandler = async (record: ParsedRecord) => {
  console.log(record.body.name); // Full type safety and autocomplete
};
```
#### Using item schema only
- When you only want to customize the schema of the item's payload, you can pass an `innerSchema` object and we will use it to extend the base schema based on the `EventType` passed to the `BatchProcessor`.
+ When you want to focus on validating your payload without dealing with the full event structure, use `innerSchema`. We automatically extend the base event schema for you, reducing boilerplate while still validating the entire record.
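As a sketch of how `innerSchema` pairs with a transformer (the `transformer: 'json'` value is assumed here; the handler's record shape is illustrative, not a type exported by the library):

```typescript
import { BatchProcessor, EventType } from '@aws-lambda-powertools/batch';
import { parser } from '@aws-lambda-powertools/batch/parser';
import { z } from 'zod';

const orderSchema = z.object({ orderId: z.string(), amount: z.number() });

// innerSchema validates just the payload; the assumed 'json' transformer
// tells the parser to JSON.parse the SQS body before validating it
const processor = new BatchProcessor(EventType.SQS, {
  parser,
  innerSchema: orderSchema,
  transformer: 'json',
});

const recordHandler = async (record: { body: z.infer<typeof orderSchema> }) => {
  console.log(record.body.orderId); // typed payload, no manual parsing
};
```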
Available transformers by event type:
- When doing this, you can also specify a `transformer` to tell us how to transform the payload before validation.
+ | Event Type | Base Schema | Available Transformers | When to use transformer |
+ | --- | --- | --- | --- |
- When you want more control over the schema, you can extend a [built-in schema](./parser.md#built-in-schemas) for SQS, Kinesis Data Streams, or DynamoDB Streams with your own custom schema for the payload and we'll parse each item before passing it to your record handler. If the payload does not match the schema, the item will be marked as failed.
+ For complete control over validation, extend the built-in schemas with your custom payload schema. This approach gives you full control over the entire event structure.
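A sketch of the full-schema approach, assuming the Parser utility's built-in `SqsRecordSchema` and its `JSONStringified` helper (both documented in the [Parser utility](./parser.md)) and a hypothetical payload:

```typescript
import { SqsRecordSchema } from '@aws-lambda-powertools/parser/schemas/sqs';
import { JSONStringified } from '@aws-lambda-powertools/parser/helpers';
import { z } from 'zod';

// Extend the built-in SQS record schema: `body` must be a JSON string
// that parses into our payload; anything else marks the record as failed
const extendedSchema = SqsRecordSchema.extend({
  body: JSONStringified(z.object({ name: z.string(), age: z.number() })),
});
```

The extended schema would then be passed via the `schema` option alongside `parser`, per the second approach listed above.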