The batch processing utility handles partial failures when processing batches from Amazon SQS, Amazon Kinesis Data Streams, and Amazon DynamoDB Streams.
When using SQS, Kinesis Data Streams, or DynamoDB Streams as a Lambda event source, your functions are triggered with a batch of messages.
If your function fails to process any message in the batch, the entire batch returns to your queue or stream. The same batch is then retried until one of the following happens first: **a)** your Lambda function returns a successful response, **b)** the records reach the maximum number of retry attempts, or **c)** the records expire.
This behavior changes when you enable the [ReportBatchItemFailures feature](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-batchfailurereporting) in your Lambda function's event source configuration:
* [**SQS queues**](#sqs-standard). Only messages reported as failed will return to the queue for a retry, while successful ones will be deleted.
* [**Kinesis data streams**](#kinesis-and-dynamodb-streams) and [**DynamoDB streams**](#kinesis-and-dynamodb-streams). A single reported failure will use its sequence number as the stream checkpoint; multiple reported failures will use the lowest sequence number as the checkpoint (see the response sketch below).
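When ReportBatchItemFailures is enabled, your function reports failed items in its response instead of throwing. An illustrative sketch of the response shape for a Kinesis batch (the sequence numbers are placeholders; the lowest one becomes the checkpoint):

```json
{
  "batchItemFailures": [
    { "itemIdentifier": "49590338271490256608559692538361571095921575989136588898" },
    { "itemIdentifier": "49590338271490256608559692540925702759324208523137515618" }
  ]
}
```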
<!-- HTML tags are required in admonition content thus increasing line length beyond our limits -->
<!-- markdownlint-disable MD013 -->
By default, we catch any exception raised by your record handler function. This allows us to collect each failed batch item identifier and report it in the response without failing your Lambda function execution.
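The annotations below refer to the numbered markers in a processing example along these lines (a minimal sketch using the utility's public API, with SQS as the event source):

```typescript
import {
  BatchProcessor,
  EventType,
  processPartialResponse,
} from '@aws-lambda-powertools/batch';
import type { SQSHandler, SQSRecord } from 'aws-lambda';

const processor = new BatchProcessor(EventType.SQS);

const recordHandler = (record: SQSRecord): void => {
  const payload = record.body;
  if (!payload) {
    throw new Error('No payload found in record'); // (1)!
  }
};

export const handler: SQSHandler = async (event, context) =>
  processPartialResponse(event, recordHandler, processor, { context }); // (2)!
```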
1. Any exception works here. See the [extending `BatchProcessor` section](#extending-batchprocessor) if you want to override this behavior.
2. Errors raised in `recordHandler` will propagate to `processPartialResponse`. <br/><br/> We catch them and include each failed batch item identifier in the response object (see `Sample response` tab).
=== "Sample response"
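    An illustrative response (a sketch; the identifier is a placeholder message ID):

    ```json
    {
      "batchItemFailures": [
        {
          "itemIdentifier": "244fc6b4-87a3-44ab-83d2-361172410c3a"
        }
      ]
    }
    ```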
<center>
<i>Kinesis and DynamoDB streams mechanism with multiple batch item failures</i>
</center>
## Advanced
### Parser integration
The Batch Processing utility integrates with the [Parser utility](./parser.md) to automatically validate and parse each batch record before processing. This ensures your record handler receives properly typed and validated data, eliminating the need for manual parsing and validation.
To enable parser integration, import the `parser` function from `@aws-lambda-powertools/batch/parser` and pass it along with a schema when instantiating the `BatchProcessor`. This requires you to also [install the Parser utility](./parser.md#getting-started).
You can provide the schema in one of two ways:

1. **Item schema only** (`innerSchema`) - Focus on your payload schema, and we handle extending the base event structure
2. **Full event schema** (`schema`) - Validate the entire event record structure with complete control
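Putting the first approach together for SQS, a minimal sketch where `mySchema` is a hypothetical payload schema and the option names follow the list above:

```typescript
import { parser } from '@aws-lambda-powertools/batch/parser';
import {
  BatchProcessor,
  EventType,
  processPartialResponse,
} from '@aws-lambda-powertools/batch';
import type { SQSHandler } from 'aws-lambda';
import { z } from 'zod';

// Hypothetical payload schema for illustration
const mySchema = z.object({
  id: z.string(),
  name: z.string(),
});

// Records that fail validation are marked as failed and never reach the handler
const processor = new BatchProcessor(EventType.SQS, {
  parser,
  innerSchema: mySchema,
});

export const handler: SQSHandler = async (event, context) =>
  processPartialResponse(
    event,
    (record) => {
      console.log(record); // record has been parsed and validated at this point
    },
    processor,
    { context }
  );
```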
#### Benefits of parser integration
Parser integration eliminates runtime errors from malformed data and provides compile-time type safety, making your code more reliable and easier to maintain. Invalid records are automatically marked as failed and won't reach your handler, reducing defensive coding.
For example, with the `mySchema` sketch above, your record handler body is fully typed (`ParsedSqsRecord` is a hypothetical alias for the record type your schema produces):

```typescript
const recordHandler = (record: ParsedSqsRecord): void => {
  console.log(record.body.name); // Full type safety and autocomplete
};
```
#### Using item schema only
When you want to focus on validating your payload without dealing with the full event structure, use `innerSchema`. We automatically extend the base event schema for you, reducing boilerplate code while still validating the entire record.
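For instance, when your SQS message body arrives as a JSON string, you can pair `innerSchema` with a transformer; a sketch where the `transformer` value is an assumption to verify against the table below:

```typescript
import { parser } from '@aws-lambda-powertools/batch/parser';
import { BatchProcessor, EventType } from '@aws-lambda-powertools/batch';
import { z } from 'zod';

// Hypothetical payload schema for illustration
const orderSchema = z.object({ orderId: z.string(), amount: z.number() });

// The transformer (assumed value) decodes the JSON-string body before it is
// validated against `orderSchema`; the base SQS record schema is extended for us
const processor = new BatchProcessor(EventType.SQS, {
  parser,
  innerSchema: orderSchema,
  transformer: 'json',
});
```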
Available transformers by event type:
| Event Type | Base Schema | Available Transformers | When to use transformer |
| --- | --- | --- | --- |
#### Using full event schema

For complete control over validation, extend the built-in schemas with your custom payload schema. This approach gives you full control over the entire event structure.
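A sketch of this approach, assuming the Parser utility's `SqsRecordSchema` and `JSONStringified` helper (swap in the base schema matching your event source):

```typescript
import { parser } from '@aws-lambda-powertools/batch/parser';
import { BatchProcessor, EventType } from '@aws-lambda-powertools/batch';
import { SqsRecordSchema } from '@aws-lambda-powertools/parser/schemas/sqs';
import { JSONStringified } from '@aws-lambda-powertools/parser/helpers';
import { z } from 'zod';

// Hypothetical payload schema for illustration
const mySchema = z.object({
  id: z.string(),
  name: z.string(),
});

// Extend the built-in SQS record schema: the `body` field arrives as a JSON
// string, so we wrap the payload schema with the JSONStringified helper
const extendedSchema = SqsRecordSchema.extend({
  body: JSONStringified(mySchema),
});

const processor = new BatchProcessor(EventType.SQS, {
  parser,
  schema: extendedSchema,
});
```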