Resolves #1994
Add an option to strip previous exception headers when republishing a
dead letter record.
* Fix typos.
* Fix constant.
* Fix typo in doc.
spring-kafka-docs/src/main/asciidoc/kafka.adoc (18 additions, 0 deletions)
@@ -5607,6 +5607,24 @@ Starting with version 2.7, the recoverer checks that the partition selected by t
If the partition is not present, the partition in the `ProducerRecord` is set to `null`, allowing the `KafkaProducer` to select the partition.
You can disable this check by setting the `verifyPartition` property to `false`.
[[dlpr-headers]]
===== Managing Dead Letter Record Headers
Referring to <<dead-letters>> above, the `DeadLetterPublishingRecoverer` has two properties used to manage headers when those headers already exist (such as when reprocessing a dead letter record that failed, including when using <<retry-topic>>).
* `appendOriginalHeaders` (default `true`)
* `stripPreviousExceptionHeaders` (default `false` - will be `true` in version 2.8 and later)
Apache Kafka supports multiple headers with the same name; to obtain the "latest" value, you can use `headers.lastHeader(headerName)`; to get an iterator over multiple headers, use `headers.headers(headerName).iterator()`.
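Kafka's `Headers` API reflects this multi-value behavior directly. A minimal sketch using the `kafka-clients` library (the `ex-msg` header name here is purely illustrative, not one of the framework's actual header names):

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class LastHeaderExample {

    public static void main(String[] args) {
        RecordHeaders headers = new RecordHeaders();
        // Kafka allows multiple headers with the same name; both are retained.
        headers.add("ex-msg", "first failure".getBytes(StandardCharsets.UTF_8));
        headers.add("ex-msg", "second failure".getBytes(StandardCharsets.UTF_8));

        // The "latest" value for a given name.
        Header last = headers.lastHeader("ex-msg");
        System.out.println(new String(last.value(), StandardCharsets.UTF_8)); // second failure

        // All values for the name, in insertion order.
        for (Header header : headers.headers("ex-msg")) {
            System.out.println(new String(header.value(), StandardCharsets.UTF_8));
        }
    }

}
```

`headers(name)` iterates in insertion order, so the final element it yields is the one `lastHeader(name)` returns.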
When repeatedly republishing a failed record, these headers can grow (and eventually cause publication to fail due to a `RecordTooLargeException`); this is especially true for the exception headers and particularly for the stack trace headers.
The reason for the two properties is because, while you might want to retain only the last exception information, you might want to retain the history of which topic(s) the record passed through for each failure.
`appendOriginalHeaders` is applied to all headers named `*ORIGINAL*` while `stripPreviousExceptionHeaders` is applied to all headers named `*EXCEPTION*`.
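As a sketch of how these two properties might be set on the recoverer (this assumes a `KafkaTemplate` named `template` is already available; the setter names mirror the property names above):

```java
// Sketch only: "template" is an assumed, pre-existing KafkaTemplate bean.
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
recoverer.setAppendOriginalHeaders(true);         // keep the full history of *ORIGINAL* headers
recoverer.setStripPreviousExceptionHeaders(true); // retain only the latest *EXCEPTION* headers
```

This combination preserves the record's topic history while preventing the exception and stack trace headers from growing on each republish.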
spring-kafka-docs/src/main/asciidoc/retrytopic.adoc (27 additions, 0 deletions)
@@ -357,6 +357,33 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<Integer, MyPojo>
NOTE: By default the topics are autocreated with one partition and a replication factor of one.
[[retry-headers]]
===== Failure Header Management
When considering how to manage failure headers (original headers and exception headers), the framework delegates to the `DeadLetterPublishingRecoverer` to decide whether to append or replace the headers.
By default, it explicitly sets `appendOriginalHeaders` to `false` and leaves `stripPreviousExceptionHeaders` at the default used by the `DeadLetterPublishingRecoverer`.
This means that, currently, records published to multiple retry topics may grow to a large size, especially when the stack trace is large.
See <<dlpr-headers>> for more information.
To reconfigure the framework to use different settings for these properties, replace the standard `DeadLetterPublishingRecovererFactory` bean by adding a `recovererCustomizer`:
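The code example that follows this sentence is not present in this capture. As a hedged sketch of what such a customization might look like (the `DestinationTopicResolver` constructor argument and the `setDeadLetterPublishingRecovererCustomizer` setter are assumptions about the factory's API, not confirmed by this excerpt):

```java
// Sketch only: bean wiring and API names here are assumptions.
@Bean
public DeadLetterPublishingRecovererFactory recovererFactory(DestinationTopicResolver resolver) {
    DeadLetterPublishingRecovererFactory factory = new DeadLetterPublishingRecovererFactory(resolver);
    // Customize every recoverer the framework creates for the retry/DLT topics.
    factory.setDeadLetterPublishingRecovererCustomizer(recoverer -> {
        recoverer.setAppendOriginalHeaders(true);
        recoverer.setStripPreviousExceptionHeaders(false);
    });
    return factory;
}
```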