Commit 1b2d948

Authored by github-actions[bot], emmettbutler, and gnufede
fix(kafka): catch typeerror during tombstone tagging [backport 2.5] (#8508)
Backport cf942b4 from #8506 to 2.5. This pull request resolves #8016 by catching an error that can occur during message consumption due to confluentinc/confluent-kafka-python#1192.

## Checklist

- [x] Change(s) are motivated and described in the PR description
- [x] Testing strategy is described if automated tests are not included in the PR
- [x] Risks are described (performance impact, potential for breakage, maintainability)
- [x] Change is maintainable (easy to change, telemetry, documentation)
- [x] [Library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html) are followed or label `changelog/no-changelog` is set
- [x] Documentation is included (in-code, generated user docs, [public corp docs](https://github.com/DataDog/documentation/))
- [x] Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))
- [x] If this PR changes the public interface, I've notified `@DataDog/apm-tees`.
- [x] If change touches code that signs or publishes builds or packages, or handles credentials of any kind, I've requested a review from `@DataDog/security-design-and-guidance`.
## Reviewer Checklist

- [x] Title is accurate
- [x] All changes are related to the pull request's stated goal
- [x] Description motivates each change
- [x] Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
- [x] Testing strategy adequately addresses listed risks
- [x] Change is maintainable (easy to change, telemetry, documentation)
- [x] Release note makes sense to a user of the library
- [x] Author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
- [x] Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

Co-authored-by: Emmett Butler <[email protected]>
Co-authored-by: Federico Mon <[email protected]>
1 parent 20c672f commit 1b2d948

File tree

3 files changed: +12 −2 lines changed

ddtrace/contrib/kafka/patch.py

Lines changed: 6 additions & 1 deletion
```diff
@@ -244,7 +244,12 @@ def traced_poll(func, instance, args, kwargs):
         ):
             span.set_tag_str(kafkax.MESSAGE_KEY, message_key)
         span.set_tag(kafkax.PARTITION, message.partition())
-        span.set_tag_str(kafkax.TOMBSTONE, str(len(message) == 0))
+        is_tombstone = False
+        try:
+            is_tombstone = len(message) == 0
+        except TypeError:  # https://github.com/confluentinc/confluent-kafka-python/issues/1192
+            pass
+        span.set_tag_str(kafkax.TOMBSTONE, str(is_tombstone))
         span.set_tag(kafkax.MESSAGE_OFFSET, message_offset)
         span.set_tag(SPAN_MEASURED_KEY)
         rate = config.kafka.get_analytics_sample_rate()
```
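The pattern applied in the hunk above can be sketched in isolation. This is a minimal, standalone illustration (not ddtrace code): `len()` raises `TypeError` for objects that lack `__len__`, such as the `None` a deserializer may return, so the tombstone check defaults to `False` when the length cannot be taken.

```python
def is_tombstone(message):
    """Return True only when the message reports a length of zero."""
    result = False
    try:
        result = len(message) == 0
    except TypeError:
        # Object has no __len__ (e.g. None returned by a deserializer);
        # treat it as a regular, non-tombstone message instead of crashing.
        pass
    return result


print(is_tombstone(b""))    # True: empty payload is a tombstone
print(is_tombstone(b"x"))   # False: non-empty payload
print(is_tombstone(None))   # False: no __len__, handled gracefully
```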
Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+---
+fixes:
+  - |
+    kafka: This fix resolves an issue where the use of a Kafka ``DeserializingConsumer`` could result in
+    a crash when the deserializer in use returns a type without a ``__len__`` attribute.
```

tests/contrib/kafka/test_kafka.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -833,7 +833,7 @@ def json_deserializer(as_bytes, ctx):
         try:
             return json.loads(as_bytes)
         except json.decoder.JSONDecodeError:
-            return as_bytes
+            return  # return a type that has no __len__ because such types caused a crash at one point

     conf = {
         "bootstrap.servers": BOOTSTRAP_SERVERS,
```
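A hypothetical sketch (not the ddtrace test itself) of why the test's deserializer was changed: returning the raw bytes on a decode error still supports `len()`, so the bug was never exercised, whereas a bare `return` yields `None`, which has no `__len__` and hits the `TypeError` path the fix guards against.

```python
import json


def old_deserializer(as_bytes):
    try:
        return json.loads(as_bytes)
    except json.decoder.JSONDecodeError:
        return as_bytes  # bytes support len(), so the bug stayed hidden


def new_deserializer(as_bytes):
    try:
        return json.loads(as_bytes)
    except json.decoder.JSONDecodeError:
        return None  # None has no __len__, reproducing the crash


bad_payload = b"not json"
print(len(old_deserializer(bad_payload)))  # len() succeeds on bytes
try:
    len(new_deserializer(bad_payload))
except TypeError as exc:
    print(f"TypeError raised: {exc}")  # what traced_poll now catches
```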
