Commit 9f439cc
fix(kafka): handle produce(key=None) [backport 1.20] (#7614)

Backport 68da170 from #7576 to 1.20. This change fixes #7236 by falling back to the empty string when the partition key passed to `produce()` is falsey.

## Checklist

- [x] Change(s) are motivated and described in the PR description.
- [x] Testing strategy is described if automated tests are not included in the PR.
- [x] Risk is outlined (performance impact, potential for breakage, maintainability, etc).
- [x] Change is maintainable (easy to change, telemetry, documentation).
- [x] [Library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html) are followed. If no release note is required, add label `changelog/no-changelog`.
- [x] Documentation is included (in-code, generated user docs, [public corp docs](https://github.com/DataDog/documentation/)).
- [x] Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist

- [x] Title is accurate.
- [x] No unnecessary changes are introduced.
- [x] Description motivates each change.
- [x] Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes unless absolutely necessary.
- [x] Testing strategy adequately addresses listed risk(s).
- [x] Change is maintainable (easy to change, telemetry, documentation).
- [x] Release note makes sense to a user of the library.
- [x] Reviewer has explicitly acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment.
- [x] Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)
- [x] If this PR touches code that signs or publishes builds or packages, or handles credentials of any kind, I've requested a review from `@DataDog/security-design-and-guidance`.
- [x] This PR doesn't touch any of that.

Co-authored-by: Emmett Butler <[email protected]>
1 parent 6253b3f commit 9f439cc

File tree

3 files changed: +15 −1 lines changed

ddtrace/contrib/kafka/patch.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -115,7 +115,7 @@ def traced_produce(func, instance, args, kwargs):
         value = get_argument_value(args, kwargs, 1, "value")
     except ArgumentError:
         value = None
-    message_key = kwargs.get("key", "")
+    message_key = kwargs.get("key", "") or ""
     partition = kwargs.get("partition", -1)
     core.dispatch("kafka.produce.start", [instance, args, kwargs])
```
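The one-line fix hinges on a `dict.get` subtlety: the default value only applies when the key is *absent* from the dict, not when the caller passes `key=None` explicitly. A minimal sketch of the difference (hypothetical helper names, not ddtrace code):

```python
def extract_key_buggy(**kwargs):
    # The default "" is NOT used when the caller passes key=None,
    # because "key" is present in kwargs -- .get returns None.
    return kwargs.get("key", "")


def extract_key_fixed(**kwargs):
    # "or" coerces any falsey key (None, "", 0) to the empty string,
    # mirroring the patched line in traced_produce.
    return kwargs.get("key", "") or ""


print(extract_key_buggy(key=None))  # None -- string-expecting code downstream breaks
print(extract_key_fixed(key=None))  # "" -- safe fallback
print(extract_key_fixed())          # "" -- absent key still uses the default
```

Note the trade-off: `or ""` also maps other falsey keys (such as `0`) to `""`, which the PR description accepts by design ("falling back to the empty string when the partition key ... is falsey").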

Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+---
+fixes:
+  - |
+    kafka: This fix resolves an issue where calls to ``confluent_kafka``'s ``produce`` method with ``key=None``
+    would cause an exception to be raised.
```

tests/contrib/kafka/test_kafka.py

Lines changed: 9 additions & 0 deletions

```diff
@@ -191,6 +191,15 @@ def test_produce_single_server(dummy_tracer, producer, kafka_topic):
     assert produce_span.get_tag("messaging.kafka.bootstrap.servers") == BOOTSTRAP_SERVERS


+def test_produce_none_key(dummy_tracer, producer, kafka_topic):
+    Pin.override(producer, tracer=dummy_tracer)
+    producer.produce(kafka_topic, PAYLOAD, key=None)
+    producer.flush()
+
+    traces = dummy_tracer.pop_traces()
+    assert 1 == len(traces), "key=None does not cause produce() call to raise an exception"
+
+
 def test_produce_multiple_servers(dummy_tracer, kafka_topic):
     producer = confluent_kafka.Producer({"bootstrap.servers": ",".join([BOOTSTRAP_SERVERS] * 3)})
     Pin.override(producer, tracer=dummy_tracer)
```
