* Refactor KafkaClient to use new Kafka configs
Motivation:
Currently, the new strongly-typed config structs such as ProducerConfig
are just used to create a new KafkaConfig. We want to get rid of
KafkaConfig entirely and only create librdkafka configs right before
rd_kafka_new() is called to avoid having to bother with the memory
management of a rd_kafka_conf_t object.
Modifications:
* deprecate KafkaConfig in favour of ProducerConfig, ConsumerConfig etc.
Result:
* everything should work as before, just with a slightly nicer syntax
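As a rough illustration of the resulting call-site syntax, here is a minimal sketch; the `bootstrapServers` property and the exact `KafkaProducer` initializer shape are assumptions, not necessarily the PR's actual signatures:

```swift
import Logging
import SwiftKafka  // assumption: the package's module name

// Minimal sketch: a strongly typed ProducerConfig is passed directly to the
// producer; the rd_kafka_conf_t is only created internally, right before
// rd_kafka_new(), so no librdkafka memory management leaks into user code.
func makeProducer() throws -> KafkaProducer {
    var config = ProducerConfig()
    config.bootstrapServers = ["localhost:9092"]

    return try KafkaProducer(
        config: config,
        logger: Logger(label: "producer-example")
    )
}
```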
* Temporarily remove socket.connection.setup.timeout.ms
Motivation:
* This option is only available in newer librdkafka versions and is not
supported by the librdkafka package used in our Docker image
Modifications:
* Temporarily commented out the option so that the unit tests pass on Linux
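For illustration, a minimal sketch of what "temporarily commenting out" the option means in practice; the property names and the dictionary-building approach are assumptions, not the package's actual code:

```swift
// Sketch only (not the package's actual type): the Swift-side property stays,
// but it is not forwarded to librdkafka until the Docker image ships a
// librdkafka version that recognizes socket.connection.setup.timeout.ms.
struct ProducerConfigSketch {
    var bootstrapServers: [String] = ["localhost:9092"]
    var socketConnectionSetupTimeoutMs: UInt = 30_000

    /// Key/value pairs that eventually get applied to the rd_kafka_conf_t.
    var dictionary: [String: String] {
        var dict = [String: String]()
        dict["bootstrap.servers"] = bootstrapServers.joined(separator: ",")
        // Disabled until the bundled librdkafka supports this key:
        // dict["socket.connection.setup.timeout.ms"] = String(socketConnectionSetupTimeoutMs)
        return dict
    }
}
```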
* Remove KafkaConfig
Modifications:
* remove KafkaConfig
* remove unused Tests
* update DocC to use ConsumerConfig and ProducerConfig
* Create internal wrapper for rd_kafka_conf funcs
Modifications:
* new struct `RDKafkaConfig` containing helper functions wrapping
`rd_kafka_conf*` functions for easier use inside of `KafkaClient`
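For context, a rough sketch of what such a wrapper can look like; the C calls (`rd_kafka_conf_new`, `rd_kafka_conf_set`) are real librdkafka functions, while the Swift names, module import, and error type are illustrative assumptions:

```swift
import Crdkafka  // assumption: the librdkafka C module is exposed under this name

// Rough sketch of the kind of helpers such a wrapper can provide; the real
// RDKafkaConfig in this PR may differ in naming and error handling.
enum RDKafkaConfigSketch {
    /// Creates a new rd_kafka_conf_t and applies the given key/value pairs.
    static func createFrom(configDictionary: [String: String]) throws -> OpaquePointer {
        let configPointer: OpaquePointer = rd_kafka_conf_new()
        for (key, value) in configDictionary {
            try set(configPointer: configPointer, key: key, value: value)
        }
        return configPointer
    }

    /// Wraps rd_kafka_conf_set and turns its error string into a Swift error.
    static func set(configPointer: OpaquePointer, key: String, value: String) throws {
        let errorStringSize = 512
        let errorChars = UnsafeMutablePointer<CChar>.allocate(capacity: errorStringSize)
        defer { errorChars.deallocate() }

        let result = rd_kafka_conf_set(configPointer, key, value, errorChars, errorStringSize)
        guard result == RD_KAFKA_CONF_OK else {
            throw ConfigSketchError(message: String(cString: errorChars))
        }
    }
}

/// Placeholder error type for this sketch only.
struct ConfigSketchError: Error {
    let message: String
}
```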
* Replace KafkaTopicConfig with new TopicConfig
Motivation:
* similar to the producer/consumer configs, the librdkafka
`rd_kafka_topic_conf_t` config shall only be created when it is needed by
`rd_kafka_topic_new`; in all other cases a lightweight Swift struct
`TopicConfig` shall be used
Modifications:
* delete `KafkaTopicConfig` and `KafkaTopicConfigTests`
* create a new wrapper `RDKafkaTopicConfig` that wraps the common
`rd_kafka_topic_conf_*` functions
* refactor `KafkaProducer` into using the new `TopicConfig` type
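To illustrate the deferred-creation idea, here is a hedged sketch; only the `rd_kafka_topic_*` C calls are real librdkafka functions, while the surrounding Swift names and error type are assumptions:

```swift
import Crdkafka  // assumption: the librdkafka C module is exposed under this name

// Sketch: the C topic config is only built at the moment rd_kafka_topic_new
// needs it, from a plain Swift dictionary kept in the lightweight TopicConfig.
func createTopicHandle(
    kafkaHandle: OpaquePointer,
    topic: String,
    configDictionary: [String: String]
) throws -> OpaquePointer {
    let topicConfig: OpaquePointer = rd_kafka_topic_conf_new()
    let errorStringSize = 512
    let errorChars = UnsafeMutablePointer<CChar>.allocate(capacity: errorStringSize)
    defer { errorChars.deallocate() }

    for (key, value) in configDictionary {
        let result = rd_kafka_topic_conf_set(topicConfig, key, value, errorChars, errorStringSize)
        guard result == RD_KAFKA_CONF_OK else {
            rd_kafka_topic_conf_destroy(topicConfig)
            throw TopicSketchError(message: String(cString: errorChars))
        }
    }

    // rd_kafka_topic_new consumes the topic config, so the caller must not
    // destroy it after this call.
    guard let handle = rd_kafka_topic_new(kafkaHandle, topic, topicConfig) else {
        throw TopicSketchError(message: "rd_kafka_topic_new failed")
    }
    return handle
}

/// Placeholder error type for this sketch only.
struct TopicSketchError: Error {
    let message: String
}
```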
* Remove TODOs
* Rearrange folders + rename config structs
Motivation:
* create a clearer file structure
* add `Kafka*` prefix to configs to have a clear Kafka namespace and
avoid name collisions when used in other projects
Modifications:
* created folders for Configuration, RDKafka and Utilities
* rename ConsumerConfig -> KafkaConsumerConfig
* rename ProducerConfig -> KafkaProducerConfig
* rename TopicConfig -> KafkaTopicConfig
* rename ConfigEnums -> KafkaConfigEnums
* RDKafkaConfig.setDeliveryCallback -> .setDeliveryReportCallback
* Update README to use new config types
* Remove unused DocC comment
* Re-enable `socketConnectionSetupTimeoutMs` configuration option in `Kafka(Consumer|Producer)Config`
* Address Franz's review comments
Modifications:
* rename `KafkaConfigEnums` to `KafkaSharedConfiguration`
* remove `protocol StringDictionaryRepresentable`
* KafkaProducer: ensure deliveryCallback is retained
Motivation:
With the current code architecture, `KafkaClient` has a var `opaque`
that references a `CapturedClosure` which in turn captures the delivery
callback that gets invoked by Kafka upon successful message delivery.
This callback has to be retained in memory while our producer is
running.
Modifications:
* make `KafkaClient` a `class` in which the captured closure is a `let`
and not a `var`
* create a factory method called `RDKafka.createClient` that creates
a new `KafkaClient` for a given set of parameters
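The retention pattern can be sketched as follows (simplified stand-ins for the PR's types, not the actual implementation):

```swift
// Self-contained sketch of the retention fix: wrapping the delivery report
// callback in a class instance that the client stores as an immutable `let`
// guarantees the closure stays alive for the client's whole lifetime.
final class CapturedClosureSketch {
    let closure: () -> Void
    init(_ closure: @escaping () -> Void) {
        self.closure = closure
    }
}

final class KafkaClientSketch {
    /// Retained for the lifetime of the client; in the real code, a pointer to
    /// this object is what librdkafka receives as its "opaque" value.
    private let opaque: CapturedClosureSketch?

    init(deliveryReportCallback: (() -> Void)?) {
        self.opaque = deliveryReportCallback.map { CapturedClosureSketch($0) }
    }
}

enum RDKafkaSketch {
    /// Factory in the spirit of RDKafka.createClient: one place that assembles
    /// the parameters, installs the callback, and returns the client.
    static func createClient(deliveryReportCallback: (() -> Void)? = nil) -> KafkaClientSketch {
        KafkaClientSketch(deliveryReportCallback: deliveryReportCallback)
    }
}
```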
README.md (11 additions, 11 deletions)
@@ -9,8 +9,7 @@ SwiftKafka is a Swift Package in development that provides a convenient way to c
The `sendAsync(_:)` method of `KafkaProducer` returns a message-id that can later be used to identify the corresponding acknowledgement. Acknowledgements are received through the `acknowledgements` [`AsyncSequence`](https://developer.apple.com/documentation/swift/asyncsequence). Each acknowledgement either indicates that the message was produced successfully or carries an error.
After initializing the `KafkaConsumer` with a topic-partition pair to read from, messages can be consumed using the `messages` [`AsyncSequence`](https://developer.apple.com/documentation/swift/asyncsequence).
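A hedged usage sketch of this API: only `sendAsync(_:)` and the `acknowledgements` sequence come from the README text above; the message type, the Result-shaped acknowledgement, and the throwing behaviour are assumptions:

```swift
import SwiftKafka  // assumption: the package's module name

// Sketch: send a message, then observe its acknowledgement (or error) through
// the acknowledgements AsyncSequence.
func produceAndAwaitAcknowledgements(with producer: KafkaProducer) async throws {
    let messageID = try producer.sendAsync(
        KafkaProducerMessage(topic: "topic-name", value: "Hello, World!")  // hypothetical message type
    )

    for await acknowledgementResult in producer.acknowledgements {
        switch acknowledgementResult {
        case .success(let acknowledgedMessage):
            print("Message \(messageID) acknowledged: \(acknowledgedMessage)")
        case .failure(let error):
            print("Acknowledgement error: \(error)")
        }
    }
}
```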
@@ -87,13 +86,14 @@ for await messageResult in consumer.messages {
By default, the `KafkaConsumer` automatically commits message offsets after receiving the corresponding message. However, we allow users to disable this setting and commit message offsets manually.
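A hedged sketch of the manual-commit flow: the `messages` sequence and the `messageResult` shape come from the surrounding excerpt, while the commit call and the auto-commit switch are hypothetical stand-ins:

```swift
import SwiftKafka  // assumption: the package's module name

// Sketch: iterate the consumer's messages and commit offsets explicitly,
// assuming automatic offset committing was disabled in the configuration.
func consumeWithManualCommits(from consumer: KafkaConsumer) async throws {
    for await messageResult in consumer.messages {
        switch messageResult {
        case .success(let message):
            // Process the message, then commit its offset explicitly.
            try await consumer.commitSync(message)  // hypothetical commit API
        case .failure(let error):
            print("Consumer error: \(error)")
        }
    }
}
```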
@@ -54,7 +54,7 @@ public struct ConsumerConfig: Hashable, Equatable, StringDictionaryRepresentable
}
/// Action to take when there is no initial offset in offset store or the desired offset is out of range. See ``ConfigEnums/AutoOffsetReset`` for more information.