While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications, it's important to understand the limitations of this library:
- **No Complex Producers/Consumers**: This library does not intend to offer complex producers or consumers. The aim is to stick closely to the functionalities provided by librdkafka itself.
- **Focus on librdkafka Capabilities**: Features that can be achieved directly in Ruby, without specific needs from librdkafka, are outside the scope of this library.
- **Existing High-Level Functionalities**: Certain high-level functionalities, like the producer metadata cache and the simple consumer, are already part of the library. Although they fall slightly outside the primary goal, they will remain part of the contract, given their existing usage.
## Installation
If you have any problems installing the gem, please open an issue.
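The gem is typically installed the standard Bundler way; a minimal sketch (no version constraint shown — pin one as appropriate for your project):

```
# Gemfile
gem "rdkafka"
```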
See the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Two quick examples:
### Consuming Messages
Subscribe to a topic and get messages. Kafka will automatically spread the available partitions over consumers with the same group id.
```
require "rdkafka"

# Minimal sketch: assumes a Kafka broker on localhost:9092.
config = {
  :"bootstrap.servers" => "localhost:9092",
  :"group.id" => "ruby-test"
}
consumer = Rdkafka::Config.new(config).consumer
consumer.subscribe("ruby-test-topic")

consumer.each do |message|
  puts "Message received: #{message}"
end
```
### Producing Messages
Produce a number of messages, put the delivery handles in an array, and wait for them before exiting. This way the messages will be batched and sent to Kafka efficiently.

Note that creating a producer consumes some resources that will not be released until `#close` is explicitly called, so be sure to call `Config#producer` only as necessary.
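The produce-and-wait pattern described above can be sketched as follows; the broker address and topic name are illustrative:

```
require "rdkafka"

# Assumes a Kafka broker reachable at localhost:9092; topic name is illustrative.
config = {:"bootstrap.servers" => "localhost:9092"}
producer = Rdkafka::Config.new(config).producer

delivery_handles = []
10.times do |i|
  delivery_handles << producer.produce(
    topic:   "ruby-test-topic",
    payload: "Payload #{i}",
    key:     "Key #{i}"
  )
end

# Wait for all deliveries to be confirmed, then release the producer's resources.
delivery_handles.each(&:wait)
producer.close
```

Each `#produce` call returns a delivery handle; collecting them and calling `#wait` afterwards lets librdkafka batch the messages instead of blocking on each one.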
## Higher Level Libraries
Currently, two actively developed frameworks based on rdkafka-ruby provide a higher-level API for working with Kafka messages, and one library focuses on publishing messages.
### Message Processing Frameworks
* [Karafka](https://github.com/karafka/karafka) - Ruby and Rails efficient Kafka processing framework.
* [Racecar](https://github.com/zendesk/racecar) - A simple framework for Kafka consumers in Ruby.
### Message Publishing Libraries
* [WaterDrop](https://github.com/karafka/waterdrop) - Standalone Karafka library for producing Kafka messages.
## Development
Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that:
- Implement functionalities that can be achieved using standard Ruby capabilities without changes to the underlying rdkafka-ruby bindings.
- Deviate significantly from the primary aim of providing librdkafka bindings with Ruby-friendly interfaces.
A Docker Compose file is included to run Kafka. To run that:
```
docker-compose up
```
Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and compile `librdkafka`.
You can then run `bundle exec rspec` to run the tests. To see rdkafka debug output:
```
DEBUG_PRODUCER=true bundle exec rspec
DEBUG_CONSUMER=true bundle exec rspec
```
After running the tests, you can bring the cluster down to start with a clean slate:
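A sketch of that teardown step, assuming the same bundled Docker Compose file:

```
docker-compose down
```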