
Commit ad87e73

Merge pull request #467 from raytung/docs/fix-up-docs
docs(out_kafka2): added examples and added `topic` parameter to README

2 parents: 009be4e + e410943

File tree

5 files changed: +104 -2 lines changed


README.md

Lines changed: 15 additions & 2 deletions
```diff
@@ -187,6 +187,10 @@ If `ruby-kafka` doesn't fit your kafka environment, check `rdkafka2` plugin inst
   @type kafka2

   brokers <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,.. # Set brokers directly
+
+  # Kafka topic, placeholders are supported. Chunk keys are required in the Buffer section in order for placeholders
+  # to work.
+  topic (string) :default => nil
   topic_key (string) :default => 'topic'
   partition_key (string) :default => 'partition'
   partition_key_key (string) :default => 'partition_key'
```
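The chunk-key requirement above can be illustrated with a small sketch (Python for illustration only; `resolve_topic` is a hypothetical helper, not the plugin's implementation): `${tag}` expands to the chunk's tag, and `${tag[N]}` selects one dot-separated part of it.

```python
import re

def resolve_topic(template: str, tag: str) -> str:
    """Illustrative sketch of Fluentd-style topic placeholder expansion.

    Hypothetical helper: Fluentd itself resolves placeholders from buffer
    chunk keys, which is why `tag` must be listed as a chunk key.
    """
    parts = tag.split(".")
    # ${tag[N]} selects the N-th dot-separated component of the tag
    out = re.sub(r"\$\{tag\[(\d+)\]\}", lambda m: parts[int(m.group(1))], template)
    # ${tag} expands to the whole tag
    return out.replace("${tag}", tag)

print(resolve_topic("events.${tag}", "sample.hello.world"))        # events.sample.hello.world
print(resolve_topic("${tag[1]}.${tag[2]}", "sample.hello.world"))  # hello.world
```

The two calls mirror the `topic "events.${tag}"` and `topic "${tag[1]}.${tag[2]}"` examples added later in this commit.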
```diff
@@ -243,13 +247,12 @@ ruby-kafka's log is routed to fluentd log so you can see ruby-kafka's log in flu

 Supports following ruby-kafka's producer options.

-- max_send_retries - default: 1 - Number of times to retry sending of messages to a leader.
+- max_send_retries - default: 2 - Number of times to retry sending of messages to a leader.
 - required_acks - default: -1 - The number of acks required per request. If you need flush performance, set lower value, e.g. 1, 2.
 - ack_timeout - default: nil - How long the producer waits for acks. The unit is seconds.
 - compression_codec - default: nil - The codec the producer uses to compress messages.
 - max_send_limit_bytes - default: nil - Max byte size to send message to avoid MessageSizeTooLarge. For example, if you set 1000000 (message.max.bytes in kafka), messages of more than 1000000 bytes will be dropped.
 - discard_kafka_delivery_failed - default: false - discard the record where [Kafka::DeliveryFailed](http://www.rubydoc.info/gems/ruby-kafka/Kafka/DeliveryFailed) occurred
-- monitoring_list - default: [] - library to be used to monitor. statsd and datadog are supported

 If you want to know about detail of monitoring, see also https://github.com/zendesk/ruby-kafka#monitoring
```
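The `max_send_limit_bytes` option above can be sketched as a simple size filter (Python illustration; `filter_oversized` is a hypothetical helper, not the plugin's code):

```python
def filter_oversized(messages, max_send_limit_bytes=None):
    """Sketch of max_send_limit_bytes: drop messages larger than the limit.

    Mirrors the documented behavior (avoiding Kafka's MessageSizeTooLarge);
    hypothetical helper, not the plugin's actual implementation.
    """
    if max_send_limit_bytes is None:
        return list(messages)  # nil default: no size filtering
    return [m for m in messages if len(m.encode("utf-8")) <= max_send_limit_bytes]

msgs = ["short", "x" * 2_000_000]
print(filter_oversized(msgs, max_send_limit_bytes=1_000_000))  # ['short']
```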
```diff
@@ -420,6 +423,16 @@ Support of fluentd v0.12 has ended. `kafka_buffered` will be an alias of `kafka2
   monitoring_list (array) :default => []
 </match>

+`kafka_buffered` supports the following `ruby-kafka` parameters:
+
+- max_send_retries - default: 2 - Number of times to retry sending of messages to a leader.
+- required_acks - default: -1 - The number of acks required per request. If you need flush performance, set lower value, e.g. 1, 2.
+- ack_timeout - default: nil - How long the producer waits for acks. The unit is seconds.
+- compression_codec - default: nil - The codec the producer uses to compress messages.
+- max_send_limit_bytes - default: nil - Max byte size to send message to avoid MessageSizeTooLarge. For example, if you set 1000000 (message.max.bytes in kafka), messages of more than 1000000 bytes will be dropped.
+- discard_kafka_delivery_failed - default: false - discard the record where [Kafka::DeliveryFailed](http://www.rubydoc.info/gems/ruby-kafka/Kafka/DeliveryFailed) occurred
+- monitoring_list - default: [] - library to be used to monitor. statsd and datadog are supported
+
 `kafka_buffered` has two additional parameters:

 - kafka_agg_max_bytes - default: 4096 - Maximum value of total message size to be included in one batch transmission.
```

examples/README.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -0,0 +1,3 @@
+# Examples
+
+This directory contains example Fluentd config for this plugin
```
Lines changed: 32 additions & 0 deletions
```diff
@@ -0,0 +1,32 @@
+<source>
+  @type sample
+  sample {"hello": "world"}
+  rate 7000
+  tag sample.hello.world
+</source>
+
+<match sample.**>
+  @type kafka2
+
+  brokers "broker:29092"
+
+  # Writes to topic `events.sample.hello.world`
+  topic "events.${tag}"
+
+  # Writes to topic `hello.world`
+  # topic "${tag[1]}.${tag[2]}"
+
+  <format>
+    @type json
+  </format>
+
+  <buffer tag>
+    flush_at_shutdown true
+    flush_mode interval
+    flush_interval 1s
+    chunk_limit_size 3MB
+    chunk_full_threshold 1
+    total_limit_size 1024MB
+    overflow_action block
+  </buffer>
+</match>
```
Lines changed: 23 additions & 0 deletions
```diff
@@ -0,0 +1,23 @@
+<source>
+  @type sample
+  sample {"hello": "world", "some_record":{"event":"message"}}
+  rate 7000
+  tag sample.hello.world
+</source>
+
+<match sample.**>
+  @type kafka2
+
+  brokers "broker:29092"
+
+  record_key "some_record"
+  default_topic "events"
+
+  <format>
+    # requires the fluent-plugin-formatter-protobuf gem
+    # see its docs for full usage
+    @type protobuf
+    class_name SomeRecord
+    include_paths ["/opt/fluent-plugin-formatter-protobuf/some_record_pb.rb"]
+  </format>
+</match>
```
Lines changed: 31 additions & 0 deletions
```diff
@@ -0,0 +1,31 @@
+<source>
+  @type sample
+  sample {"hello": "world", "some_record":{"event":"message"}}
+  rate 7000
+  tag sample.hello.world
+</source>
+
+<match sample.**>
+  @type kafka2
+
+  brokers "broker:29092"
+
+  # {"event": "message"} will be formatted and sent to Kafka
+  record_key "some_record"
+
+  default_topic "events"
+
+  <format>
+    @type json
+  </format>
+
+  <buffer>
+    flush_at_shutdown true
+    flush_mode interval
+    flush_interval 1s
+    chunk_limit_size 3MB
+    chunk_full_threshold 1
+    total_limit_size 1024MB
+    overflow_action block
+  </buffer>
+</match>
```
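The `record_key` behavior shown in the example above can be sketched in Python (illustration only; `payload_for_kafka` is a hypothetical helper, not the plugin's implementation): when `record_key` is set, only the nested value under that key is formatted and sent.

```python
import json

def payload_for_kafka(record, record_key=None):
    """Sketch of `record_key`: when set, format and send only the nested
    record under that key; otherwise send the whole record.
    Hypothetical helper mirroring the documented behavior."""
    target = record[record_key] if record_key else record
    return json.dumps(target)

event = {"hello": "world", "some_record": {"event": "message"}}
print(payload_for_kafka(event, record_key="some_record"))  # {"event": "message"}
```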
