<<enabling-tls,TLS is enabled>> by using the `rabbitmq-stream+tls` scheme in the URI.

When using one URI, the corresponding node will be the main entry point to connect to.
The `Environment` will then use the stream protocol to find out more about the topology of streams (leaders and replicas) when asked to create `Producer` and `Consumer` instances.

If this node fails, the `Environment` will lose connectivity.
To improve resilience, specify several URIs as fallback options:

.Creating an environment with several URIs
[source,java,indent=0]
will pick a new URI randomly in case of disconnection.

[[understanding-connection-logic]]
==== Understanding Connection Logic

Creating the environment to connect to a cluster node usually works seamlessly.
Creating publishers and consumers may encounter connection issues because the client relies on cluster hints to locate stream leaders and replicas.

These connection hints can be accurate or less appropriate depending on the infrastructure.
If you encounter connection problems (such as unresolvable hostnames), this https://www.rabbitmq.com/blog/2021/07/23/connecting-to-streams[blog post] explains the root causes and solutions.

Setting the `advertised_host` and `advertised_port` https://www.rabbitmq.com/blog/2021/07/23/connecting-to-streams#advertised-host-and-port[configuration entries] should solve the most common connection problems.

To make the local development experience simple, the client library can choose to always use `localhost` for producers and consumers.
<2> Create the message with `Producer#messageBuilder()`
<3> Define the behavior on publish confirmation

Messages are not only made of a `byte[]` payload; as shown in <<working-with-complex-messages,the next section>>, they can also carry pre-defined and application properties.

[NOTE]
.Use a `MessageBuilder` instance only once
====
You need to create a new instance of `MessageBuilder` for every message you want to create.
====

The `ConfirmationHandler` defines an asynchronous callback invoked when the broker confirms message receipt.
The `ConfirmationHandler` is the place for any logic on publishing confirmation, including re-publishing the message if it is negatively acknowledged.

[WARNING]
.Keep the confirmation callback as short as possible
a separate thread (e.g. with an asynchronous `ExecutorService`).

The publishing example above showed that messages are made of a byte array payload, but it did not go much further.
Messages in RabbitMQ Stream can actually be more sophisticated, as they comply with the AMQP 1.0 message format.

In a nutshell, a message in RabbitMQ Stream has the following structure:
on 2 client-side elements: the _producer name_ and the _message publishing ID_.

[[deduplication-multithreading]]
[WARNING]
.Deduplication requirements: single publisher instance and single thread
====
We'll see below that deduplication works using a strictly increasing sequence for messages.
This means messages must be published in order, so there must be only _one publisher instance with a given name_ and this instance must publish messages _within a single thread_.

With several publisher instances with the same name, one instance can be "ahead" of the others for the sequence ID: if it publishes a message with sequence ID 100, any message from any instance with a lower sequence ID will be filtered out.

If there is only one publisher instance with a given name, it should publish messages in a single thread.
Even if messages are _created_ in order, with the proper sequence ID, they can get out of order if they are published in several threads, e.g. message 5 can be _published_ before message 2.
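The filtering behavior described above can be modeled in a few lines of plain Java. This is an illustration of the rule only, not broker or client code: for a given producer name, only messages whose publishing ID is strictly greater than the highest ID seen so far are stored.

```java
import java.util.ArrayList;
import java.util.List;

// Model of the deduplication filtering rule (illustration only, not the
// actual broker implementation): the broker tracks the highest publishing
// ID stored for a producer name and drops anything at or below it.
public class DedupSketch {

    static List<Long> store(List<Long> publishingIds) {
        List<Long> stored = new ArrayList<>();
        long highest = Long.MIN_VALUE;
        for (long id : publishingIds) {
            if (id > highest) {
                stored.add(id);
                highest = id;
            }
            // ids <= highest are filtered out as duplicates
        }
        return stored;
    }

    public static void main(String[] args) {
        // Two instances sharing a name: one jumps ahead to ID 100, so the
        // other instance's messages with IDs 3 and 4 are silently dropped.
        System.out.println(store(List.of(1L, 2L, 100L, 3L, 4L, 101L)));
        // -> [1, 2, 100, 101]
    }
}
```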

If you worry about performance, note it is possible to publish hundreds of thousands

[WARNING]
.Deduplication is not guaranteed when using sub-entry batching
====
It is not possible to guarantee deduplication when <<sub-entry-batching-and-compression, sub-entry batching>> is in use.
Sub-entry batching is disabled by default and it does not prevent batching messages in a single publish frame, which can already provide very high throughput.
====
The broker starts sending messages as soon as the `Consumer` instance is created.

[WARNING]
.The message processing callback can take its time, but not too much

made of 2 chunks:
....

Each chunk contains a timestamp of its creation time.
The broker uses this timestamp to find the appropriate chunk to start from when using a timestamp specification.
The broker chooses the closest chunk _before_ the specified timestamp, that is why consumers may see messages published a bit before what they specified.
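As an illustrative sketch of this selection rule (plain Java, not broker code; the method name and the simplification to "created no later than the requested timestamp" are assumptions of this sketch), picking the starting chunk amounts to finding the last chunk whose creation timestamp does not exceed the requested one:

```java
import java.util.List;

// Sketch of chunk selection for timestamp-based offset specifications
// (illustration only, not broker code): start from the closest chunk
// created no later than the requested timestamp, which is why consumers
// may see messages published a bit before what they specified.
public class ChunkSelectionSketch {

    // chunkTimestamps must be sorted in ascending creation order.
    static int startingChunk(List<Long> chunkTimestamps, long requested) {
        int candidate = 0; // fall back to the first chunk
        for (int i = 0; i < chunkTimestamps.size(); i++) {
            if (chunkTimestamps.get(i) <= requested) {
                candidate = i;
            }
        }
        return candidate;
    }

    public static void main(String[] args) {
        List<Long> chunks = List.of(1_000L, 2_000L, 3_000L);
        // Requesting t=2500 starts at the chunk created at t=2000, so some
        // messages published before t=2500 are delivered to the consumer.
        System.out.println(startingChunk(chunks, 2_500L)); // -> 1
    }
}
```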

[[consumer-offset-tracking]]
manual tracking.

[[consumer-manual-offset-tracking]]
===== Manual Offset Tracking

The manual tracking strategy puts the developer in charge of storing offsets whenever they want, not only after a given number of messages has been received and supposedly processed, like automatic tracking does.

make the stream grow unnecessarily, as the broker persists offset tracking entries in the stream itself.

A good rule of thumb is to store the offset every few thousand messages.
Of course, when the consumer restarts consuming in a new incarnation, the last tracked offset may be a little behind the very last message the previous incarnation actually processed, so the consumer may see some messages that have already been processed.

last duplicated messages.
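One possible shape for this rule of thumb, as a plain-Java sketch: the counter, threshold, and method names are illustrative only (the client provides its own tracking support), but the arithmetic shows both the rarity of tracking entries and the size of the replay window after a restart.

```java
// Sketch of the "store the offset every few thousand messages" rule of
// thumb (illustration only; not part of the client API).
public class OffsetStoreCadence {

    final long storeEvery;
    private long count = 0;
    long lastStoredOffset = -1;

    OffsetStoreCadence(long storeEvery) {
        this.storeEvery = storeEvery;
    }

    // Called for each processed message; returns true when the offset was
    // stored, so tracking entries stay rare relative to messages.
    boolean onMessage(long offset) {
        if (++count % storeEvery == 0) {
            lastStoredOffset = offset;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        OffsetStoreCadence cadence = new OffsetStoreCadence(5_000);
        long stores = 0;
        for (long offset = 0; offset < 20_000; offset++) {
            if (cadence.onMessage(offset)) {
                stores++;
            }
        }
        // 20,000 messages produce only 4 offset tracking entries; after a
        // restart, consumption resumes at the last stored offset, so up to
        // storeEvery - 1 already-processed messages may be seen again.
        System.out.println(stores + " " + cadence.lastStoredOffset); // 4 19999
    }
}
```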

'''

_Is the offset a reliable absolute value?_
Message offsets may not be contiguous.
This means the message at offset 500 in a stream may not be the 501st message in the stream (offsets start at 0).
There can be different types of entries in a stream storage; a message is just one of them.
For example, storing an offset creates an offset tracking entry in the stream.