@@ -52,8 +52,8 @@ Availability](media/event-hubs-federation-overview/regional-availability.png)

 While maximum availability and reliability are the top operational priorities
 for Event Hubs, there are nevertheless many ways in which a producer or consumer
-might be prevented from talking to its assigned "primary" Event Hub because of
-networking or name resolution issues, or where an Event Hub might indeed be
+might be prevented from talking to its assigned "primary" Event Hubs because of
+networking or name resolution issues, or where an Event Hubs might indeed be
 temporarily unresponsive or returning errors.

 Such conditions aren't "disastrous" such that you'll want to abandon the
@@ -65,16 +65,16 @@ than a few minutes or even seconds.
 There are two foundational patterns to address such scenarios:

 - The [replication][4] pattern is about replicating the contents of a primary
-  Event Hub to a secondary Event Hub, whereby the primary Event Hub is generally
+  Event Hubs to a secondary Event Hubs, whereby the primary Event Hubs is generally
   used by the application for both producing and consuming events and the
-  secondary serves as a fallback option in case the primary Event Hub is
+  secondary serves as a fallback option in case the primary Event Hubs is
   becoming unavailable. Since replication is unidirectional, from the primary to
   the secondary, a switchover of both producers and consumers from an
   unavailable primary to the secondary will cause the old primary to no
   longer receive new events and it will therefore be no longer current.
   Pure replication is therefore only suitable for one-way failover scenarios. Once
   the failover has been performed, the old primary is abandoned and a new
-  secondary Event Hub needs to be created in a different target region.
+  secondary Event Hubs needs to be created in a different target region.
 - The [merge][5] pattern extends the replication pattern by performing a
   continuous merge of the contents of two or more Event Hubs. Each event
   originally produced into one of the Event Hubs included in the scheme is
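The replication pattern touched by the hunk above can be illustrated with a minimal, hand-rolled pump that copies events from a primary event hub to a secondary one. This is only a sketch using the Python `azure-eventhub` SDK: the connection strings and the `telemetry` hub name are placeholders, and the samples referenced by this article implement replication with Azure Functions replication tasks rather than a standalone script.

```python
# A minimal one-way replication pump: copy events from the primary event hub to a
# secondary one, preserving the partition key so related events stay together even
# when the secondary has a different partition count. All names and connection
# strings are placeholders.
from azure.eventhub import EventData, EventHubConsumerClient, EventHubProducerClient

PRIMARY_CONN = "<primary-namespace-connection-string>"
SECONDARY_CONN = "<secondary-namespace-connection-string>"

producer = EventHubProducerClient.from_connection_string(
    SECONDARY_CONN, eventhub_name="telemetry")
consumer = EventHubConsumerClient.from_connection_string(
    PRIMARY_CONN, consumer_group="$Default", eventhub_name="telemetry")


def on_event(partition_context, event):
    # Received partition keys may surface as bytes; normalize before re-publishing.
    pkey = event.partition_key
    if isinstance(pkey, bytes):
        pkey = pkey.decode("utf-8")
    batch = producer.create_batch(partition_key=pkey)
    forwarded = EventData(event.body_as_bytes())
    forwarded.properties = dict(event.properties or {})
    batch.add(forwarded)
    producer.send_batch(batch)


with consumer, producer:
    # Blocks and pumps events; a durable deployment would add a checkpoint store.
    consumer.receive(on_event=on_event, starting_position="-1")
```

Because the copy runs in one direction only, failing over still means pointing producers and consumers at the secondary and later provisioning a new secondary in another region, as described above.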
@@ -91,16 +91,16 @@ may differ. This is especially true for scenarios where the partition count of
 source and target Event Hubs differ, which is desirable for several of the
 extended patterns described here. A [splitter or
 router](#splitting-and-routing-of-event-streams) may obtain a slice of a much
-larger Event Hub with hundreds of partitions and funnel into a smaller Event Hub
+larger Event Hubs with hundreds of partitions and funnel into a smaller Event Hubs
 with just a handful of partitions, more suitable for handling the subset with
 limited processing resources. Conversely, a
 [consolidation](#consolidation-and-normalization-of-event-streams) may funnel
-data from several smaller Event Hubs into a single, larger Event Hub with more
+data from several smaller Event Hubs into a single, larger Event Hubs with more
 partitions to cope with the consolidated throughput and processing needs.

 The criterion for keeping events together is the partition key and not the
 original partition ID. Further considerations about relative order and how to
-perform a failover from one Event Hub to the next without relying on the same
+perform a failover from one Event Hubs to the next without relying on the same
 scope of stream offsets is discussed in [replication][4] pattern description.

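As a small illustration of the point in the hunk above that the partition key, not the partition ID, is what keeps related events together, the following sketch (Python `azure-eventhub` SDK, placeholder names) publishes a group of events under one key and lets the target hub map that key to whichever partition its own partition count dictates.

```python
# Events that share a partition key land on the same partition of whichever event
# hub they are copied into, regardless of that hub's partition count; a pinned
# partition ID would not survive such a copy. Names are placeholders.
from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    "<target-namespace-connection-string>", eventhub_name="consolidated")

with producer:
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData(b'{"deviceId": "device-42", "temp": 21.5}'))
    batch.add(EventData(b'{"deviceId": "device-42", "temp": 21.7}'))
    producer.send_batch(batch)
```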
@@ -135,7 +135,7 @@ Guidance:

 ![Validation, reduction, enrichment](media/event-hubs-federation-overview/validation-enrichment.png)

-Event streams may be submitted into an Event Hub by clients external to your own
+Event streams may be submitted into an Event Hubs by clients external to your own
 solution. Such event streams may require for externally submitted events to be
 checked for compliance with a given schema, and for non-compliant events to be
 dropped.
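A validation step of the kind described in the hunk above can be sketched as a consumer that forwards only schema-compliant events and drops the rest. The required field names, hub names, and connection strings below are illustrative assumptions, not part of the article.

```python
# Drop externally submitted events that don't match the expected shape and forward
# the compliant ones to an internal hub. Field names, hub names, and connection
# strings are illustrative only.
import json

from azure.eventhub import EventData, EventHubConsumerClient, EventHubProducerClient

REQUIRED_FIELDS = {"deviceId", "timestamp", "payload"}

producer = EventHubProducerClient.from_connection_string(
    "<internal-namespace-connection-string>", eventhub_name="validated")
consumer = EventHubConsumerClient.from_connection_string(
    "<ingress-namespace-connection-string>", consumer_group="$Default",
    eventhub_name="external-ingress")


def on_event(partition_context, event):
    try:
        body = json.loads(event.body_as_str())
    except (ValueError, UnicodeDecodeError):
        return  # not JSON at all: drop
    if not REQUIRED_FIELDS.issubset(body):
        return  # non-compliant: drop, or route to a dead-letter hub instead
    batch = producer.create_batch(partition_key=str(body["deviceId"]))
    batch.add(EventData(json.dumps(body).encode("utf-8")))
    producer.send_batch(batch)


with consumer, producer:
    consumer.receive(on_event=on_event, starting_position="@latest")
```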
@@ -167,8 +167,8 @@ and Apache Storm.

 If your solution primarily uses Service Bus or Event Grid, you can make these
 events easily accessible to such analytics systems and also for archival with
-Event Hubs Capture if you funnel them into Event Hub. Event Grid can do so
-natively with its [Event Hub integration](../event-grid/handler-event-hubs.md),
+Event Hubs Capture if you funnel them into Event Hubs. Event Grid can do so
+natively with its [Event Hubs integration](../event-grid/handler-event-hubs.md),
 for Service Bus you follow the [Service Bus replication
 guidance](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopyToEventHub).

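The Service Bus route mentioned in the hunk above can be approximated with a small copy loop that drains a queue and republishes the messages into an event hub, where they become visible to analytics and to Event Hubs Capture. The queue name, hub name, and connection strings in this Python sketch are placeholders; the linked sample achieves the same flow with Azure Functions and is the recommended starting point.

```python
# Drain a Service Bus queue and republish its messages into an event hub so that
# they become visible to stream analytics and Event Hubs Capture. Queue name, hub
# name, and connection strings are placeholders.
from azure.eventhub import EventData, EventHubProducerClient
from azure.servicebus import ServiceBusClient

producer = EventHubProducerClient.from_connection_string(
    "<eventhubs-namespace-connection-string>", eventhub_name="orders-stream")
sb_client = ServiceBusClient.from_connection_string(
    "<servicebus-namespace-connection-string>")

with sb_client, producer:
    with sb_client.get_queue_receiver(queue_name="orders") as receiver:
        while True:
            messages = receiver.receive_messages(max_message_count=50, max_wait_time=5)
            if not messages:
                continue
            batch = producer.create_batch()
            for msg in messages:
                # Simplistic body handling; adjust for binary payloads.
                batch.add(EventData(str(msg).encode("utf-8")))
            producer.send_batch(batch)
            # Settle only after the copy has been sent successfully.
            for msg in messages:
                receiver.complete_message(msg)
```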
@@ -213,7 +213,7 @@ pattern.
 While a true "publish-subscribe" capability leaves it to subscribers to pick the
 events they want, the splitting pattern has the producer map events to
 partitions by a predetermined distribution model and designated consumers then
-exclusively pull from "their" partition. With the Event Hub buffering the
+exclusively pull from "their" partition. With the Event Hubs buffering the
 overall traffic, the content of a particular partition, representing a fraction
 of the original throughput volume, may then be replicated into a queue for
 reliable, transactional, competing consumer consumption.
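A designated consumer of the splitting pattern described above might look like the following sketch: it pulls exclusively from one partition and replicates that slice into a Service Bus queue for reliable, competing-consumer processing. Partition ID `3`, the queue name, and the connection strings are placeholder assumptions.

```python
# A designated consumer for one slice of the stream: read exclusively from a single
# partition and replicate its contents into a Service Bus queue for competing
# consumers. Partition ID, queue name, and connection strings are placeholders.
from azure.eventhub import EventHubConsumerClient
from azure.servicebus import ServiceBusClient, ServiceBusMessage

consumer = EventHubConsumerClient.from_connection_string(
    "<eventhubs-namespace-connection-string>", consumer_group="$Default",
    eventhub_name="telemetry")
sb_client = ServiceBusClient.from_connection_string(
    "<servicebus-namespace-connection-string>")
sender = sb_client.get_queue_sender(queue_name="partition-3-work")


def on_event(partition_context, event):
    sender.send_messages(ServiceBusMessage(event.body_as_bytes()))


with consumer, sb_client, sender:
    # partition_id restricts this receiver to a single partition of the hub.
    consumer.receive(on_event=on_event, partition_id="3", starting_position="@latest")
```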
@@ -222,7 +222,7 @@ Many scenarios where Event Hubs is primarily used for moving events within an
 application within a region have some cases where select events, maybe just from
 a single partition, also have to be made available elsewhere. This scenario is similar to
 the splitting scenario, but might use a scalable router that considers all the
-messages arriving in an Event Hub and cherry-picks just a few for onward routing
+messages arriving in an Event Hubs and cherry-picks just a few for onward routing
 and might differentiate routing targets by event metadata or content.

 Guidance:
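One way to realize the router described in the hunk above is a consumer that inspects every event's metadata and forwards only the few that match known targets. The `eventType` application property, the target hub names, and the connection strings in this sketch are assumptions for illustration.

```python
# A content-based router: look at every event on the source hub, cherry-pick the
# few whose metadata matches a known target, and forward them. The "eventType"
# property, target hubs, and connection strings are assumptions for illustration.
from azure.eventhub import EventData, EventHubConsumerClient, EventHubProducerClient

targets = {
    "alert": EventHubProducerClient.from_connection_string(
        "<ops-namespace-connection-string>", eventhub_name="alerts"),
    "audit": EventHubProducerClient.from_connection_string(
        "<compliance-namespace-connection-string>", eventhub_name="audit-trail"),
}
consumer = EventHubConsumerClient.from_connection_string(
    "<source-namespace-connection-string>", consumer_group="$Default",
    eventhub_name="telemetry")


def on_event(partition_context, event):
    # Application property keys may arrive as bytes; normalize before matching.
    props = {(k.decode() if isinstance(k, bytes) else k): v
             for k, v in (event.properties or {}).items()}
    event_type = props.get("eventType", b"")
    if isinstance(event_type, bytes):
        event_type = event_type.decode("utf-8")
    producer = targets.get(event_type)
    if producer is None:
        return  # everything else stays only on the source hub
    batch = producer.create_batch()
    batch.add(EventData(event.body_as_bytes()))
    producer.send_batch(batch)


with consumer:
    consumer.receive(on_event=on_event, starting_position="@latest")
```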
@@ -413,4 +413,4 @@ between Event Hubs and various other eventing and messaging systems:
 [8]: event-hubs-federation-patterns.md#log-projection
 [9]: process-data-azure-stream-analytics.md
 [10]: event-hubs-federation-patterns.md#replication
-[11]: event-hubs-kafka-mirror-maker-tutorial.md
+[11]: event-hubs-kafka-mirror-maker-tutorial.md