@@ -48,12 +48,12 @@ the respective pattern.
 ### Resiliency against regional availability events
 
 ![Regional
-Availability](media/event-hubs-federation-overview/regional-availability.jpg)
+Availability](media/event-hubs-federation-overview/regional-availability.png)
 
 While maximum availability and reliability are the top operational priorities
 for Event Hubs, there are nevertheless many ways in which a producer or consumer
-might be prevented from talking to its assigned "primary" Event Hub because of
-networking or name resolution issues, or where an Event Hub might indeed be
+might be prevented from talking to its assigned "primary" event hub because of
+networking or name resolution issues, or where an event hub might indeed be
 temporarily unresponsive or returning errors.
 
 Such conditions aren't "disastrous" such that you'll want to abandon the
@@ -65,16 +65,16 @@ than a few minutes or even seconds.
 
 There are two foundational patterns to address such scenarios:
 - The [replication][4] pattern is about replicating the contents of a primary
-  Event Hub to a secondary Event Hub, whereby the primary Event Hub is generally
+  event hub to a secondary event hub, whereby the primary event hub is generally
   used by the application for both producing and consuming events and the
-  secondary serves as a fallback option in case the primary Event Hub is
+  secondary serves as a fallback option in case the primary event hub is
   becoming unavailable. Since replication is unidirectional, from the primary to
   the secondary, a switchover of both producers and consumers from an
   unavailable primary to the secondary will cause the old primary to no
   longer receive new events and it will therefore no longer be current.
   Pure replication is therefore only suitable for one-way failover scenarios. Once
   the failover has been performed, the old primary is abandoned and a new
-  secondary Event Hub needs to be created in a different target region.
+  secondary event hub needs to be created in a different target region.
 - The [merge][5] pattern extends the replication pattern by performing a
   continuous merge of the contents of two or more Event Hubs. Each event
   originally produced into one of the Event Hubs included in the scheme is
@@ -91,16 +91,16 @@ may differ. This is especially true for scenarios where the partition count of
 source and target Event Hubs differ, which is desirable for several of the
 extended patterns described here. A [splitter or
 router](#splitting-and-routing-of-event-streams) may obtain a slice of a much
-larger Event Hub with hundreds of partitions and funnel into a smaller Event Hub
+larger event hub with hundreds of partitions and funnel it into a smaller event hub
 with just a handful of partitions, more suitable for handling the subset with
 limited processing resources. Conversely, a
 [consolidation](#consolidation-and-normalization-of-event-streams) may funnel
-data from several smaller Event Hubs into a single, larger Event Hub with more
+data from several smaller Event Hubs into a single, larger event hub with more
 partitions to cope with the consolidated throughput and processing needs.
 
 The criterion for keeping events together is the partition key and not the
 original partition ID. Further considerations about relative order and how to
-perform a failover from one Event Hub to the next without relying on the same
+perform a failover from one event hub to the next without relying on the same
 scope of stream offsets is discussed in the [replication][4] pattern description.
 
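The key rule above, that events are kept together by partition key rather than by original partition ID, can be sketched with a minimal, SDK-free simulation. This is an illustrative sketch only, using in-memory lists in place of real Event Hubs and hypothetical field names; a production replication task would use the Azure Event Hubs SDK or Azure Functions.

```python
def replicate(primary: list[dict], secondary: list[dict]) -> None:
    """Unidirectionally copy events not yet present in the secondary,
    preserving order and the partition key, but deliberately not the
    original partition ID or stream offset."""
    for event in primary[len(secondary):]:
        secondary.append({"partition_key": event["partition_key"],
                          "body": event["body"]})

primary: list[dict] = []
secondary: list[dict] = []
primary.append({"partition_key": "device-1", "body": "reading 1"})
primary.append({"partition_key": "device-2", "body": "reading 2"})
replicate(primary, secondary)

# After a one-way failover, producers write to the secondary only;
# the abandoned primary receives no new events and is no longer current.
secondary.append({"partition_key": "device-1", "body": "reading 3"})
print(len(primary), len(secondary))  # prints: 2 3
```

Because the copy keeps only the partition key, the secondary is free to have a different partition count than the primary, which is exactly why the splitter and consolidation variants work.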
@@ -111,7 +111,7 @@ Guidance:
 ### Latency optimization
 
 ![Latency
-Optimization](media/event-hubs-federation-overview/latency-optimization.jpg)
+Optimization](media/event-hubs-federation-overview/latency-optimization.png)
 
 Event streams are written once by producers, but may be read any number of times
 by event consumers. For scenarios where an event stream in a region is shared by
@@ -133,9 +133,9 @@ Guidance:
 
 ### Validation, reduction, and enrichment
 
-![Validation, reduction, enrichment](media/event-hubs-federation-overview/validation-enrichment.jpg)
+![Validation, reduction, enrichment](media/event-hubs-federation-overview/validation-enrichment.png)
 
-Event streams may be submitted into an Event Hub by clients external to your own
+Event streams may be submitted into an event hub by clients external to your own
 solution. Such event streams may require externally submitted events to be
 checked for compliance with a given schema, and for non-compliant events to be
 dropped.
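The validation step just described can be sketched as a simple filter. This is an illustrative sketch only: the schema, field names, and types are hypothetical, and a production pipeline would more likely validate against Azure Schema Registry or JSON Schema definitions.

```python
# Hypothetical schema: required field name -> expected Python type.
REQUIRED_FIELDS = {"device_id": str, "temperature": float}

def is_compliant(event: dict) -> bool:
    """An event is compliant when every required field is present
    and carries a value of the expected type."""
    return all(isinstance(event.get(name), ftype)
               for name, ftype in REQUIRED_FIELDS.items())

def validate_stream(events: list[dict]) -> list[dict]:
    """Forward only compliant events; non-compliant events are dropped."""
    return [e for e in events if is_compliant(e)]

incoming = [
    {"device_id": "a1", "temperature": 21.5},   # compliant
    {"device_id": "a2"},                        # missing field: dropped
    {"device_id": "a3", "temperature": "hot"},  # wrong type: dropped
]
print(len(validate_stream(incoming)))  # prints: 1
```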
@@ -157,7 +157,7 @@ Guidance:
 
 ### Integration with analytics services
 
-![Integration with analytics services](media/event-hubs-federation-overview/integration.jpg)
+![Integration with analytics services](media/event-hubs-federation-overview/integration.png)
 
 Several of Azure's cloud-native analytics services like Azure Stream Analytics
 or Azure Synapse work best with streamed or pre-batched data served up from
@@ -167,8 +167,8 @@ and Apache Storm.
 
 If your solution primarily uses Service Bus or Event Grid, you can make these
 events easily accessible to such analytics systems and also for archival with
-Event Hubs Capture if you funnel them into Event Hub. Event Grid can do so
-natively with its [Event Hub integration](../event-grid/handler-event-hubs.md),
+Event Hubs Capture if you funnel them into an event hub. Event Grid can do so
+natively with its [Event Hubs integration](../event-grid/handler-event-hubs.md),
 for Service Bus you follow the [Service Bus replication
 guidance](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopyToEventHub).
@@ -179,7 +179,7 @@ Guidance:
 
 ### Consolidation and normalization of event streams
 
-![Consolidation and normalization of event streams](media/event-hubs-federation-overview/consolidation.jpg)
+![Consolidation and normalization of event streams](media/event-hubs-federation-overview/consolidation.png)
 
 Global solutions are often composed of regional footprints that are largely
 independent, including having their own analytics capabilities, but
@@ -202,7 +202,7 @@ Guidance:
 
 ### Splitting and routing of event streams
 
-![Splitting and routing of event streams](media/event-hubs-federation-overview/splitting.jpg)
+![Splitting and routing of event streams](media/event-hubs-federation-overview/splitting.png)
 
 Azure Event Hubs is occasionally used in "publish-subscribe" style scenarios
 where an incoming torrent of ingested events far exceeds the capacity of Azure
@@ -213,7 +213,7 @@ pattern.
 While a true "publish-subscribe" capability leaves it to subscribers to pick the
 events they want, the splitting pattern has the producer map events to
 partitions by a predetermined distribution model and designated consumers then
-exclusively pull from "their" partition. With the Event Hub buffering the
+exclusively pull from "their" partition. With the event hub buffering the
 overall traffic, the content of a particular partition, representing a fraction
 of the original throughput volume, may then be replicated into a queue for
 reliable, transactional, competing consumer consumption.
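The splitting mechanics above can be sketched with in-memory partitions. This is an illustrative sketch only: the partition count, key names, and the toy hash are hypothetical stand-ins for the hashing a real Event Hubs client performs when a partition key is supplied.

```python
PARTITION_COUNT = 4  # hypothetical; a real event hub has a configured count

def partition_for(partition_key: str) -> int:
    """Deterministically map a partition key to a partition index; real
    Event Hubs clients hash the key the same way on every send."""
    return sum(partition_key.encode()) % PARTITION_COUNT

partitions: dict[int, list[str]] = {p: [] for p in range(PARTITION_COUNT)}

def produce(partition_key: str, body: str) -> None:
    """The producer maps each event to a partition by its key; a designated
    consumer then pulls exclusively from "its" partition."""
    partitions[partition_for(partition_key)].append(body)

# Events with the same key always land in the same partition, so the
# consumer owning that partition sees that key's substream in order.
produce("tenant-a", "event 1")
produce("tenant-a", "event 2")
print(len(partitions[partition_for("tenant-a")]))  # prints: 2
```

The contents of any single partition, a fraction of the total volume, can then be replicated onward, for example into a Service Bus queue for competing consumers.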
@@ -222,15 +222,15 @@ Many scenarios where Event Hubs is primarily used for moving events within an
 application within a region have some cases where select events, maybe just from
 a single partition, also have to be made available elsewhere. This scenario is similar to
 the splitting scenario, but might use a scalable router that considers all the
-messages arriving in an Event Hub and cherry-picks just a few for onward routing
+messages arriving in an event hub and cherry-picks just a few for onward routing
 and might differentiate routing targets by event metadata or content.
 
 Guidance:
 - [Routing pattern][7]
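The router described above, which inspects all arriving events and cherry-picks a few by metadata, can be sketched as a predicate-per-target filter. This is an illustrative sketch only; the target names and the `type` metadata field are hypothetical.

```python
from typing import Callable

def route(events: list[dict],
          routes: dict[str, Callable[[dict], bool]]) -> dict[str, list[dict]]:
    """Copy each event to every target whose predicate accepts it;
    events matching no predicate remain only in the source stream."""
    targets: dict[str, list[dict]] = {name: [] for name in routes}
    for event in events:
        for name, predicate in routes.items():
            if predicate(event):
                targets[name].append(event)
    return targets

stream = [
    {"type": "telemetry", "body": "t1"},
    {"type": "alert", "body": "a1"},
    {"type": "telemetry", "body": "t2"},
]
targets = route(stream, {
    "alerts-queue": lambda e: e["type"] == "alert",
})
print(len(targets["alerts-queue"]))  # prints: 1
```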
 
 ### Log projections
 
-![Log projection](media/event-hubs-federation-overview/log-projection.jpg)
+![Log projection](media/event-hubs-federation-overview/log-projection.png)
 
 In some scenarios, you will want to have access to the latest value sent for any
 substream of events, commonly distinguished by the partition key. In
@@ -413,4 +413,4 @@ between Event Hubs and various other eventing and messaging systems:
 [8]: event-hubs-federation-patterns.md#log-projection
 [9]: process-data-azure-stream-analytics.md
 [10]: event-hubs-federation-patterns.md#replication
-[11]: event-hubs-kafka-mirror-maker-tutorial.md
+[11]: event-hubs-kafka-mirror-maker-tutorial.md