
Commit 6359afc

Merge pull request #205983 from spelluru/egridmsi0721
JPG -> PNG
2 parents ad67182 + fa778ce commit 6359afc

15 files changed (+21 −21 lines changed)

articles/event-hubs/event-hubs-federation-overview.md

Lines changed: 21 additions & 21 deletions
@@ -48,12 +48,12 @@ the respective pattern.
 ### Resiliency against regional availability events
 
 ![Regional
-Availability](media/event-hubs-federation-overview/regional-availability.jpg)
+Availability](media/event-hubs-federation-overview/regional-availability.png)
 
 While maximum availability and reliability are the top operational priorities
 for Event Hubs, there are nevertheless many ways in which a producer or consumer
-might be prevented from talking to its assigned "primary" Event Hub because of
-networking or name resolution issues, or where an Event Hub might indeed be
+might be prevented from talking to its assigned "primary" Event Hubs because of
+networking or name resolution issues, or where an Event Hubs might indeed be
 temporarily unresponsive or returning errors.
 
 Such conditions aren't "disastrous" such that you'll want to abandon the
@@ -65,16 +65,16 @@ than a few minutes or even seconds.
 There are two foundational patterns to address such scenarios:
 
 - The [replication][4] pattern is about replicating the contents of a primary
-  Event Hub to a secondary Event Hub, whereby the primary Event Hub is generally
+  Event Hubs to a secondary Event Hubs, whereby the primary Event Hubs is generally
   used by the application for both producing and consuming events and the
-  secondary serves as a fallback option in case the primary Event Hub is
+  secondary serves as a fallback option in case the primary Event Hubs is
   becoming unavailable. Since replication is unidirectional, from the primary to
   the secondary, a switchover of both producers and consumers from an
   unavailable primary to the secondary will cause the old primary to no
   longer receive new events and it will therefore be no longer current.
   Pure replication is therefore only suitable for one-way failover scenarios. Once
   the failover has been performed, the old primary is abandoned and a new
-  secondary Event Hub needs to be created in a different target region.
+  secondary Event Hubs needs to be created in a different target region.
 - The [merge][5] pattern extends the replication pattern by performing a
   continuous merge of the contents of two or more Event Hubs. Each event
   originally produced into one of the Event Hubs included in the scheme is
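The unidirectional-replication and merge patterns touched by the hunk above can be sketched in miniature. This is a hedged illustration only: plain Python lists stand in for Event Hubs, and the `repl-from` annotation name is invented for the sketch (replication tasks typically mark copies with origin metadata so a bidirectional pair converges instead of looping):

```python
# In-memory sketch of the merge pattern: two "Event Hubs" (plain lists)
# are kept in sync by a pair of unidirectional replication passes.
# Replicated copies are annotated with their origin so they are not
# replicated back, which would otherwise create an infinite loop.

ANNOTATION = "repl-from"  # invented property name for this sketch

def replicate(source, target, source_name):
    """Copy events from source to target, skipping events that are
    themselves replicated copies (they carry the origin annotation)."""
    for event in source:
        if ANNOTATION not in event:
            copy = dict(event)
            copy[ANNOTATION] = source_name
            target.append(copy)

hub_a = [{"body": "a1"}]
hub_b = [{"body": "b1"}]

replicate(hub_a, hub_b, "hub-a")  # a1 is copied into hub_b
replicate(hub_b, hub_a, "hub-b")  # b1 is copied into hub_a; a1's copy is skipped

print([e["body"] for e in hub_a])  # ['a1', 'b1']
print([e["body"] for e in hub_b])  # ['b1', 'a1']
```

After both passes each "hub" holds both events, and the annotation is the design point that distinguishes merge from a naive copy loop.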
@@ -91,16 +91,16 @@ may differ. This is especially true for scenarios where the partition count of
 source and target Event Hubs differ, which is desirable for several of the
 extended patterns described here. A [splitter or
 router](#splitting-and-routing-of-event-streams) may obtain a slice of a much
-larger Event Hub with hundreds of partitions and funnel into a smaller Event Hub
+larger Event Hubs with hundreds of partitions and funnel into a smaller Event Hubs
 with just a handful of partitions, more suitable for handling the subset with
 limited processing resources. Conversely, a
 [consolidation](#consolidation-and-normalization-of-event-streams) may funnel
-data from several smaller Event Hubs into a single, larger Event Hub with more
+data from several smaller Event Hubs into a single, larger Event Hubs with more
 partitions to cope with the consolidated throughput and processing needs.
 
 The criterion for keeping events together is the partition key and not the
 original partition ID. Further considerations about relative order and how to
-perform a failover from one Event Hub to the next without relying on the same
+perform a failover from one Event Hubs to the next without relying on the same
 scope of stream offsets is discussed in [replication][4] pattern description.
 
 
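The partition-key point made in the hunk above (events are kept together by key, not by original partition ID, even when source and target partition counts differ) can be illustrated with an assumed hashing scheme. Event Hubs' actual key hashing is internal to the service; `zlib.crc32` here is only a stand-in:

```python
# Sketch: partition-key affinity is independent of the partition count.
# A stable hash of the key modulo the partition count picks the partition;
# zlib.crc32 is only a stand-in for Event Hubs' internal key hashing.
import zlib

def partition_for(key: str, partition_count: int) -> int:
    return zlib.crc32(key.encode()) % partition_count

events = [("device-17", 1), ("device-17", 2), ("device-42", 3)]

# Whether the hub is large (32 partitions) or small (4 partitions),
# every event carrying the same key maps to exactly one partition.
for count in (32, 4):
    placements = {}
    for key, _ in events:
        placements.setdefault(key, set()).add(partition_for(key, count))
    assert all(len(parts) == 1 for parts in placements.values())
```

This is why a splitter can funnel a key-consistent slice of a 100-partition hub into a 4-partition hub without breaking per-key ordering.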
@@ -111,7 +111,7 @@ Guidance:
 ### Latency optimization
 
 ![Latency
-Optimization](media/event-hubs-federation-overview/latency-optimization.jpg)
+Optimization](media/event-hubs-federation-overview/latency-optimization.png)
 
 Event streams are written once by producers, but may be read any number of times
 by event consumers. For scenarios where an event stream in a region is shared by
@@ -133,9 +133,9 @@ Guidance:
 
 ### Validation, reduction, and enrichment
 
-![Validation, reduction, enrichment](media/event-hubs-federation-overview/validation-enrichment.jpg)
+![Validation, reduction, enrichment](media/event-hubs-federation-overview/validation-enrichment.png)
 
-Event streams may be submitted into an Event Hub by clients external to your own
+Event streams may be submitted into an Event Hubs by clients external to your own
 solution. Such event streams may require for externally submitted events to be
 checked for compliance with a given schema, and for non-compliant events to be
 dropped.
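A minimal sketch of the validation step described in the hunk above, assuming a toy "schema" that is just a set of required field names (a production pipeline would more likely validate against JSON Schema or Avro):

```python
# Sketch of the validation step: externally submitted events are checked
# for compliance with a schema and non-compliant events are dropped.
# REQUIRED_FIELDS and the event field names are illustrative only.
REQUIRED_FIELDS = {"device_id", "timestamp", "reading"}

def validate(events):
    """Partition incoming events into compliant and dropped."""
    compliant, dropped = [], []
    for event in events:
        target = compliant if REQUIRED_FIELDS <= event.keys() else dropped
        target.append(event)
    return compliant, dropped

events = [
    {"device_id": "d1", "timestamp": 1, "reading": 20.5},
    {"device_id": "d2", "reading": 7.0},  # missing timestamp -> dropped
]
ok, bad = validate(events)
print(len(ok), len(bad))  # 1 1
```

In a federation setup this filter would sit between the externally facing hub and the internal one, so downstream consumers only ever see compliant events.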
@@ -157,7 +157,7 @@ Guidance:
 
 ### Integration with analytics services
 
-![Integration with analytics services](media/event-hubs-federation-overview/integration.jpg)
+![Integration with analytics services](media/event-hubs-federation-overview/integration.png)
 
 Several of Azure's cloud-native analytics services like Azure Stream Analytics
 or Azure Synapse work best with streamed or pre-batched data served up from
@@ -167,8 +167,8 @@ and Apache Storm.
 
 If your solution primarily uses Service Bus or Event Grid, you can make these
 events easily accessible to such analytics systems and also for archival with
-Event Hubs Capture if you funnel them into Event Hub. Event Grid can do so
-natively with its [Event Hub integration](../event-grid/handler-event-hubs.md),
+Event Hubs Capture if you funnel them into Event Hubs. Event Grid can do so
+natively with its [Event Hubs integration](../event-grid/handler-event-hubs.md),
 for Service Bus you follow the [Service Bus replication
 guidance](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopyToEventHub).
 
@@ -179,7 +179,7 @@ Guidance:
 
 ### Consolidation and normalization of event streams
 
-![Consolidation and normalization of event streams](media/event-hubs-federation-overview/consolidation.jpg)
+![Consolidation and normalization of event streams](media/event-hubs-federation-overview/consolidation.png)
 
 Global solutions are often composed of regional footprints that are largely
 independent including having their own analytics capabilities, but
@@ -202,7 +202,7 @@ Guidance:
 
 ### Splitting and routing of event streams
 
-![Splitting and routing of event streams](media/event-hubs-federation-overview/splitting.jpg)
+![Splitting and routing of event streams](media/event-hubs-federation-overview/splitting.png)
 
 Azure Event Hubs is occasionally used in "publish-subscribe" style scenarios
 where an incoming torrent of ingested events far exceeds the capacity of Azure
@@ -213,7 +213,7 @@ pattern.
 While a true "publish-subscribe" capability leaves it to subscribers to pick the
 events they want, the splitting pattern has the producer map events to
 partitions by a predetermined distribution model and designated consumers then
-exclusively pull from "their" partition. With the Event Hub buffering the
+exclusively pull from "their" partition. With the Event Hubs buffering the
 overall traffic, the content of a particular partition, representing a fraction
 of the original throughput volume, may then be replicated into a queue for
 reliable, transactional, competing consumer consumption.
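The splitting step described in the hunk above can be sketched as a fan-out into per-partition queues. The `partition_of` distribution model here is an arbitrary illustrative choice, not an Event Hubs API, and the per-partition lists stand in for the real queues that competing consumers would drain:

```python
# Sketch of the splitting pattern: a predetermined distribution model maps
# each event to a partition; each per-partition queue is then drained
# exclusively by its designated consumer (or forwarded into a real queue
# for competing-consumer processing).
from collections import defaultdict

def split(events, partition_count, partition_of):
    queues = defaultdict(list)
    for event in events:
        queues[partition_of(event) % partition_count].append(event)
    return queues

events = [{"key": k, "seq": i} for i, k in enumerate(["a", "b", "a", "c"])]
queues = split(events, partition_count=2,
               partition_of=lambda e: ord(e["key"][0]))

print({p: [e["seq"] for e in q] for p, q in sorted(queues.items())})
# {0: [1], 1: [0, 2, 3]}
```

Note that within each queue the original arrival order is preserved, which is what lets a downstream consumer treat its slice as an ordered sub-stream.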
@@ -222,15 +222,15 @@ Many scenarios where Event Hubs is primarily used for moving events within an
 application within a region have some cases where select events, maybe just from
 a single partition, also have to be made available elsewhere. This scenario is similar to
 the splitting scenario, but might use a scalable router that considers all the
-messages arriving in an Event Hub and cherry-picks just a few for onward routing
+messages arriving in an Event Hubs and cherry-picks just a few for onward routing
 and might differentiate routing targets by event metadata or content.
 
 Guidance:
 - [Routing pattern][7]
 
 ### Log projections
 
-![Log projection](media/event-hubs-federation-overview/log-projection.jpg)
+![Log projection](media/event-hubs-federation-overview/log-projection.png)
 
 In some scenarios, you will want to have access to the latest value sent for any
 substream of an event, and commonly distinguished by the partition key. In
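The log-projection idea introduced in the hunk above reduces to a fold that keeps the latest value per partition key. A dict stands in here for a real projection store (for example a database or cache), and the event field names are illustrative:

```python
# Sketch of a log projection: fold the event stream into a table holding
# the latest value per partition key. A dict stands in for a real store.
def project_latest(events):
    latest = {}
    for event in events:  # events are assumed to arrive in stream order
        latest[event["key"]] = event["value"]
    return latest

stream = [
    {"key": "sensor-1", "value": 20},
    {"key": "sensor-2", "value": 7},
    {"key": "sensor-1", "value": 21},  # supersedes the first sensor-1 value
]
print(project_latest(stream))  # {'sensor-1': 21, 'sensor-2': 7}
```

Because later events for a key overwrite earlier ones, replaying the stream from the start always reconstructs the same projection, which is what makes the projection disposable and rebuildable.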
@@ -413,4 +413,4 @@ between Event Hubs and various other eventing and messaging systems:
 [8]: event-hubs-federation-patterns.md#log-projection
 [9]: process-data-azure-stream-analytics.md
 [10]: event-hubs-federation-patterns.md#replication
-[11]: event-hubs-kafka-mirror-maker-tutorial.md
+[11]: event-hubs-kafka-mirror-maker-tutorial.md
Binary files changed (images, not shown): 132 KB, 71.7 KB, 91.3 KB, 96.4 KB, and one additional binary file.