articles/event-hubs/event-processor-balance-partition-load.md
4 additions & 4 deletions
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: conceptual
ms.tgt_pltfrm: na
ms.workload: na
-ms.date: 01/16/2020
+ms.date: 05/28/2020
ms.author: shvija

---
@@ -41,7 +41,7 @@ When designing the consumer in a distributed environment, the scenario must hand
## Event processor or consumer client

-You don't need to build your own solution to meet these requirements. The Azure Event Hubs SDKs provide this functionality. In .NET or Java SDKs, you use an event processor client (EventProcessorClient), and in Python and Java Script SDKs, you use EventHubConsumerClient. In the old version of SDK, it was the event processor host (EventProcessorHost) that supported these features.
+You don't need to build your own solution to meet these requirements. The Azure Event Hubs SDKs provide this functionality. In the .NET and Java SDKs, you use an event processor client (EventProcessorClient); in the Python and JavaScript SDKs, you use EventHubConsumerClient. In older versions of the SDKs, the event processor host (EventProcessorHost) supported these features.

For the majority of production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault-tolerant manner while providing a means to checkpoint its progress. Event processor clients can also work cooperatively within the context of a consumer group for a given event hub. Clients automatically manage distribution and balancing of work as instances become available or unavailable for the group.
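The paragraphs above describe the processor clients in the abstract. As a minimal sketch only, here is what using EventHubConsumerClient with a blob checkpoint store can look like in Python, assuming the azure-eventhub and azure-eventhub-checkpointstoreblob packages; the connection strings, container name, and event hub name are placeholders:

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Placeholder connection details -- substitute your own values.
STORAGE_CONN_STR = "<azure-storage-connection-string>"
EVENTHUB_CONN_STR = "<event-hubs-namespace-connection-string>"

# The checkpoint store persists both checkpoints and the partition
# ownership records described in the next section.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    STORAGE_CONN_STR, container_name="<blob-container>"
)

client = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONN_STR,
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Called for each event; partition_context identifies the partition
    # this processor instance currently owns.
    print(f"partition {partition_context.partition_id}: {event.body_as_str()}")
    partition_context.update_checkpoint(event)  # record progress

with client:
    # Blocks and processes events; multiple instances of this program
    # automatically share the partitions of the event hub.
    client.receive(on_event=on_event, starting_position="-1")
```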
@@ -51,7 +51,7 @@ An event processor instance typically owns and processes events from one or more
Each event processor is given a unique identifier and claims ownership of partitions by adding or updating an entry in a checkpoint store. All event processor instances communicate with this store periodically to update their own processing state and to learn about other active instances. This data is then used to balance the load among the active processors. New instances can join the processing pool to scale up. When instances go down, either due to failures or to scale down, partition ownership is gracefully transferred to other active processors.

-Partition ownership records in the checkpoint store keeps track of Event Hubs namespace, event hub name, consumer group, event processor identifier (also known as owner), partition id and the last modified time.
+Partition ownership records in the checkpoint store keep track of the Event Hubs namespace, event hub name, consumer group, event processor identifier (also known as the owner), partition ID, and the last modified time.
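For illustration only, here is a hypothetical sketch of an ownership record and the fair-share claim logic the paragraph above describes. The field and function names are assumptions for this sketch, not the SDK's actual types or storage format:

```python
import time
from dataclasses import dataclass

@dataclass
class PartitionOwnership:
    # Fields named in the article; the SDK's real record format differs.
    fully_qualified_namespace: str
    eventhub_name: str
    consumer_group: str
    owner_id: str          # unique identifier of the event processor
    partition_id: str
    last_modified_time: float

def partitions_to_claim(ownerships, all_partitions, my_id, expiry_secs=60):
    """Greedy fair-share balancing sketch: claim one partition per cycle
    until this processor owns roughly partitions / active processors."""
    now = time.time()
    # Ownership entries that haven't been refreshed recently belong to
    # processors that went down; treat them as expired.
    live = [o for o in ownerships if now - o.last_modified_time < expiry_secs]
    owners = {o.owner_id for o in live} | {my_id}
    fair_share = len(all_partitions) // len(owners)
    mine = [o for o in live if o.owner_id == my_id]
    if fair_share and len(mine) >= fair_share:
        return []  # already at or above fair share
    owned = {o.partition_id for o in live}
    unclaimed = [p for p in all_partitions if p not in owned]
    # Claim at most one partition per balancing cycle; the real algorithm
    # can also steal a partition from an overloaded owner.
    return unclaimed[:1]
```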
@@ -89,7 +89,7 @@ When the checkpoint is performed to mark an event as processed, an entry in chec
## Thread safety and processor instances

-By default, event processor or consumer is thread safe and behaves in a synchronous manner. When events arrive for a partition, the function that processes the events is called. Subsequent messages and calls to this function queue up behind the scenes as the message pump continues to run in the background on other threads. This thread safety removes the need for thread-safe collections and dramatically increases performance.
+By default, the function that processes the events is called sequentially for a given partition. Subsequent events and calls to this function from the same partition queue up behind the scenes as the event pump continues to run in the background on other threads. Note that events from different partitions can be processed concurrently, and any shared state that is accessed across partitions has to be synchronized.
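A hedged sketch of that synchronization rule, in the same Python callback style as the earlier example (the counter and lock here are illustrative, not part of the SDK):

```python
import threading

# Shared across partitions, so it must be synchronized: the on_event
# callback can run concurrently for different partitions.
totals_lock = threading.Lock()
events_processed = 0

def on_event(partition_context, event):
    global events_processed
    # Within a single partition, calls arrive here sequentially, so
    # per-partition state needs no locking. Cross-partition state,
    # like this counter, does.
    with totals_lock:
        events_processed += 1
    partition_context.update_checkpoint(event)
```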