articles/event-hubs/dynamically-add-partitions.md (6 additions, 10 deletions)
@@ -24,7 +24,7 @@ You can specify the number of partitions at the time of creating an event hub. I
## Update the partition count
-This section shows you how to update partition count of an event hub in different ways (PowerShell, CLI, etc.).
+This section shows you how to update the partition count of an event hub in different ways (PowerShell, CLI, and so on).
### PowerShell
Use the [Set-AzureRmEventHub](/powershell/module/azurerm.eventhub/Set-AzureRmEventHub?view=azurermps-6.13.0) PowerShell command to update partitions in an event hub.
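For illustration, a minimal sketch of such an update, assuming the installed AzureRM.EventHub module version exposes a `-PartitionCount` parameter; the resource group, namespace, and event hub names are placeholders:

```azurepowershell
# Sketch only: the names are placeholders, and parameter names can vary across
# AzureRM.EventHub module versions. Sets the partition count of the event hub to 12.
Set-AzureRmEventHub -ResourceGroupName MyResourceGroup `
    -Namespace MyNamespace `
    -Name MyEventHub `
    -PartitionCount 12
```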
@@ -59,12 +59,6 @@ Update value of the `partitionCount` property in the Resource Manager template a
}
```
-### .NET SDK
-Use the `PartitionCount` property of the `EventHub` class in the management SDK to set the new partition count for the event hub.
-
-If you are using the [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager?view=azure-dotnet) class of the older Microsoft.ServiceBus.Messaging library, use the [UpdateEventHub](/dotnet/api/microsoft.servicebus.namespacemanager.updateeventhub?view=azure-dotnet#Microsoft_ServiceBus_NamespaceManager_UpdateEventHub_Microsoft_ServiceBus_Messaging_EventHubDescription_) method after specifying the new value for the `PartitionCount` property of the [EventHubDescription](/dotnet/api/microsoft.servicebus.messaging.eventhubdescription?view=azure-dotnet) object.
-
-
### Apache Kafka
Use the `AlterTopics` API (for example, via the **kafka-topics** CLI tool) to increase the partition count. For details, see [Modifying Kafka topics](http://kafka.apache.org/documentation/#basic_ops_modify_topic).
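As a rough sketch, assuming a recent Kafka tools version (older releases used `--zookeeper` instead of `--bootstrap-server`); the namespace, topic name, and SASL configuration file below are placeholders:

```bash
# Sketch only: mynamespace, my-event-hub, and client.properties are placeholders.
# client.properties holds the SASL settings needed to reach the Event Hubs Kafka endpoint.
kafka-topics.sh --alter \
  --bootstrap-server mynamespace.servicebus.windows.net:9093 \
  --command-config client.properties \
  --topic my-event-hub \
  --partitions 12
```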
@@ -76,8 +70,8 @@ When you add a partition to an existing event hub, the event hub client receives
### Sender/producer clients
Event Hubs provides three sender options:
-- **Partition sender** – In this scenario, clients send events directly to a partition. Although partitions are identifiable and events can be sent directly to them, we don't recommend this pattern. Adding partitions doesn't impact this scenario.
-- **Partition key sender** – in this scenario, clients sends the events with a key so that all events belonging to that key end up in the same partition. In this case, service hashes the key and routes to the corresponding partition.
+- **Partition sender** – In this scenario, clients send events directly to a partition. Although partitions are identifiable and events can be sent directly to them, we don't recommend this pattern. Adding partitions doesn't impact this scenario. We recommend that you restart applications so that they can detect newly added partitions.
+- **Partition key sender** – In this scenario, clients send the events with a key so that all events belonging to that key end up in the same partition. The service hashes the key and routes the events to the corresponding partition. A partition count update can change that key-to-partition hashing and cause out-of-order issues. So, if you care about ordering, ensure that your application consumes all events from the existing partitions before you increase the partition count. (A sketch of sending with a partition key follows this list.)
- **Round-robin sender (default)** – In this scenario, the Event Hubs service round robins the events across partitions. Event Hubs service is aware of partition count changes and will send to new partitions within seconds of altering partition count.
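To make the partition key behavior concrete, here is a minimal sketch using the Python `azure-eventhub` package (v5); the connection string, event hub name, and key value are placeholders, and this is only one of several client SDKs you could use:

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholders: supply your own connection string and event hub name.
producer = EventHubProducerClient.from_connection_string(
    conn_str="<NAMESPACE_CONNECTION_STRING>",
    eventhub_name="<EVENT_HUB_NAME>",
)

with producer:
    # Events that share a partition key are hashed to the same partition, so a
    # partition count change can alter the key-to-partition mapping for new events.
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData("temperature reading"))
    producer.send_batch(batch)
```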
### Receiver/consumer clients
@@ -86,13 +80,15 @@ Event Hubs provides direct receivers and an easy consumer library called the [Ev
- **Direct receivers** – The direct receivers listen to specific partitions. Their runtime behavior isn't affected when partitions are scaled out for an event hub.
- **Event processor host** – This client doesn't automatically refresh the entity metadata. So, it wouldn't pick up on partition count increase. Recreating an event processor instance will cause an entity metadata fetch, which in turn will create new blobs for the newly added partitions. Pre-existing blobs won't be affected. Restarting all event processor instances is recommended to ensure that all instances are aware of the newly added partitions, and load-balancing is handled correctly among consumers.
+If you're using the old version of the .NET SDK ([WindowsAzure.ServiceBus](https://www.nuget.org/packages/WindowsAzure.ServiceBus/)), the event processor host removes an existing checkpoint upon restart if the partition count in the checkpoint doesn't match the partition count fetched from the service. This behavior may have an impact on your application.
+
## Apache Kafka clients
This section describes how Apache Kafka clients that use the Kafka endpoint of Azure Event Hubs behave when the partition count is updated for an event hub.
Kafka clients that use Event Hubs with the Apache Kafka protocol behave differently from event hub clients that use AMQP protocol. Kafka clients update their metadata once every `metadata.max.age.ms` milliseconds. You specify this value in the client configurations. The `librdkafka` libraries also use the same configuration. Metadata updates inform the clients of service changes including the partition count increases. For a list of configurations, see [Apache Kafka configurations for Event Hubs](https://github.com/Azure/azure-event-hubs-for-kafka/blob/master/CONFIGURATION.md)
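For example, a client configuration entry along these lines shortens the metadata refresh interval so that clients notice new partitions sooner (the value is illustrative; the Kafka client default is 300000 ms, that is, 5 minutes):

```properties
# Illustrative value: refresh topic metadata every 60 seconds so newly added
# partitions are discovered within about a minute.
metadata.max.age.ms=60000
```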
### Sender/producer clients
-Producers always dictate that send requests contain the partition destination for each set of produced records. So, all produce partitioning is done on client-side with producer’s view of broker's metadata. Once the new partitions are added to the producer’s metadata view, they will be available for producer requests.
+Producers always dictate that send requests contain the partition destination for each set of produced records. So, all produce partitioning is done on the client side with the producer's view of the broker's metadata. Once the new partitions are added to the producer's metadata view, they'll be available for producer requests.
### Consumer/receiver clients
When a consumer group member performs a metadata refresh and picks up the newly created partitions, that member initiates a group rebalance. Consumer metadata then will be refreshed for all group members, and the new partitions will be assigned by the allotted rebalance leader.