articles/cost-management-billing/reservations/fabric-capacity.md (1 addition, 1 deletion)
@@ -98,7 +98,7 @@ If you bought an Azure Synapse Analytics Dedicated SQL pool reservation and you
The new reservation's lifetime commitment should be equal to or greater than the returned reservation's remaining commitment. For example, assume you have a three-year reservation that costs $100 per month. You exchange it after the 18th payment. The new reservation's lifetime commitment should be $1,800 or more (paid monthly or upfront).
-The exchange value of your Azure Synapse Analytics reserved capacity is based on the prorated remaining term and the current price of the reservation. The exchange value is applied as a credit to your Azure account. If the exchange value is less than the cost of the Fabric capacity reservation, you must pay the difference.
+An exchange is processed as a refund and a repurchase – separate transactions are created for the cancellation and the new reservation purchase. The prorated reservation amount is refunded for the reservation that's traded in. You're charged fully for the new purchase. The prorated reservation amount is the daily prorated residual value of the reservation being returned.
After you exchange the reservation, the Fabric capacity reservation is applied to your Fabric capacity automatically. You can view and manage your reservations on the Reservations page in the Azure portal.
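As a rough, hypothetical sketch of the proration arithmetic described above (reusing the $100-per-month example; the class name and day counts are illustrative, and this isn't Azure's actual billing logic):

```java
// Hypothetical illustration of the daily prorated residual value; not Azure billing code.
public class ExchangeRefundEstimate {
    public static void main(String[] args) {
        double monthlyCost = 100.0;                      // three-year reservation at $100 per month
        int termDays = 3 * 365;
        double dailyRate = monthlyCost * 36 / termDays;  // total commitment spread across the term
        int daysElapsed = 18 * 30;                       // exchanged after the 18th monthly payment
        double refund = dailyRate * (termDays - daysElapsed);
        System.out.printf("Approximate prorated refund: $%.2f%n", refund); // roughly $1,824
    }
}
```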
description: This article provides recommended Apache Kafka configurations for clients interacting with Azure Event Hubs for Apache Kafka.
ms.topic: reference
+ms.subservice: kafka
ms.custom: devx-track-extended-java
-ms.date: 03/30/2022
+ms.date: 03/06/2025
---
# Recommended configurations for Apache Kafka clients
@@ -16,16 +17,16 @@ Here are the recommended configurations for using Azure Event Hubs from Apache K
Property | Recommended values | Permitted range | Notes
---|---:|-----:|---
`metadata.max.age.ms` | 180000 (approximate) | < 240000 | Can be lowered to pick up metadata changes sooner.
-`connections.max.idle.ms` | 180000 | < 240000 | Azure closes inbound Transmission Control Protocol (TCP) idle > 240,000 ms, which can result in sending on dead connections (shown as expired batches because of send timeout).
+`connections.max.idle.ms` | 180000 | < 240000 | Azure closes inbound Transmission Control Protocol (TCP) connections that are idle for more than 240,000 ms, which can result in sending on dead connections (shown as expired batches because of send time-out).
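A minimal sketch of how these shared settings might be applied from a Java client. The helper name is hypothetical, the namespace is a placeholder, and the SASL/SSL settings needed to authenticate to Event Hubs are omitted:

```java
import java.util.Properties;

public class SharedClientSettings {
    // Hypothetical helper: applies the recommended shared settings from the table above.
    public static Properties baseProperties(String namespace) {
        Properties props = new Properties();
        props.put("bootstrap.servers", namespace + ".servicebus.windows.net:9093");
        props.put("metadata.max.age.ms", "180000");     // < 240000; lower to pick up metadata changes sooner
        props.put("connections.max.idle.ms", "180000"); // stay below Azure's 240,000 ms idle TCP cutoff
        return props;
    }
}
```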
### Producer configurations only
Producer configs can be found [here](https://kafka.apache.org/documentation/#producerconfigs).
|Property | Recommended Values | Permitted Range | Notes|
|---|---:|---:|---|
|`max.request.size`| 1000000 | < 1046528 | The service closes connections if requests larger than 1,046,528 bytes are sent. *This value **must** be changed; the default causes issues in high-throughput produce scenarios.*|
-|`request.timeout.ms`| 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>|
+|`request.timeout.ms`| 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. *While requests with lower time-out values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer time-outs, which then cause rebalances (which then cause more time-outs, which cause more rebalancing, and so on).</p>|
|`metadata.max.idle.ms`| 180000 | > 5000 | Controls how long the producer caches metadata for a topic that's idle. If the elapsed time since a topic was last produced exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.|
|`linger.ms`| > 0 || For high-throughput scenarios, set the linger value to the highest value you can tolerate to take advantage of batching.|
|`delivery.timeout.ms`||| Set according to the formula (`request.timeout.ms` + `linger.ms`) * `retries`.|
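A minimal producer sketch applying the values from the table above, assuming the Java Apache Kafka client. The `buildProducer` name, the `base` properties parameter, and the linger value of 100 ms are illustrative choices, not prescriptions from this article:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class RecommendedProducer {
    // Hypothetical factory applying the recommended producer values from the table above.
    public static KafkaProducer<String, String> buildProducer(Properties base) {
        Properties props = new Properties();
        props.putAll(base); // bootstrap.servers plus SASL/SSL settings assumed to be set elsewhere
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1000000);  // must stay < 1046528
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);  // permitted range > 20000
        props.put(ProducerConfig.METADATA_MAX_IDLE_CONFIG, 180000);  // permitted range > 5000
        props.put(ProducerConfig.LINGER_MS_CONFIG, 100);             // > 0; tune for your throughput
        // delivery.timeout.ms follows the formula (request.timeout.ms + linger.ms) * retries
        return new KafkaProducer<>(props);
    }
}
```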
@@ -37,8 +38,8 @@ Consumer configs can be found [here](https://kafka.apache.org/documentation/#con
Property | Recommended Values | Permitted Range | Notes
---|---:|-----:|---
`heartbeat.interval.ms` | 3000 | | 3000 is the default value and shouldn't be changed.
-`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
-`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so it shouldn't be set too low. Must be greater than session.timeout.ms.
+`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000; increase if you see frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer time-outs, which then cause rebalances (which then cause more time-outs, which cause more rebalancing, and so on).</p>
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance time-out, so it shouldn't be set too low. Must be greater than session.timeout.ms.
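A matching consumer sketch under the same Java-client assumption; `recommendedConsumerProps` and the `base` parameter are illustrative names:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RecommendedConsumer {
    // Hypothetical helper applying the recommended consumer values from the table above.
    public static Properties recommendedConsumerProps(Properties base) {
        Properties props = new Properties();
        props.putAll(base); // bootstrap.servers plus SASL/SSL settings assumed to be set elsewhere
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);  // default; shouldn't be changed
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);    // raise if rebalancing is frequent
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000); // must exceed session.timeout.ms
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);    // at least the recommended value
        return props;
    }
}
```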
## librdkafka configuration properties
The main `librdkafka` configuration file ([link](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)) contains extended descriptions for the properties described in the following sections.
|Property | Recommended Values | Permitted Range | Notes|
|---|---:|-----:|---|
|`retries`| > 0 || Default is 2147483647.|
-|`request.timeout.ms`| 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*|
+|`request.timeout.ms`| 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower time-out values are accepted, client behavior isn't guaranteed.*|
|`partitioner`|`consistent_random`| See librdkafka documentation |`consistent_random` is default and best. Empty and null keys are handled well in most cases.|
|`compression.codec`|`none, gzip`|| Only gzip compression is currently supported.|
`heartbeat.interval.ms` | 3000 || 3000 is the default value and shouldn't be changed.
`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.
-`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so it shouldn't be set too low. Must be greater than session.timeout.ms.
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance time-out, so it shouldn't be set too low. Must be greater than session.timeout.ms.
## Further notes
@@ -74,7 +75,7 @@ Check the following table of common configuration-related error scenarios.
Symptoms | Problem | Solution
----|---|-----
-Offset commit failures because of rebalancing | Your consumer is waiting too long in between calls to poll() and the service is kicking the consumer out of the group. | You have several options: <ul><li>Increase poll processing timeout (`max.poll.interval.ms`)</li><li>Decrease message batch size to speed up processing</li><li>Improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
+Offset commit failures because of rebalancing | Your consumer is waiting too long in between calls to poll() and the service is kicking the consumer out of the group. | You have several options: <ul><li>Increase the poll processing time-out (`max.poll.interval.ms`)</li><li>Decrease message batch size to speed up processing</li><li>Improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
Network exceptions at high produce throughput | If you're using the Java client with the default max.request.size, your requests might be too large. | See the Java configurations mentioned earlier.
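To make the first mitigation concrete, here's a minimal sketch of a Java consumer loop that keeps `poll()` frequent by handing work to a thread pool. The pool size, the `process` helper, and the poll interval are hypothetical:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NonBlockingPollLoop {
    public static void run(KafkaConsumer<String, String> consumer) {
        ExecutorService workers = Executors.newFixedThreadPool(4); // hypothetical pool size
        while (true) {
            // Short polls keep the consumer inside max.poll.interval.ms even when processing is slow.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                workers.submit(() -> process(record)); // don't block the poll thread
            }
            // Note: with offloaded processing, manage offset commits carefully,
            // for example by committing only after the submitted work completes.
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Application-specific work goes here.
    }
}
```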
description: This article answers frequent questions asked about Azure Event Hubs' support for Apache Kafka clients not covered elsewhere.
ms.topic: faq
+ms.subservice: kafka
-ms.date: 10/14/2022
+ms.date: 03/06/2025
title: Frequently asked questions - Event Hubs for Apache Kafka
summary: This article provides answers to some of the frequently asked questions on migrating to Event Hubs for Apache Kafka.
@@ -33,8 +34,8 @@ sections:
- They're autocreated. Kafka groups can be managed via the Kafka consumer group APIs.
- They can store offsets in the Event Hubs service.
- They're used as keys in what is effectively an offset key-value store. For a unique pair of `group.id` and `topic-partition`, we store an offset in Azure Storage (3x replication). Event Hubs users don't incur extra storage costs from storing Kafka offsets. Offsets are manipulable via the Kafka consumer group APIs, but the offset storage *accounts* aren't directly visible or manipulable for Event Hubs users.
-- They span a namespace. Using the same Kafka group name for multiple applications on multiple topics means that all applications and their Kafka clients will be rebalanced whenever only a single application needs rebalancing. Choose your group names wisely.
-- They fully distinct from Event Hubs consumer groups. You **don't** need to use '$Default', nor do you need to worry about Kafka clients interfering with AMQP workloads.
+- They span a namespace. Using the same Kafka group name for multiple applications on multiple topics means that all applications and their Kafka clients are rebalanced whenever only a single application needs rebalancing. Choose your group names wisely.
+- They're fully distinct from Event Hubs consumer groups. You **don't** need to use `$Default`, nor do you need to worry about Kafka clients interfering with AMQP workloads.
- They aren't viewable in the Azure portal. Consumer group info is accessible via Kafka APIs.
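Since consumer group info is reachable only through Kafka APIs, a minimal sketch of listing groups with the Java `AdminClient` might look like this. The namespace is a placeholder, and the SASL/SSL settings required to authenticate to Event Hubs are omitted:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class ListKafkaConsumerGroups {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<namespace>.servicebus.windows.net:9093");
        // SASL_SSL authentication settings for Event Hubs go here (omitted).
        try (AdminClient admin = AdminClient.create(props)) {
            // Print the group IDs visible in the namespace via the Kafka admin API.
            admin.listConsumerGroups().all().get()
                 .forEach(group -> System.out.println(group.groupId()));
        }
    }
}
```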