
Commit 4312519

Merge branch 'MicrosoftDocs:main' into nzthiagopatch1
2 parents 26f9d99 + cac9dfd commit 4312519

30 files changed: +156 −106 lines
3 binary files not shown.

articles/cost-management-billing/reservations/fabric-capacity.md

Lines changed: 1 addition & 1 deletion

@@ -98,7 +98,7 @@ If you bought an Azure Synapse Analytics Dedicated SQL pool reservation and you
 The new reservation's lifetime commitment should equal or be greater than the returned reservation's remaining commitment. For example, assume you have a three-year reservation that costs $100 per month. You exchange it after the 18th payment. The new reservation's lifetime commitment should be $1,800 or more (paid monthly or upfront).

-The exchange value of your Azure Synapse Analytics reserved capacity is based on the prorated remaining term and the current price of the reservation. The exchange value is applied as a credit to your Azure account. If the exchange value is less than the cost of the Fabric capacity reservation, you must pay the difference.
+An exchange is processed as a refund and a repurchase: different transactions are created for the cancellation and the new reservation purchase. The prorated reservation amount is refunded for the reservation that's traded in. You're charged fully for the new purchase. The prorated reservation amount is the daily prorated residual value of the reservation being returned.

 After you exchange the reservation, the Fabric capacity reservation is applied to your Fabric capacity automatically. You can view and manage your reservations on the Reservations page in the Azure portal.
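The daily proration described in the added paragraph can be sketched as follows. This is a simplified illustration under stated assumptions: the `prorated_refund` helper is hypothetical, the term is treated as a fixed number of days, and the service's actual calculation also reflects current pricing.

```python
def prorated_refund(total_commitment: float, term_days: int, days_used: int) -> float:
    """Daily prorated residual value of a reservation being returned.

    Simplified model: the total commitment is spread evenly across the
    term's days, and the refund is the value of the unused days.
    """
    daily_rate = total_commitment / term_days
    return daily_rate * (term_days - days_used)

# The article's example: a three-year reservation at $100/month
# ($3,600 total), exchanged halfway through (~548 of ~1,096 days).
refund = prorated_refund(3600, 1096, 548)  # ≈ 1800
```

Consistent with the article's example, the new reservation's lifetime commitment would then need to be at least that residual $1,800.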

articles/event-hubs/apache-kafka-configurations.md

Lines changed: 10 additions & 9 deletions

@@ -2,8 +2,9 @@
 title: Recommended configurations for Apache Kafka clients - Azure Event Hubs
 description: This article provides recommended Apache Kafka configurations for clients interacting with Azure Event Hubs for Apache Kafka.
 ms.topic: reference
+ms.subservice: kafka
 ms.custom: devx-track-extended-java
-ms.date: 03/30/2022
+ms.date: 03/06/2025
 ---

 # Recommended configurations for Apache Kafka clients
@@ -16,16 +17,16 @@ Here are the recommended configurations for using Azure Event Hubs from Apache K
 Property | Recommended values | Permitted range | Notes
 ---|---:|-----:|---
 `metadata.max.age.ms` | 180000 (approximate) | < 240000 | Can be lowered to pick up metadata changes sooner.
-`connections.max.idle.ms` | 180000 | < 240000 | Azure closes inbound Transmission Control Protocol (TCP) idle > 240,000 ms, which can result in sending on dead connections (shown as expired batches because of send timeout).
+`connections.max.idle.ms` | 180000 | < 240000 | Azure closes inbound Transmission Control Protocol (TCP) connections idle > 240,000 ms, which can result in sending on dead connections (shown as expired batches because of send time-out).

 ### Producer configurations only
 Producer configs can be found [here](https://kafka.apache.org/documentation/#producerconfigs).

 |Property | Recommended Values | Permitted Range | Notes|
 |---|---:|---:|---|
 |`max.request.size` | 1000000 | < 1046528 | The service closes connections if requests larger than 1,046,528 bytes are sent. *This value **must** be changed and causes issues in high-throughput produce scenarios.*|
-|`retries` | > 0 | | Might require increasing delivery.timeout.ms value, see documentation.|
-|`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>|
+|`retries` | > 0 | | Might require increasing the `delivery.timeout.ms` value; see documentation.|
+|`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. *While requests with lower time-out values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer time-outs, which then cause rebalances (which then cause more time-outs, which cause more rebalancing, and so on).</p>|
 |`metadata.max.idle.ms` | 180000 | > 5000 | Controls how long the producer caches metadata for a topic that's idle. If the elapsed time since a topic was last produced exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.|
 |`linger.ms` | > 0 | | For high throughput scenarios, linger value should be equal to the highest tolerable value to take advantage of batching.|
 |`delivery.timeout.ms` | | | Set according to the formula (`request.timeout.ms` + `linger.ms`) * `retries`.|
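The producer recommendations above compose into a config map like the following sketch. The values for `retries` and `linger.ms` are illustrative choices within the recommended ranges, not service-mandated numbers, and connection/authentication settings are omitted.

```python
# Illustrative producer configuration following the recommended values.
request_timeout_ms = 60000
linger_ms = 100   # > 0 so sends are batched; tune to the highest tolerable value
retries = 3       # > 0; illustrative

producer_config = {
    "metadata.max.age.ms": 180000,
    "connections.max.idle.ms": 180000,   # stay below Azure's 240,000 ms idle close
    "max.request.size": 1000000,         # service rejects requests > 1,046,528 bytes
    "retries": retries,
    "request.timeout.ms": request_timeout_ms,
    "metadata.max.idle.ms": 180000,
    "linger.ms": linger_ms,
    # Formula from the table: (request.timeout.ms + linger.ms) * retries
    "delivery.timeout.ms": (request_timeout_ms + linger_ms) * retries,
}
```

With these illustrative inputs, `delivery.timeout.ms` works out to (60000 + 100) * 3 = 180300 ms.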
@@ -37,8 +38,8 @@ Consumer configs can be found [here](https://kafka.apache.org/documentation/#con
 Property | Recommended Values | Permitted Range | Notes
 ---|---:|-----:|---
 `heartbeat.interval.ms` | 3000 | | 3000 is the default value and shouldn't be changed.
-`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
-`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so it shouldn't be set too low. Must be greater than session.timeout.ms.
+`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000; increase if you see frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer time-outs, which then cause rebalances (which then cause more time-outs, which cause more rebalancing, and so on).</p>
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for the rebalance time-out, so it shouldn't be set too low. Must be greater than session.timeout.ms.
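The consumer table's constraints can be captured as a config map whose values are checked against each other. A minimal sketch using the recommended starting points (this is illustrative, not a complete client configuration):

```python
# Illustrative consumer configuration following the recommended values.
consumer_config = {
    "heartbeat.interval.ms": 3000,    # default; shouldn't be changed
    "session.timeout.ms": 30000,      # start here; raise if rebalancing is frequent
    "max.poll.interval.ms": 300000,   # must stay greater than session.timeout.ms
    "request.timeout.ms": 60000,      # keep >= 60000 to avoid time-out/rebalance loops
}

# The relationships the table warns about, made explicit:
assert consumer_config["heartbeat.interval.ms"] < consumer_config["session.timeout.ms"]
assert consumer_config["max.poll.interval.ms"] > consumer_config["session.timeout.ms"]
```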

 ## librdkafka configuration properties
 The main `librdkafka` configuration file ([link](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)) contains extended descriptions for the properties described in the following sections.
@@ -55,7 +56,7 @@ Property | Recommended Values | Permitted Range | Notes
 |Property | Recommended Values | Permitted Range | Notes|
 |---|---:|-----:|---|
 |`retries` | > 0 | | Default is 2147483647.|
-|`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*|
+|`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs internally defaults to a minimum of 20,000 ms. The `librdkafka` default value is 5000, which can be problematic. *While requests with lower time-out values are accepted, client behavior isn't guaranteed.*|
 |`partitioner` | `consistent_random` | See librdkafka documentation | `consistent_random` is default and best. Empty and null keys are handled ideally for most cases.|
 |`compression.codec` | `none, gzip` || Only gzip compression is currently supported.|
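Because librdkafka takes flat string-keyed properties, the producer rows above translate directly into a config dict of the kind passed to librdkafka-based clients (for example `confluent_kafka.Producer`). A sketch with a placeholder bootstrap address and authentication omitted:

```python
# Illustrative librdkafka producer configuration; the bootstrap address is a
# placeholder and SASL/SSL settings are omitted.
librdkafka_producer_conf = {
    "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",  # placeholder
    "retries": 3,                        # > 0; librdkafka's default is 2147483647
    "request.timeout.ms": 60000,         # the librdkafka default of 5000 is too low
    "partitioner": "consistent_random",  # default and best
    "compression.codec": "gzip",         # only gzip is supported by the service
}

# Event Hubs internally enforces a 20,000 ms floor; stay well above it.
assert librdkafka_producer_conf["request.timeout.ms"] > 20000
```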

@@ -65,7 +66,7 @@ Property | Recommended Values | Permitted Range | Notes
 ---|---:|-----:|---
 `heartbeat.interval.ms` | 3000 || 3000 is the default value and shouldn't be changed.
 `session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.
-`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so it shouldn't be set too low. Must be greater than session.timeout.ms.
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for the rebalance time-out, so it shouldn't be set too low. Must be greater than session.timeout.ms.

 ## Further notes
@@ -74,7 +75,7 @@ Check the following table of common configuration-related error scenarios.

 Symptoms | Problem | Solution
 ----|---|-----
-Offset commit failures because of rebalancing | Your consumer is waiting too long in between calls to poll() and the service is kicking the consumer out of the group. | You have several options: <ul><li>Increase poll processing timeout (`max.poll.interval.ms`)</li><li>Decrease message batch size to speed up processing</li><li>Improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
+Offset commit failures because of rebalancing | Your consumer is waiting too long between calls to poll(), and the service is kicking the consumer out of the group. | You have several options: <ul><li>Increase the poll processing time-out (`max.poll.interval.ms`)</li><li>Decrease the message batch size to speed up processing</li><li>Improve processing parallelization to avoid blocking consumer.poll()</li></ul> Applying some combination of the three is likely wisest.
 Network exceptions at high produce throughput | If you're using Java client + default max.request.size, your requests might be too large. | See Java configs mentioned earlier.
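The "improve processing parallelization" option amounts to not letting per-record work block the thread that calls poll(). A hypothetical sketch of the batch-handling step only (the Kafka poll loop itself is omitted; `process` and `handle_batch` are illustrative stand-ins, not API names):

```python
from concurrent.futures import ThreadPoolExecutor

def process(record: str) -> str:
    # Placeholder for per-record work (parsing, I/O, writes, ...).
    return record.upper()

def handle_batch(records: list[str], pool: ThreadPoolExecutor) -> list[str]:
    # Fan the batch out to worker threads so the consumer thread returns to
    # poll() quickly, staying within max.poll.interval.ms and avoiding
    # being kicked out of the group.
    return list(pool.map(process, records))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = handle_batch(["a", "b", "c"], pool)
```

Shrinking the batch (for example via max.poll.records in the Java client) trades per-poll throughput for more frequent poll() calls, which achieves the same goal.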

## Next steps

articles/event-hubs/apache-kafka-developer-guide.md

Lines changed: 2 additions & 1 deletion

@@ -1,7 +1,8 @@
 ---
 title: Apache Kafka developer guide for Event Hubs
 description: This article provides links to articles that describe how to integrate your Kafka applications with Azure Event Hubs.
-ms.date: 12/18/2024
+ms.date: 03/06/2025
+ms.subservice: kafka
 ms.topic: article
 ---
articles/event-hubs/apache-kafka-frequently-asked-questions.yml

Lines changed: 4 additions & 3 deletions

@@ -3,8 +3,9 @@ metadata:
 title: Frequently asked questions - Azure Event Hubs for Apache Kafka
 description: This article answers frequent questions asked about Azure Event Hubs' support for Apache Kafka clients not covered elsewhere.
 ms.topic: faq
+ms.subservice: kafka

-ms.date: 10/14/2022
+ms.date: 03/06/2025
 title: Frequently asked questions - Event Hubs for Apache Kafka
 summary: This article provides answers to some of the frequently asked questions on migrating to Event Hubs for Apache Kafka.

@@ -33,8 +34,8 @@ sections:
 - They're autocreated. Kafka groups can be managed via the Kafka consumer group APIs.
 - They can store offsets in the Event Hubs service.
 - They're used as keys in what is effectively an offset key-value store. For a unique pair of `group.id` and `topic-partition`, we store an offset in Azure Storage (3x replication). Event Hubs users don't incur extra storage costs from storing Kafka offsets. Offsets are manipulable via the Kafka consumer group APIs, but the offset storage *accounts* aren't directly visible or manipulable for Event Hubs users.
-- They span a namespace. Using the same Kafka group name for multiple applications on multiple topics means that all applications and their Kafka clients will be rebalanced whenever only a single application needs rebalancing. Choose your group names wisely.
-- They fully distinct from Event Hubs consumer groups. You **don't** need to use '$Default', nor do you need to worry about Kafka clients interfering with AMQP workloads.
+- They span a namespace. Using the same Kafka group name for multiple applications on multiple topics means that all applications and their Kafka clients are rebalanced whenever only a single application needs rebalancing. Choose your group names wisely.
+- They're fully distinct from Event Hubs consumer groups. You **don't** need to use `$Default`, nor do you need to worry about Kafka clients interfering with AMQP workloads.
 - They aren't viewable in the Azure portal. Consumer group info is accessible via Kafka APIs.

 - question: |

articles/event-hubs/apache-kafka-migration-guide.md

Lines changed: 2 additions & 1 deletion

@@ -2,7 +2,8 @@
 title: Migrate to Azure Event Hubs for Apache Kafka
 description: This article explains how to migrate clients from Apache Kafka to Azure Event Hubs.
 ms.topic: article
-ms.date: 12/18/2024
+ms.subservice: kafka
+ms.date: 03/06/2025
 ---

 # Migrate to Azure Event Hubs for Apache Kafka Ecosystems
