Commit da78a9a

Merge pull request #214520 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)

2 parents: bb46fac + 9695dca

File tree

5 files changed: +38 −34 lines

articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -64,7 +64,7 @@ There are some important best practices to follow for optimal performance of NFS
 - Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.
 - For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
 - Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.
-- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- Create multiple datastores of 4-TB size for better performance. The default limit is 64 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
 - Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](../availability-zones/az-overview.md#availability-zones).

 ## Attach an Azure NetApp Files volume to your private cloud
````
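The sizing guidance in this diff (4-TB datastores, a default limit of 64, a maximum of 256 after a support ticket) can be sanity-checked with a small script. This is an illustrative sketch, not part of any Azure tooling; the limits are taken from the corrected doc text above.

```python
# Illustrative sketch: plan 4-TB Azure NetApp Files datastores for a given
# capacity and check the result against the limits quoted in the doc above.
import math

DATASTORE_SIZE_TB = 4   # recommended datastore size from the guidance
DEFAULT_LIMIT = 64      # default datastore limit (corrected value in this commit)
MAX_LIMIT = 256         # maximum reachable via a support ticket

def plan_datastores(capacity_tb):
    """Return the number of 4-TB datastores needed and whether a support
    ticket is required to raise the default limit."""
    count = math.ceil(capacity_tb / DATASTORE_SIZE_TB)
    if count > MAX_LIMIT:
        raise ValueError(f"{count} datastores exceeds the {MAX_LIMIT} maximum")
    return {"datastores": count, "support_ticket_needed": count > DEFAULT_LIMIT}

print(plan_datastores(100))  # 25 datastores, within the default limit
print(plan_datastores(300))  # 75 datastores, needs a support ticket
```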

articles/hdinsight/hdinsight-managed-identities.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -40,7 +40,7 @@ Managed identities are used in Azure HDInsight in multiple scenarios. See the re
 * [Enterprise Security Package](domain-joined/apache-domain-joined-configure-using-azure-adds.md#create-and-authorize-a-managed-identity)
 * [Customer-managed key disk encryption](disk-encryption.md)

-HDInsight will automatically renew the certificates for the managed identities you use for these scenarios. However, there is a limitation when multiple different managed identities are used for long running clusters, the certificate renewal may not work as expected for all of the managed identities. Due to this limitation, if you are planning to use long running clusters (e.g. more than 60 days), we recommend to use the same managed identity for all of the above scenarios.
+HDInsight will automatically renew the certificates for the managed identities you use for these scenarios. However, there is a limitation when multiple different managed identities are used for long running clusters, the certificate renewal may not work as expected for all of the managed identities. Due to this limitation, we recommend to use the same managed identity for all of the above scenarios.

 If you have already created a long running cluster with multiple different managed identities and are running into one of these issues:
 * In ESP clusters, cluster services starts failing or scale up and other operations start failing with authentications errors.
````
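The limitation described in this diff (certificate renewal may misbehave on long running clusters that use multiple managed identities; the removed line quantified "long running" as more than 60 days) can be expressed as a simple pre-deployment check. This is an illustrative sketch only; nothing here is an HDInsight API.

```python
# Illustrative check: flag long running clusters (> 60 days, per the removed
# doc line) that use more than one managed identity.
from datetime import date

LONG_RUNNING_DAYS = 60  # threshold quoted in the original doc text

def renewal_risk(created, today, identity_ids):
    """True if certificate renewal may not work as expected."""
    long_running = (today - created).days > LONG_RUNNING_DAYS
    return long_running and len(identity_ids) > 1

print(renewal_risk(date(2022, 1, 1), date(2022, 6, 1), {"mi-esp", "mi-cmk"}))  # True
print(renewal_risk(date(2022, 1, 1), date(2022, 6, 1), {"mi-shared"}))         # False
```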

articles/hdinsight/kafka/kafka-mirrormaker-2-0-guide.md

Lines changed: 31 additions & 31 deletions (whitespace-only changes; the changed lines' text is identical)
````diff
@@ -42,8 +42,8 @@ The summary of the broker setup process is as follows:

 **MirrorCheckpointConnector:**

-1. Consumes offset-syncsr.
-1. Emits checkpoints to enable failover points.
+1. Consumes offset-syncsr.
+1. Emits checkpoints to enable failover points.

 **MirrorHeartBeatConnector:**
````
````diff
@@ -53,47 +53,48 @@ The summary of the broker setup process is as follows:

 1. Connect-mirror-maker.sh script bundled with the Kafka library implements a distributed MM2 cluster, which manages the Connect workers internally based on a config file. Internally MirrorMaker driver creates and handles pairs of each connector – MirrorSourceConnector, MirrorSinkConnector, MirrorCheckpoint connector and MirrorHeartbeatConnector.
 1. Start MirrorMaker 2.0.
-
-```
-./bin/connect-mirror-maker.sh ./config/mirror-maker.properties
-```
+
+   ```
+   ./bin/connect-mirror-maker.sh ./config/mirror-maker.properties
+   ```

 > [!NOTE]
 > For Kerberos enabled clusters, the JAAS configuration must be exported to the KAFKA_OPTS or must be specified in the MM2 config file.

 ```
 export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>"
 ```
+
 ### Sample MirrorMaker 2.0 Configuration file

 ```
-# specify any number of cluster aliases
-clusters = source, destination
+# specify any number of cluster aliases
+clusters = source, destination

-# connection information for each cluster
-# This is a comma separated host:port pairs for each cluster
-# for example. "A_host1:9092, A_host2:9092, A_host3:9092" and you can see the exact host name on Ambari > Hosts
-source.bootstrap.servers = wn0-src-kafka.bx.internal.cloudapp.net:9092,wn1-src-kafka.bx.internal.cloudapp.net:9092,wn2-src-kafka.bx.internal.cloudapp.net:9092
-destination.bootstrap.servers = wn0-dest-kafka.bx.internal.cloudapp.net:9092,wn1-dest-kafka.bx.internal.cloudapp.net:9092,wn2-dest-kafka.bx.internal.cloudapp.net:9092
+# connection information for each cluster
+# This is a comma separated host:port pairs for each cluster
+# for example. "A_host1:9092, A_host2:9092, A_host3:9092" and you can see the exact host name on Ambari > Hosts
+source.bootstrap.servers = wn0-src-kafka.bx.internal.cloudapp.net:9092,wn1-src-kafka.bx.internal.cloudapp.net:9092,wn2-src-kafka.bx.internal.cloudapp.net:9092
+destination.bootstrap.servers = wn0-dest-kafka.bx.internal.cloudapp.net:9092,wn1-dest-kafka.bx.internal.cloudapp.net:9092,wn2-dest-kafka.bx.internal.cloudapp.net:9092

-# enable and configure individual replication flows
-source->destination.enabled = true
+# enable and configure individual replication flows
+source->destination.enabled = true

-# regex which defines which topics gets replicated. For eg "foo-.*"
-source->destination.topics = toa.evehicles-latest-dev
-groups=.*
-topics.blacklist="*.internal,__.*"
+# regex which defines which topics gets replicated. For eg "foo-.*"
+source->destination.topics = toa.evehicles-latest-dev
+groups=.*
+topics.blacklist="*.internal,__.*"

-# Setting replication factor of newly created remote topics
-replication.factor=3
+# Setting replication factor of newly created remote topics
+replication.factor=3

-checkpoints.topic.replication.factor=1
-heartbeats.topic.replication.factor=1
-offset-syncs.topic.replication.factor=1
+checkpoints.topic.replication.factor=1
+heartbeats.topic.replication.factor=1
+offset-syncs.topic.replication.factor=1

-offset.storage.replication.factor=1
-status.storage.replication.factor=1
-config.storage.replication.factor=1
+offset.storage.replication.factor=1
+status.storage.replication.factor=1
+config.storage.replication.factor=1
 ```
````
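The `topics`, `groups`, and `topics.blacklist` entries in the sample configuration above are regular expressions (the diffed comment gives `foo-.*` as an example). As a rough illustration of how such patterns select topics, here is a sketch; it is not MirrorMaker's actual matching code, and the patterns below are made-up examples.

```python
# Sketch of regex-based topic selection in the spirit of MM2's
# topics / topics.blacklist settings (not the actual MirrorMaker code).
import re

def should_replicate(topic, allow_pattern, deny_patterns):
    """A topic replicates if it matches the allow pattern and no deny pattern."""
    if any(re.fullmatch(p, topic) for p in deny_patterns):
        return False
    return re.fullmatch(allow_pattern, topic) is not None

# Deny patterns analogous to "*.internal,__.*" in the sample config.
deny = [r".*\.internal", r"__.*"]
print(should_replicate("foo-orders", r"foo-.*", deny))          # True
print(should_replicate("__consumer_offsets", r"foo-.*", deny))  # False
print(should_replicate("foo.internal", r"foo-.*", deny))        # False
```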
### SSL configuration
````diff
@@ -109,7 +110,6 @@ destination.ssl.keystore.location=/path/to/kafka.server.keystore.jks
 destination.sasl.mechanism=GSSAPI
 ```

-
````
### Global configurations

|Property |Default value |Description |
````diff
@@ -167,9 +167,9 @@ destination.sasl.mechanism=GSSAPI
 Custom Replication Policy can be created by implementing the interface below.

 ```
-/** Defines which topics are "remote topics", e.g. "us-west.topic1". */
-public interface ReplicationPolicy {
-
+/** Defines which topics are "remote topics", e.g. "us-west.topic1". */
+public interface ReplicationPolicy {
+
     /** How to rename remote topics; generally should be like us-west.topic1. */
     String formatRemoteTopic(String sourceClusterAlias, String topic);
````
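MirrorMaker 2.0's default replication policy renames a replicated topic by prefixing the source cluster alias, as the interface comment above suggests (`us-west.topic1`). A minimal Python sketch of that renaming follows; the real implementation is the Java `DefaultReplicationPolicy`, and the separator shown is its documented default.

```python
# Sketch of MM2's default remote-topic naming: prefix the source cluster
# alias, so "topic1" replicated from "us-west" becomes "us-west.topic1".
SEPARATOR = "."  # default value of replication.policy.separator

def format_remote_topic(source_cluster_alias, topic):
    return f"{source_cluster_alias}{SEPARATOR}{topic}"

def topic_source(remote_topic):
    """Recover the source alias from a remote topic name, or None."""
    alias, sep, rest = remote_topic.partition(SEPARATOR)
    return alias if sep and rest else None

print(format_remote_topic("source", "toa.evehicles-latest-dev"))
print(topic_source("us-west.topic1"))  # -> us-west
```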

articles/service-bus-messaging/enable-partitions-premium.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -23,7 +23,7 @@ Service Bus partitions enable queues and topics, or messaging entities, to be pa
 > Some limitations may be encountered during public preview, which will be resolved before going into GA.
 > - It is currently not possible to use JMS on partitioned entities.
 > - Metrics are currently only available on an aggregated namespace level, not for individual partitions.
-> - This feature is rolling out during Ignite 2022, and will initially be available in East US and South Central US, with more regions following later.
+> - This feature is rolling out during Ignite 2022, and will initially be available in East US and North Europe, with more regions following later.

 ## Use Azure portal
 When creating a **namespace** in the Azure portal, set the **Partitioning** to **Enabled** and choose the number of partitions, as shown in the following image.
````

articles/service-bus-messaging/service-bus-dead-letter-queues.md

Lines changed: 4 additions & 0 deletions

````diff
@@ -53,6 +53,10 @@ If you enable dead-lettering on filter evaluation exceptions, any errors that oc
 ## Application-level dead-lettering
 In addition to the system-provided dead-lettering features, applications can use the DLQ to explicitly reject unacceptable messages. They can include messages that can't be properly processed because of any sort of system issue, messages that hold malformed payloads, or messages that fail authentication when some message-level security scheme is used.

+This can be done by calling [QueueClient.DeadLetterAsync(Guid lockToken, string deadLetterReason, string deadLetterErrorDescription) method](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletterasync#microsoft-servicebus-messaging-queueclient-deadletterasync(system-guid-system-string-system-string)).
+
+It is recommended to include the type of the exception in the DeadLetterReason and the StackTrace of the exception in the DeadLetterDescription as this makes it easier to troubleshoot the cause of the problem resulting in messages being dead-lettered. Be aware that this may result in some messages exceeding [the 256KB quota limit for the Standard Tier of Azure Service Bus](/azure/service-bus-messaging/service-bus-quotas), further indicating that the Premium Tier is what should be used for production environments.
+
 ## Dead-lettering in ForwardTo or SendVia scenarios
 Messages will be sent to the transfer dead-letter queue under the following conditions:
````
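The convention the added text recommends, the exception type as the dead-letter reason and its stack trace as the description, can be sketched as follows. The `FakeReceiver` below is a hypothetical stand-in used only to make the sketch self-contained; the real call is the .NET `QueueClient.DeadLetterAsync` method linked in the diff, and only the reason/description convention comes from the text above.

```python
# Sketch of the convention described above: on a processing failure,
# dead-letter with the exception type as the reason and the stack trace
# as the description. FakeReceiver is a stand-in, not an Azure SDK class.
import traceback

class FakeReceiver:
    def __init__(self):
        self.dead_lettered = []
    def dead_letter_message(self, message, reason, error_description):
        self.dead_lettered.append((message, reason, error_description))

def process(message, receiver):
    try:
        raise ValueError("malformed payload")  # stand-in for real processing
    except Exception as exc:
        receiver.dead_letter_message(
            message,
            reason=type(exc).__name__,             # e.g. "ValueError"
            error_description=traceback.format_exc(),
        )

receiver = FakeReceiver()
process("msg-1", receiver)
print(receiver.dead_lettered[0][1])  # -> ValueError
```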
