
Commit 39f00d3

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into yelevin/strong-identifier-examples
2 parents: 23d3f3b + 37136cb

23 files changed: +233 -47 lines

articles/backup/azure-kubernetes-service-backup-troubleshoot.md

Lines changed: 35 additions & 0 deletions
@@ -129,6 +129,41 @@ This error appears due to absence of these FQDN rules because of which configura

 6. Delete and reinstall Backup Extension to initiate backup.

+### Scenario 4
+
+**Error message**:
+
+```Error
+"message": "Error: [ InnerError: [Helm installation failed : Unable to create/update Kubernetes resources for the extension : Recommendation Please check that there are no policies blocking the resource creation/update for the extension : InnerError [release azure-aks-backup failed, and has been uninstalled due to atomic being set: failed pre-install: job failed: BackoffLimitExceeded]]] occurred while doing the operation : [Create] on the config, For general troubleshooting visit: https://aka.ms/k8s-extensions-TSG, For more application specific troubleshooting visit: Facing trouble? Common errors and potential fixes are detailed in the Kubernetes Backup Troubleshooting Guide, available at https://www.aka.ms/aksclusterbackup",
+```
+
+The upgrade CRDs pre-install job is failing in the cluster.
+
+**Cause**: Pods are unable to communicate with the Kubernetes API server.
+
+**Debug**:
+
+1. Check for any events in the cluster related to pod spawning issues.
+
+   ```azurecli-interactive
+   kubectl events -n dataprotection-microsoft
+   ```
+
+2. Check the pods for the data protection CRD upgrade job.
+
+   ```azurecli-interactive
+   kubectl get pods -A | grep "dataprotection-microsoft-kubernetes-agent-upgrade-crds"
+   ```
+
+3. Check the pod logs.
+
+   ```azurecli-interactive
+   kubectl logs -f --all-containers=true --timestamps=true -n dataprotection-microsoft <pod-name-from-prev-command>
+   ```
+
+   Example log messages:
+
+   ```Error
+   2024-08-09T06:21:37.712646207Z Unable to connect to the server: dial tcp: lookup aks-test.hcp.westeurope.azmk8s.io: i/o timeout
+   2024-10-01T11:26:17.498523756Z Unable to connect to the server: dial tcp 10.146.34.10:443: i/o timeout
+   ```
+
+**Resolution**:
+
+A network policy (for example, Calico) or an NSG is preventing the dataprotection-microsoft pods from communicating with the API server. Allow traffic from the dataprotection-microsoft namespace, and then reinstall the extension.
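As a minimal sketch of that resolution (not part of this commit), the following commands open all egress from the dataprotection-microsoft namespace with a standard Kubernetes NetworkPolicy and then reinstall the extension. The policy shape, extension type, and every resource name are assumptions to adapt; an NSG block would need a separate fix on the Azure networking side.

```azurecli-interactive
# Sketch only: permit all egress from the dataprotection-microsoft namespace so its pods
# can reach the API server. Tighten the rule (for example, to TCP 443 only) as needed.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dataprotection-egress
  namespace: dataprotection-microsoft
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - {}
EOF

# Then reinstall the Backup Extension (cluster and resource names are placeholders).
az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters \
  --cluster-name <aks-cluster> --resource-group <resource-group> --yes
az k8s-extension create --name azure-aks-backup \
  --extension-type microsoft.dataprotection.kubernetes \
  --cluster-type managedClusters --cluster-name <aks-cluster> \
  --resource-group <resource-group> \
  --configuration-settings blobContainer=<container> storageAccount=<storage-account> \
    storageAccountResourceGroup=<storage-rg> storageAccountSubscriptionId=<subscription-id>
```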
## Backup Extension post installation related errors

These error codes appear due to issues with the Backup Extension installed in the AKS cluster.

articles/container-apps/networking.md

Lines changed: 0 additions & 3 deletions
@@ -286,9 +286,6 @@ Azure networking policies are supported with the public network access flag.

### <a name="private-endpoint"></a>Private endpoint (preview)

-> [!NOTE]
-> This feature is supported for all public regions. Government and China regions aren't supported.
-

Azure private endpoint enables clients located in your private network to securely connect to your Azure Container Apps environment through Azure Private Link. A private link connection eliminates exposure to the public internet. Private endpoints use a private IP address in your Azure virtual network address space.

This feature is supported for both Consumption and Dedicated plans in workload profile environments.
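For orientation only (this isn't from the commit), creating such a private endpoint with the Azure CLI might look roughly like the sketch below; all names are placeholders, and the `managedEnvironments` group ID is an assumption worth verifying against the current private-link documentation.

```azurecli-interactive
# Sketch: point a private endpoint at a Container Apps environment (placeholder names throughout).
ENV_ID=$(az containerapp env show --name my-env --resource-group my-rg --query id -o tsv)

az network private-endpoint create \
  --name my-aca-private-endpoint \
  --resource-group my-rg \
  --vnet-name my-client-vnet \
  --subnet my-private-endpoint-subnet \
  --private-connection-resource-id "$ENV_ID" \
  --group-id managedEnvironments \
  --connection-name my-aca-connection
```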

articles/data-factory/connector-hbase.md

Lines changed: 5 additions & 5 deletions
@@ -44,9 +44,9 @@ For more information about the network security mechanisms and options supported

[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]

-## Create a linked service to Hbase using UI
+## Create a linked service to HBase using UI

-Use the following steps to create a linked service to Hbase in the Azure portal UI.
+Use the following steps to create a linked service to HBase in the Azure portal UI.

1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:

@@ -58,14 +58,14 @@ Use the following steps to create a linked service to Hbase in the Azure portal

:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::

-2. Search for Hbase and select the Hbase connector.
+2. Search for HBase and select the HBase connector.

-:::image type="content" source="media/connector-hbase/hbase-connector.png" alt-text="Screenshot of the Hbase connector.":::
+:::image type="content" source="media/connector-hbase/hbase-connector.png" alt-text="Screenshot of the HBase connector.":::

1. Configure the service details, test the connection, and create the new linked service.

-:::image type="content" source="media/connector-hbase/configure-hbase-linked-service.png" alt-text="Screenshot of linked service configuration for Hbase.":::
+:::image type="content" source="media/connector-hbase/configure-hbase-linked-service.png" alt-text="Screenshot of linked service configuration for HBase.":::
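The same linked service can also be created programmatically; here's a hedged sketch using the Azure CLI's `datafactory` extension, where the factory name, credentials, and every `typeProperties` value are placeholders rather than anything from this commit.

```azurecli-interactive
# Sketch: define an HBase linked service from the CLI (requires `az extension add --name datafactory`).
az datafactory linked-service create \
  --resource-group my-rg \
  --factory-name my-data-factory \
  --linked-service-name HBaseLinkedService \
  --properties '{
    "type": "HBase",
    "typeProperties": {
      "host": "<cluster>.azurehdinsight.net",
      "port": 443,
      "httpPath": "/hbaserest",
      "authenticationType": "Basic",
      "username": "<username>",
      "password": { "type": "SecureString", "value": "<password>" },
      "enableSsl": true
    }
  }'
```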

## Connector configuration details

articles/frontdoor/end-to-end-tls.md

Lines changed: 2 additions & 2 deletions
@@ -29,10 +29,10 @@ Azure Front Door offloads the TLS sessions at the edge and decrypts client reque

## Supported TLS versions

-Azure Front Door supports two versions of the TLS protocol: TLS versions 1.2 and 1.3. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum with TLS 1.3 enabled, with support for TLS 1.0 and TLS 1.1 for backward compatibility. Currently, Azure Front Door doesn't support client/mutual authentication (mTLS).
+Azure Front Door supports two versions of the TLS protocol: TLS versions 1.2 and 1.3. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum with TLS 1.3 enabled. Currently, Azure Front Door doesn't support client/mutual authentication (mTLS).

> [!IMPORTANT]
-> As of March 1, 2025, TLS 1.0 and 1.1 are disallowed on new Azure Front Door profiles. If you didn't disable TLS 1.0 and 1.1 on legacy settings before this date, they'll still work temporarily but will be disabled in the future.
+> As of March 1, 2025, TLS 1.0 and 1.1 are not allowed on new Azure Front Door profiles. If you didn't disable TLS 1.0 and 1.1 on legacy settings before this date, they'll still work temporarily but will be updated to TLS 1.2 in the future.

You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). For a minimum TLS version 1.2, the negotiation will attempt to establish TLS 1.3 and then TLS 1.2. When Azure Front Door initiates TLS traffic to the origin, it will attempt to negotiate the best TLS version that the origin can reliably and consistently accept. Supported TLS versions for origin connections are TLS 1.2 and TLS 1.3.
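As an illustrative sketch (not part of this change), enforcing that minimum from the Azure CLI might look like the following; the profile and domain names are placeholders, and the `--minimum-tls-version` parameter should be checked against the current `az afd custom-domain` reference.

```azurecli-interactive
# Sketch: set TLS 1.2 as the minimum version on a Front Door custom domain (placeholder names).
az afd custom-domain update \
  --resource-group my-rg \
  --profile-name my-front-door-profile \
  --custom-domain-name my-custom-domain \
  --minimum-tls-version TLS12
```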

articles/frontdoor/front-door-faq.yml

Lines changed: 1 addition & 1 deletion
@@ -267,7 +267,7 @@ sections:
      answer: |
        Front Door uses TLS 1.2 as the minimum version for all profiles created after September 2019.

-       You can choose to use TLS 1.0, 1.1, 1.2 or 1.3 with Azure Front Door. To learn more, read the [Azure Front Door end-to-end TLS](concept-end-to-end-tls.md) article.
+       You can choose to use TLS 1.2 or 1.3 with Azure Front Door. To learn more, read the [Azure Front Door end-to-end TLS](concept-end-to-end-tls.md) article.

  - name: Billing
    questions:

articles/hdinsight/component-version-validation-error-arm-templates.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ When you're using [templates or automation tools](/azure/hdinsight/hdinsight-had
| Hadoop | 3.1 |- |3.3|
| Spark |2.4 |3.1|3.3|
| Kafka |2.1|2.4|3.2|
-| Hbase | 2.1| -|2.4|
+| HBase | 2.1| -|2.4|
| InteractiveHive |3.1 |3.1|3.1|

This value enables you to successfully create HDInsight clusters. The following snippet shows how to add the component version in the template:
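The snippet itself sits outside this hunk; as a hedged guess at its shape, the component version usually lands in the template's `clusterDefinition`, along these lines (placement and values are assumptions to verify against the article):

```azurecli-interactive
# Sketch only: the componentVersion fragment as it typically appears in an HDInsight
# ARM template's clusterDefinition (kind and version values are placeholders).
cat <<'EOF'
"clusterDefinition": {
  "kind": "HBASE",
  "componentVersion": { "HBase": "2.4" }
}
EOF
```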

articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1-new-storage-account.md

Lines changed: 3 additions & 3 deletions
@@ -76,7 +76,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster with

1. Stop ingestion to the source HBase cluster.

-1. Check Hbase hbck to verify cluster health
+1. Check HBase hbck to verify cluster health.

1. Verify the HBCK Report page on the HBase UI. A healthy cluster doesn't show any inconsistencies.
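As a hedged illustration of that health check (the exact invocation isn't in this hunk), running hbck from an SSH session on a cluster node might look like:

```azurecli-interactive
# Sketch: read-only consistency check from a cluster node; on HBase 2.x clusters the
# HBCK Report page in the HBase UI (next step) is the primary signal.
hbase hbck -details
```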

@@ -122,7 +122,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster with
>
> For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
>
-> Stopping HBase in the previous steps mentioned how Hbase avoids creating new master proc WALs.
+> Stopping HBase in the previous steps mentioned how HBase avoids creating new master proc WALs.

1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any source cluster Zookeeper node or worker node.
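Those commands aren't included in this hunk; a minimal sketch of such a backup, with assumed source and target paths, could be:

```azurecli-interactive
# Sketch only: copy the Accelerated Writes WAL directory to a backup location (paths are assumptions).
hdfs dfs -mkdir -p /hbase-wal-backup
hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup
```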

@@ -243,7 +243,7 @@ You can download AzCopy from [Get started with AzCopy](../../storage/common/stor
## Troubleshooting

### Use case 1:
-If Hbase masters and region servers up and regions stuck in transition, or only one region i.e. `hbase:meta` region is assigned, and waiting for other regions to assign
+If HBase masters and region servers are up but regions are stuck in transition, or only one region (that is, `hbase:meta`) is assigned while the rest wait to be assigned:

**Solution:**

articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1.md

Lines changed: 3 additions & 3 deletions
@@ -74,7 +74,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster.

1. Stop ingestion to the source HBase cluster.

-1. Check Hbase hbck to verify cluster health
+1. Check HBase hbck to verify cluster health.

1. Verify the HBCK Report page on the HBase UI. A healthy cluster doesn't show any inconsistencies.
:::image type="content" source="./media/apache-hbase-migrate-new-version/verify-hbck-report.png" alt-text="Screenshot showing how to verify HBCK report." lightbox="./media/apache-hbase-migrate-new-version/verify-hbck-report.png":::
@@ -83,7 +83,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster.
1. Note down the number of regions online at the source cluster, so that the number can be checked against the destination cluster after the migration.
:::image type="content" source="./media/apache-hbase-migrate-new-version/total-number-of-regions.png" alt-text="Screenshot showing total number of regions." lightbox="./media/apache-hbase-migrate-new-version/total-number-of-regions.png":::

-1. If replication enabled on the cluster, stop and reenable the replication on destination cluster after migration. For more information, see [Hbase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/)
+1. If replication is enabled on the cluster, stop it and re-enable it on the destination cluster after migration. For more information, see the [HBase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/).

1. Flush the source HBase cluster you're upgrading.
@@ -269,7 +269,7 @@ Mandatory argument for the above command:
## Troubleshooting

### Use case 1:
-If Hbase masters and region servers up and regions stuck in transition or only one region, for example, `hbase:meta` region is assigned. Waiting for other regions to assign
+If HBase masters and region servers are up but regions are stuck in transition, or only one region (for example, `hbase:meta`) is assigned while the rest wait to be assigned:

**Solution:**

articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md

Lines changed: 2 additions & 2 deletions
@@ -202,7 +202,7 @@ You can query data in HBase tables by using [Apache Hive](https://hive.apache.or

1. To exit your ssh connection, use `exit`.

-### Separate Hive and Hbase Clusters
+### Separate Hive and HBase Clusters

The Hive query to access HBase data need not be executed from the HBase cluster. Any cluster that comes with Hive (including Spark, Hadoop, HBase, or Interactive Query) can be used to query HBase data, provided the following steps are completed:

@@ -214,7 +214,7 @@ The Hive query to access HBase data need not be executed from the HBase cluster.
HBase data can also be queried from Hive using ESP-enabled HBase:

1. When following a multi-cluster pattern, both clusters must be ESP-enabled.
-2. To allow Hive to query HBase data, make sure that the `hive` user is granted permissions to access the HBase data via the Hbase Apache Ranger plugin
+2. To allow Hive to query HBase data, make sure that the `hive` user is granted permissions to access the HBase data via the HBase Apache Ranger plugin.
3. When you use separate, ESP-enabled clusters, the contents of `/etc/hosts` from the HBase cluster headnodes must be appended to `/etc/hosts` of the Hive cluster headnodes and worker nodes.
> [!NOTE]
> After you scale either cluster, `/etc/hosts` must be appended again.
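As a rough sketch of step 3 above (the host name, SSH user, and grep pattern are all assumptions), appending the HBase headnode entries from a Hive cluster node might look like:

```azurecli-interactive
# Sketch: pull the HBase cluster's host entries and append them locally; run on each
# Hive headnode and worker node, and adjust the filter to match your cluster's naming.
ssh sshuser@<hbase-headnode> 'cat /etc/hosts' | grep "<hbase-cluster-name>" | sudo tee -a /etc/hosts
```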

articles/hdinsight/hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ If you accidentally enabled the 'Require secure transfer' option after creating

`com.microsoft.azure.storage.StorageException: The account being accessed does not support http.`

-For Hbase clusters only, you can try the following steps to restore the cluster functionality:
+For HBase clusters only, you can try the following steps to restore the cluster functionality:
1. Stop HBase from Ambari.
2. Stop HDFS from Ambari.
3. In Ambari, navigate to HDFS --> Configs --> Advanced --> fs.defaultFS
