
Commit 6e8c9a0

Merge pull request #296846 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents d883e8d + 0a5e468 commit 6e8c9a0

11 files changed: +54 −19 lines

articles/backup/azure-kubernetes-service-backup-troubleshoot.md

Lines changed: 35 additions & 0 deletions
@@ -129,6 +129,41 @@ This error appears due to absence of these FQDN rules because of which configura
 
 6. Delete and reinstall Backup Extension to initiate backup.
 
+### Scenario 4
+
+**Error message**:
+
+```Error
+"message": "Error: [ InnerError: [Helm installation failed : Unable to create/update Kubernetes resources for the extension : Recommendation Please check that there are no policies blocking the resource creation/update for the extension : InnerError [release azure-aks-backup failed, and has been uninstalled due to atomic being set: failed pre-install: job failed: BackoffLimitExceeded]]] occurred while doing the operation : [Create] on the config, For general troubleshooting visit: https://aka.ms/k8s-extensions-TSG, For more application specific troubleshooting visit: Facing trouble? Common errors and potential fixes are detailed in the Kubernetes Backup Troubleshooting Guide, available at https://www.aka.ms/aksclusterbackup",
+```
+The upgrade CRDs pre-install job is failing in the cluster.
+
+**Cause**: Pods are unable to communicate with the Kubernetes API server.
+
+**Debug**
+
+1. Check for any events in the cluster related to pod spawn issues.
+```azurecli-interactive
+kubectl events -n dataprotection-microsoft
+```
+2. Check the pods for the dataprotection CRDs.
+```azurecli-interactive
+kubectl get pods -A | grep "dataprotection-microsoft-kubernetes-agent-upgrade-crds"
+```
+3. Check the pod logs.
+```azurecli-interactive
+kubectl logs -f --all-containers=true --timestamps=true -n dataprotection-microsoft <pod-name-from-prev-command>
+```
+Example log message:
+```Error
+2024-08-09T06:21:37.712646207Z Unable to connect to the server: dial tcp: lookup aks-test.hcp.westeurope.azmk8s.io: i/o timeout
+2024-10-01T11:26:17.498523756Z Unable to connect to the server: dial tcp 10.146.34.10:443: i/o timeout
+```
+**Resolution**:
+In this case, a network policy, Calico policy, or NSG is blocking the dataprotection-microsoft pods from communicating with the API server.
+Allow traffic from the dataprotection-microsoft namespace, and then reinstall the extension.
+
 ## Backup Extension post installation related errors
 
 These error codes appear due to issues on the Backup Extension installed in the AKS cluster.
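The resolution for Scenario 4 above says to allow traffic from the `dataprotection-microsoft` namespace before reinstalling the extension. As an illustrative sketch (not part of the source diff — the policy name is hypothetical, and an equivalent change would be needed in a Calico policy or NSG rule if one of those is doing the blocking), a Kubernetes NetworkPolicy permitting egress from that namespace might look like:

```yaml
# Hypothetical sketch: allow all egress from the dataprotection-microsoft
# namespace so its pods can reach the Kubernetes API server.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dataprotection-egress   # illustrative name
  namespace: dataprotection-microsoft
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - {}               # an empty rule allows all egress traffic
```

After applying it with `kubectl apply -f <file>`, reinstall the Backup Extension.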

articles/data-factory/connector-hbase.md

Lines changed: 5 additions & 5 deletions
@@ -44,9 +44,9 @@ For more information about the network security mechanisms and options supported
 
 [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
 
-## Create a linked service to Hbase using UI
+## Create a linked service to HBase using UI
 
-Use the following steps to create a linked service to Hbase in the Azure portal UI.
+Use the following steps to create a linked service to HBase in the Azure portal UI.
 
 1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
 
@@ -58,14 +58,14 @@ Use the following steps to create a linked service to Hbase in the Azure portal
 
 :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
 
-2. Search for Hbase and select the Hbase connector.
+2. Search for HBase and select the HBase connector.
 
-:::image type="content" source="media/connector-hbase/hbase-connector.png" alt-text="Screenshot of the Hbase connector.":::
+:::image type="content" source="media/connector-hbase/hbase-connector.png" alt-text="Screenshot of the HBase connector.":::
 
 
 1. Configure the service details, test the connection, and create the new linked service.
 
-:::image type="content" source="media/connector-hbase/configure-hbase-linked-service.png" alt-text="Screenshot of linked service configuration for Hbase.":::
+:::image type="content" source="media/connector-hbase/configure-hbase-linked-service.png" alt-text="Screenshot of linked service configuration for HBase.":::
 
 ## Connector configuration details

articles/hdinsight/component-version-validation-error-arm-templates.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ When you're using [templates or automation tools](/azure/hdinsight/hdinsight-had
 | Hadoop | 3.1 |- |3.3|
 | Spark |2.4 |3.1|3.3|
 | Kafka |2.1|2.4|3.2|
-| Hbase | 2.1| -|2.4|
+| HBase | 2.1| -|2.4|
 | InteractiveHive |3.1 |3.1|3.1|
 
 This value enables you to successfully create HDInsight clusters. The following snippet shows how to add the component version in the template:
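The snippet itself falls outside this diff hunk; as a minimal sketch (the kind and version values are illustrative, not from the source), the component version sits under `clusterDefinition` in the cluster's ARM template properties:

```json
"clusterDefinition": {
    "kind": "HBASE",
    "componentVersion": {
        "HBase": "2.4"
    }
}
```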

articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1-new-storage-account.md

Lines changed: 3 additions & 3 deletions
@@ -76,7 +76,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster with
 
 1. Stop ingestion to the source HBase cluster.
 
-1. Check Hbase hbck to verify cluster health
+1. Check HBase hbck to verify cluster health
 
 1. Verify HBCK Report page on HBase UI. Healthy cluster does not show any inconsistencies
 
@@ -122,7 +122,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster with
 >
 > For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
 >
-> Stopping HBase in the previous steps mentioned how Hbase avoids creating new master proc WALs.
+> Stopping HBase in the previous steps ensures that HBase avoids creating new master proc WALs.
 
 1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any source cluster Zookeeper node or worker node.
 
@@ -243,7 +243,7 @@ You can download AzCopy from [Get started with AzCopy](../../storage/common/stor
 ## Troubleshooting
 
 ### Use case 1:
-If Hbase masters and region servers up and regions stuck in transition, or only one region i.e. `hbase:meta` region is assigned, and waiting for other regions to assign
+If HBase masters and region servers are up but regions are stuck in transition, or only one region (that is, the `hbase:meta` region) is assigned and other regions are waiting to be assigned
 
 **Solution:**

articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1.md

Lines changed: 3 additions & 3 deletions
@@ -74,7 +74,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster.
 
 1. Stop ingestion to the source HBase cluster.
 
-1. Check Hbase hbck to verify cluster health
+1. Check HBase hbck to verify cluster health
 
 1. Verify HBCK Report page on HBase UI. Healthy cluster doesn't show any inconsistencies
 :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-hbck-report.png" alt-text="Screenshot showing how to verify HBCK report." lightbox="./media/apache-hbase-migrate-new-version/verify-hbck-report.png":::
@@ -83,7 +83,7 @@ Use these detailed steps and commands to migrate your Apache HBase cluster.
 1. Note down number of regions in online at source cluster, so that the number can be referred at destination cluster after the migration.
 :::image type="content" source="./media/apache-hbase-migrate-new-version/total-number-of-regions.png" alt-text="Screenshot showing total number of regions." lightbox="./media/apache-hbase-migrate-new-version/total-number-of-regions.png":::
 
-1. If replication enabled on the cluster, stop and reenable the replication on destination cluster after migration. For more information, see [Hbase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/)
+1. If replication is enabled on the cluster, stop it and reenable it on the destination cluster after the migration. For more information, see [HBase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/)
 
 1. Flush the source HBase cluster you're upgrading.
 
@@ -269,7 +269,7 @@ Mandatory argument for the above command:
 ## Troubleshooting
 
 ### Use case 1:
-If Hbase masters and region servers up and regions stuck in transition or only one region, for example, `hbase:meta` region is assigned. Waiting for other regions to assign
+If HBase masters and region servers are up but regions are stuck in transition, or only one region (for example, the `hbase:meta` region) is assigned and other regions are waiting to be assigned
 
 **Solution:**

articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md

Lines changed: 2 additions & 2 deletions
@@ -202,7 +202,7 @@ You can query data in HBase tables by using [Apache Hive](https://hive.apache.or
 
 1. To exit your ssh connection, use `exit`.
 
-### Separate Hive and Hbase Clusters
+### Separate Hive and HBase Clusters
 
 The Hive query to access HBase data need not be executed from the HBase cluster. Any cluster that comes with Hive (including Spark, Hadoop, HBase, or Interactive Query) can be used to query HBase data, provided the following steps are completed:
@@ -214,7 +214,7 @@ The Hive query to access HBase data need not be executed from the HBase cluster.
 HBase data can also be queried from Hive using ESP-enabled HBase:
 
 1. When following a multi-cluster pattern, both clusters must be ESP-enabled.
-2. To allow Hive to query HBase data, make sure that the `hive` user is granted permissions to access the HBase data via the Hbase Apache Ranger plugin
+2. To allow Hive to query HBase data, make sure that the `hive` user is granted permissions to access the HBase data via the HBase Apache Ranger plugin.
 3. When you use separate, ESP-enabled clusters, the contents of `/etc/hosts` from the HBase cluster headnodes must be appended to `/etc/hosts` of the Hive cluster headnodes and worker nodes.
 > [!NOTE]
 > After you scale either clusters, `/etc/hosts` must be appended again

articles/hdinsight/hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ If you accidentally enabled the 'Require secure transfer' option after creating
 
 `com.microsoft.azure.storage.StorageException: The account being accessed does not support http.`
 
-For Hbase clusters only, you can try the following steps to restore the cluster functionality:
+For HBase clusters only, you can try the following steps to restore the cluster functionality:
 1. Stop HBase from Ambari.
 2. Stop HDFS from Ambari.
 3. In Ambari, navigate to HDFS --> Configs --> Advanced --> fs.defaultFS

articles/hdinsight/selective-logging-analysis-azure-logs.md

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ For example, assume that `spark HDInsightSecurityLogs` is a table that has two l
 If you need to disable two tables and two source types, use the following syntax:
 
 - Spark: `InteractiveHiveMetastoreLog` log type in the `HDInsightHiveAndLLAPLogs` table
-- Hbase: `InteractiveHiveHSILog` log type in the `HDInsightHiveAndLLAPLogs` table
+- HBase: `InteractiveHiveHSILog` log type in the `HDInsightHiveAndLLAPLogs` table
 - Hadoop: `HDInsightHiveAndLLAPMetrics` table
 - Hadoop: `HDInsightHiveTezAppStats` table

articles/hdinsight/selective-logging-analysis.md

Lines changed: 1 addition & 1 deletion
@@ -155,7 +155,7 @@ For example, assume that `spark HDInsightSecurityLogs` is a table that has two l
 If you need to disable two tables and two source types, use the following syntax:
 
 - Spark: `InteractiveHiveMetastoreLog` log type in the `HDInsightHiveAndLLAPLogs` table
-- Hbase: `InteractiveHiveHSILog` log type in the `HDInsightHiveAndLLAPLogs` table
+- HBase: `InteractiveHiveHSILog` log type in the `HDInsightHiveAndLLAPLogs` table
 - Hadoop: `HDInsightHiveAndLLAPMetrics` table
 - Hadoop: `HDInsightHiveTezAppStats` table

articles/private-link/create-private-endpoint-cli.md

Lines changed: 1 addition & 1 deletion
@@ -149,7 +149,7 @@ az network private-endpoint create \
 
 ## Configure the private DNS zone
 
-A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we're using the DNS information for an Azure WebApp, for more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md)].
+A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we're using the DNS information for an Azure WebApp. For more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md).
 
 Create a new private Azure DNS zone with **[az network private-dns zone create](/cli/azure/network/private-dns/zone#az-network-private-dns-zone-create)**.
 
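As a sketch of the command referenced above (the resource group name is a hypothetical placeholder; `privatelink.azurewebsites.net` is the private DNS zone name used for Azure Web Apps):

```azurecli-interactive
az network private-dns zone create \
    --resource-group test-rg \
    --name "privatelink.azurewebsites.net"
```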