
Commit 6d4e63b

Merge pull request #291856 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents d9a6a27 + b1e558c commit 6d4e63b

25 files changed (+34, -34 lines changed)

articles/hdinsight-aks/control-egress-traffic-from-hdinsight-on-aks-clusters.md

Lines changed: 1 addition & 1 deletion
@@ -244,7 +244,7 @@ The Public FQDN can only be resolved to a CNAME with subdomain, therefore it mus
The Private DNS zone should be able to resolve private FQDN to an IP `(privatelink.{clusterPoolName}.{subscriptionId})`.

> [!NOTE]
- > HDInsight on AKS creates private DNS zone in the cluster pool, virtual network. If your client applications are in same virtual network, you need not configure the private DNS zone again. In case you're using a client application in a different virtual network, you're required to use virutal network peering and bind to private dns zone in the cluster pool virtual network or use private endpoints in the virutal network, and private dns zones, to add the A-record to the private endpoint private IP.
+ > HDInsight on AKS creates private DNS zone in the cluster pool, virtual network. If your client applications are in same virtual network, you need not configure the private DNS zone again. In case you're using a client application in a different virtual network, you're required to use virtual network peering and bind to private dns zone in the cluster pool virtual network or use private endpoints in the virtual network, and private dns zones, to add the A-record to the private endpoint private IP.

Private FQDN: `{clusterName}.privatelink.{clusterPoolName}.{subscriptionId}.{region}.hdinsightaks.net`
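A quick way to verify this resolution path, sketched below in Python (not part of the original article), is to resolve a placeholder private FQDN from a VM inside the cluster pool virtual network, or a peered network linked to the private DNS zone:

```python
import socket

# Placeholder name built from the private FQDN pattern above; substitute your
# own cluster name, cluster pool name, subscription ID, and region.
private_fqdn = (
    "mycluster.privatelink.mypool."
    "00000000-0000-0000-0000-000000000000.eastus.hdinsightaks.net"
)

try:
    # Should print a private IP address when the private DNS zone is reachable.
    print(socket.gethostbyname(private_fqdn))
except socket.gaierror as err:
    print(f"Private FQDN did not resolve: {err}")
```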

articles/hdinsight-aks/create-cluster-error-dictionary.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ This article describes how to troubleshoot and resolve errors that could occur w
|11|`Authorization_IdentityNotFound - {"code":null,"message":"The identity of the calling application could not be established."}`|The 1-p service principle isn't on boarded to the tenant.|Execute the command to provision the 1-p service principle on the new tenant to onboard.|
|12|`NotFound - ARM/AKS sdk error`|The user tries to update HDI on AKS cluster but the corresponding agent pool has been deleted.|The corresponding agent pool has been deleted. It's not recommended to operate AKS agent pool directly.|
|13|`AuthorizationFailed - Scope invalid role assignment issue with managed RG and cluster msi`|Lack of permission to perform the operation.|Check if the service principle app ID mentioned in the error message owned by you. If yes, grant the permission according to the error message. If no, open a support ticket to Azure HDInsight team.|
- |14|`DeleteAksClusterFailed - {"code":"DeleteAksClusterFailed","message":"An Azure service request has failed. ErrorCode: 'DeleteAksClusterFailed', ErrorMessage: 'Delete HDI cluster namespcae failed. Additional info: 'Can't access a disposed object.\\r\\nObject name: 'Microsoft.Azure.Common.Configuration.ManagedConfiguration was already disposed'.''."}`|RP switched to a new role instance unexpectedly.|retry the operation or open a support ticket to Azure HDInsight team.|
+ |14|`DeleteAksClusterFailed - {"code":"DeleteAksClusterFailed","message":"An Azure service request has failed. ErrorCode: 'DeleteAksClusterFailed', ErrorMessage: 'Delete HDI cluster namespace failed. Additional info: 'Can't access a disposed object.\\r\\nObject name: 'Microsoft.Azure.Common.Configuration.ManagedConfiguration was already disposed'.''."}`|RP switched to a new role instance unexpectedly.|retry the operation or open a support ticket to Azure HDInsight team.|
|15|`EntityStoreOperationError - ARM/AKS sdk error`|A database operation failed on AKS side during cluster update.|Retry the operation after some time. If the issue persists, open a support ticket to Azure HDInsight team.|
|16|`InternalServerError - {"exception":"System.Threading.Tasks.TaskCanceledException","message":"The operation was canceled."}`|This error caused due to various issues.|retry the operation or open a support ticket to Azure HDInsight team.|
|17|`InternalServerError - {"exception":"System.IO.IOException","message":"Unable to read data from the transport connection: A connection attempt failed because the connected party didn't properly respond after a period of time, or established connection failed because connected host has failed to respond."}`|This error caused due to various issues.|retry the operation after some time. If the issue persists, open a support ticket to Azure HDInsight team.|

articles/hdinsight-aks/flink/azure-service-bus-demo.md

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ After submitting the job, access the Flink Dashboard UI and click on the running
Navigate to the Service Bus Explorer on the Azure portal and send messages to the corresponding Service Bus.

- :::image type="content" source="./media/azure-service-bus-demo/sending-message-azure-portal.png" alt-text="Screenshot shows sending message from Azure portal Serice Bus Explorer." lightbox="./media/azure-service-bus-demo/sending-message-azure-portal.png":::
+ :::image type="content" source="./media/azure-service-bus-demo/sending-message-azure-portal.png" alt-text="Screenshot shows sending message from Azure portal Service Bus Explorer." lightbox="./media/azure-service-bus-demo/sending-message-azure-portal.png":::

### Check job run details on Apache Flink UI
@@ -318,7 +318,7 @@ This Flink source function, encapsulated within the `SessionBasedServiceBusSourc
1. **Instance Variables**

- The `connectionString`, `topicName`, and `subscriptionName` variables hold the connection string, topic name, and subscription name for your Azure Service Bus. The isRunning flag is used to control the execution of the source function. The `sessionReceiver` is an instance of `erviceBusSessionReceiverAsyncClient`, which is used to receive messages from the Service Bus.
+ The `connectionString`, `topicName`, and `subscriptionName` variables hold the connection string, topic name, and subscription name for your Azure Service Bus. The isRunning flag is used to control the execution of the source function. The `sessionReceiver` is an instance of `ServiceBusSessionReceiverAsyncClient`, which is used to receive messages from the Service Bus.

1. **Constructor**
articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md

Lines changed: 1 addition & 1 deletion
@@ -111,7 +111,7 @@ VALUES ('hammer','16oz carpenter''s hammer',1.0);
INSERT INTO products(name,description,weight)
VALUES ('rocks','box of assorted rocks',5.3);
INSERT INTO products(name,description,weight)
- VALUES ('jacket','water resistent black wind breaker',0.1);
+ VALUES ('jacket','water resistant black wind breaker',0.1);
INSERT INTO products(name,description,weight)
VALUES ('spare tire','24 inch spare tire',22.2);

articles/hdinsight-aks/flink/flink-job-orchestration.md

Lines changed: 2 additions & 2 deletions
@@ -139,7 +139,7 @@ The DAG expects to have setup for the Service Principal, as described during the
"jarDirectory":"abfs://filesystem@<storageaccount>.dfs.core.windows.net",

- "subscritpion":"<cluster subscription id>",
+ "subscription":"<cluster subscription id>",

"rg":"<cluster resource group>",
@@ -173,7 +173,7 @@ The DAG expects to have setup for the Service Principal, as described during the
{
'jarName':'WordCount.jar',
'jarDirectory':'abfs://filesystem@<storageaccount>.dfs.core.windows.net',
- 'subscritpion':'<cluster subscription id>',
+ 'subscription':'<cluster subscription id>',
'rg':'<cluster resource group>',
'poolNm':'<cluster pool name>',
'clusterNm':'<cluster name>'
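For reference, the corrected parameters assembled as a single Python dictionary; this is a sketch assuming they are supplied as the DAG's trigger configuration, and every value is a placeholder taken from the snippet above:

```python
# Hypothetical trigger configuration for the Flink job DAG. Keys mirror the
# corrected snippet ("subscription", not "subscritpion"); values are placeholders.
flink_dag_conf = {
    "jarName": "WordCount.jar",
    "jarDirectory": "abfs://filesystem@<storageaccount>.dfs.core.windows.net",
    "subscription": "<cluster subscription id>",
    "rg": "<cluster resource group>",
    "poolNm": "<cluster pool name>",
    "clusterNm": "<cluster name>",
}
```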

articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md

Lines changed: 1 addition & 1 deletion
@@ -94,7 +94,7 @@ VALUES ('hammer','16oz carpenter''s hammer',1.0);
INSERT INTO products(name,description,weight)
VALUES ('rocks','box of assorted rocks',5.3);
INSERT INTO products(name,description,weight)
- VALUES ('jacket','water resistent black wind breaker',0.1);
+ VALUES ('jacket','water resistant black wind breaker',0.1);
INSERT INTO products(name,description,weight)
VALUES ('spare tire','24 inch spare tire',22.2);

articles/hdinsight-aks/spark/azure-hdinsight-spark-on-aks-delta-lake.md

Lines changed: 1 addition & 1 deletion
@@ -186,7 +186,7 @@ createDirectory(avgMoMKPIChangePath)
1. Print Delta Table Schema for transformed and average KPI data1.

```
- // tranform data schema
+ // transform data schema
dtTransformed.toDF.printSchema
// Average KPI Data Schema
dtAvgKpi.toDF.printSchema

articles/hdinsight-aks/spark/create-spark-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ You can use the Azure portal to create an Apache Spark cluster in cluster pool.
|Number of worker nodes| Select the number of nodes for Spark cluster. Out of those, three nodes are reserved for coordinator and system services, remaining nodes are dedicated to Spark workers, one worker per node. For example, in a five-node cluster there are two workers|
|Autoscale| Click on the toggle button to enable Autoscale|
|Autoscale Type |Select from either load based or schedule based autoscale|
- |Graceful decomission timeout |Specify Graceful decommission timeout|
+ |Graceful decommission timeout |Specify Graceful decommission timeout|
|No of default worker node |Select the number of nodes for autoscale|
|Time Zone |Select the time zone|
|Autoscale Rules |Select the day, start time, end time, no. of worker nodes|

articles/hdinsight-aks/spark/library-management.md

Lines changed: 2 additions & 2 deletions
@@ -75,6 +75,6 @@ If you decide not to use the libraries anymore, then you can easily delete the l
> [!NOTE]
> * Packages installed from Jupyter notebook can only be deleted from Jupyter Notebook.
> * Packages installed from library manager can only be uninstalled from library manager.
- > * For upgrading a library/package, uninstall the current version of the library and resinstall the required version of the library.
- > * Installation of libraries from Jupyter notebook is particular to the session. It is not persistant.
+ > * For upgrading a library/package, uninstall the current version of the library and reinstall the required version of the library.
+ > * Installation of libraries from Jupyter notebook is particular to the session. It is not persistent.
> * Installing heavy packages may take some time due to their size and complexity.
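To illustrate the session-scoped behavior called out in the note, a minimal sketch of a notebook-cell install, assuming the kernel supports the standard `%pip` magic (the package and version are arbitrary examples, not from the article):

```python
# Installs into the current Jupyter session only; the package is not persisted
# and is gone once the session ends, as the note above describes.
%pip install requests==2.31.0
```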

articles/hdinsight-aks/trino/trino-airflow.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ After restarting Airflow, find and run example_trino DAG. Results of the sample
:::image type="content" source="./media/trino-airflow/print-result-log.png" alt-text="Screenshot showing how to check results for Airflow Trino DAG." lightbox="./media/trino-airflow/print-result-log.png":::

> [!NOTE]
- > For production scenarios, you should choose to handle connection and secrets diffirently, using Airflow secrets management.
+ > For production scenarios, you should choose to handle connection and secrets differently, using Airflow secrets management.
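A minimal sketch of what that might look like, assuming a connection named `trino_hdinsight` has been stored in an Airflow secrets backend rather than hardcoded in the DAG file (the connection id and field usage are illustrative, not from the article):

```python
from airflow.hooks.base import BaseHook

# Resolve the connection through Airflow's configured secrets backend (or the
# metadata database) instead of embedding host, port, and credentials in the DAG.
conn = BaseHook.get_connection("trino_hdinsight")  # hypothetical connection id

trino_host = conn.host
trino_port = conn.port or 443
client_secret = conn.password  # e.g., service principal secret kept out of source control
```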

## Next steps
This example demonstrates basic steps required to connect Airflow to Trino with HDInsight on AKS. Main steps are obtaining access token and running query.
