
Commit 992470d

Merge pull request #125322 from ArieHein/Spelling-54

Spelling Fixes

2 parents 26ff2a1 + 547bcce

16 files changed: +22 additions, -22 deletions

articles/governance/machine-configuration/how-to/create-policy-definition.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -116,14 +116,14 @@ Parameters of the `New-GuestConfigurationPolicy` cmdlet:
 - **Category**: Sets the category metadata field in the policy definition.
 - **LocalContentPath**: The path to the local copy of the `.zip` Machine Configuration package
   file. This parameter is required if you're using a User Assigned Managed Identity to provide
-  access to an Azure Storge blob.
+  access to an Azure Storage blob.
 - **ManagedIdentityResourceId**: The `resourceId` of the User Assigned Managed Identity that has
   read access to the Azure Storage blob containing the `.zip` Machine Configuration package file.
   This parameter is required if you're using a User Assigned Managed Identity to provide access to
-  an Azure Storge blob.
+  an Azure Storage blob.
 - **ExcludeArcMachines**: Specifies that the Policy definition should exclude Arc machines. This
   parameter is required if you are using a User Assigned Managed Identity to provide access to an
-  Azure Storge blob.
+  Azure Storage blob.

 > [!IMPORTANT]
 > Unlike Azure VMs, Arc-connected machines currently do not support User Assigned Managed
@@ -189,7 +189,7 @@ New-GuestConfigurationPolicy @PolicyConfig3 -ExcludeArcMachines
 ```

 > [!NOTE]
-> You can retrieve the resorceId of a managed identity using the `Get-AzUserAssignedIdentity`
+> You can retrieve the resourceId of a managed identity using the `Get-AzUserAssignedIdentity`
 > PowerShell cmdlet.

 The cmdlet output returns an object containing the definition display name and path of the policy
````

articles/governance/machine-configuration/whats-new/psdsc-in-machine-configuration.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -146,7 +146,7 @@ string value for the **Phrase** property.
 ```powershell
 $reasons = @()
 $reasons += @{
-    Code = 'Name:Name:ReasonIdentifer'
+    Code = 'Name:Name:ReasonIdentifier'
     Phrase = 'Explain why the setting is not compliant'
 }
 return @{
````

articles/governance/resource-graph/samples/alerts-samples.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -39,7 +39,7 @@ This query filters virtual machines that need to be monitored.
 ```kusto
 let RuleGroupTags = dynamic(['Linux']);
 Perf | where ObjectName == 'Processor' and CounterName == '% Idle Time' and (InstanceName in ('Total','total'))
-| extend CpuUtilisation = (100 - CounterValue)
+| extend CpuUtilization = (100 - CounterValue)
 | join kind=inner hint.remote=left (arg("").Resources
 | where type =~ 'Microsoft.Compute/virtualMachines'
 | project _ResourceId=tolower(id), tags
````

articles/hdinsight-aks/required-outbound-traffic.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -42,7 +42,7 @@ You need to configure the following network and application security rules in yo
 | ** FQDN|API Server FQDN (available once AKS cluster is created)|TCP|443|Network security rule| Required as the running pods/deployments use it to access the API Server. You can get this information from the AKS cluster running behind the cluster pool. For more information, see [how to get API Server FQDN](secure-traffic-by-firewall-azure-portal.md#get-aks-cluster-details-created-behind-the-cluster-pool) using Azure portal.|

 > [!NOTE]
-> ** This configiration isn't required if you enable private AKS.
+> ** This configuration isn't required if you enable private AKS.

 ## Cluster specific traffic
````

articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -38,7 +38,7 @@ Learn how to use Apache Sqoop to import and export between an Apache Hadoop clus


 export SERVER_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD"
-export SERVER_DB_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DABATASE"
+export SERVER_DB_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DATABASE"
 ```

 ## Sqoop export
````
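A side note on why the `$DABATASE` fix above is more than cosmetic: in Bash, referencing an unset variable silently expands to an empty string, so the broken export produced a connection string with no database name instead of an error. A minimal sketch with hypothetical values:

```shell
# Hypothetical values, for illustration only
SQL_SERVER="myserver"
PASSWORD="secret"
DATABASE="mydb"
unset DABATASE   # the misspelled name is not defined anywhere

# Before the fix: $DABATASE expands to nothing, so the database name
# silently disappears from the connection string.
BROKEN="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DABATASE"

# After the fix: $DATABASE is defined and the name survives.
FIXED="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DATABASE"

echo "$BROKEN"
echo "$FIXED"
```

A Sqoop job given the broken string would likely fall back to the server's default database rather than the intended one, which is why this class of typo tends to surface as a confusing downstream failure rather than at the point of the mistake.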

articles/hdinsight/hdinsight-administer-use-powershell.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -86,13 +86,13 @@ $clusterName = "<HDInsight Cluster Name>"

 $clusterInfo = Get-AzHDInsightCluster -ClusterName $clusterName
 $storageInfo = $clusterInfo.DefaultStorageAccount.split('.')
-$defaultStoreageType = $storageInfo[1]
+$defaultStorageType = $storageInfo[1]
 $defaultStorageName = $storageInfo[0]

 echo "Default Storage account name: $defaultStorageName"
-echo "Default Storage account type: $defaultStoreageType"
+echo "Default Storage account type: $defaultStorageType"

-if ($defaultStoreageType -eq "blob")
+if ($defaultStorageType -eq "blob")
 {
     $defaultBlobContainerName = $cluster.DefaultStorageContainer
     $defaultStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $defaultStorageAccountName)[0].Value
````
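One detail worth noting about this change: the misspelling `$defaultStoreageType` was used consistently at the assignment and at every read, so the script behaved correctly before the commit; the rename is purely cosmetic, and all three occurrences must change together. The real hazard appears only when writer and reader disagree on the name, sketched here in shell with hypothetical variable names:

```shell
# Consistent misspelling: assignment and read agree, so the logic works.
defaultStoreageType="blob"
if [ "$defaultStoreageType" = "blob" ]; then
    CONSISTENT="matched"
fi

# Inconsistent spelling: the read references a name that was never set,
# expands to empty, and the branch is silently skipped.
defaultStorageType2="blob"
if [ "${defaultStorageTyp2:-}" = "blob" ]; then
    INCONSISTENT="matched"
else
    INCONSISTENT="missed"
fi

echo "$CONSISTENT $INCONSISTENT"
```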

articles/hdinsight/hdinsight-apache-kafka-spark-structured-streaming.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -237,7 +237,7 @@ This example demonstrates how to use Spark Structured Streaming with Kafka on HD
 1. Declare a schema. The following command demonstrates how to use a schema when reading JSON data from kafka. Enter the command in your next Jupyter cell.

 ```scala
-// Import bits useed for declaring schemas and working with JSON data
+// Import bits used for declaring schemas and working with JSON data
 import org.apache.spark.sql._
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.functions._
````

articles/hdinsight/hdinsight-multiple-clusters-data-lake-store.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -47,7 +47,7 @@ Some key points to consider.

 |Folder |Permissions |Owning user |Owning group | Named user | Named user permissions | Named group | Named group permissions |
 |---------|---------|---------|---------|---------|---------|---------|---------|
-|/clusters/finanace/ fincluster01 | rwxr-x--- |Service Principal |FINGRP |- |- |- |- |
+|/clusters/finance/ fincluster01 | rwxr-x--- |Service Principal |FINGRP |- |- |- |- |

 ## Recommendations for job input and output data
````

articles/hdinsight/hdinsight-release-notes-archive.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1398,7 +1398,7 @@ This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release

 ### New features
 #### Auto key rotation for customer managed key encryption at rest
-Starting from this release, customers can use Azure KeyValut version-less encryption key URLs for customer managed key encryption at rest. HDInsight will automatically rotate the keys as they expire or replaced with new versions. Learn more details [here](./disk-encryption.md).
+Starting from this release, customers can use Azure KeyVault version-less encryption key URLs for customer managed key encryption at rest. HDInsight will automatically rotate the keys as they expire or replaced with new versions. Learn more details [here](./disk-encryption.md).

 #### Ability to select different Zookeeper virtual machine sizes for Spark, Hadoop, and ML Services
 HDInsight previously didn't support customizing Zookeeper node size for Spark, Hadoop, and ML Services cluster types. It defaults to A2_v2/A2 virtual machine sizes, which are provided free of charge. From this release, you can select a Zookeeper virtual machine size that is most appropriate for your scenario. Zookeeper nodes with virtual machine size other than A2_v2/A2 will be charged. A2_v2 and A2 virtual machines are still provided free of charge.
````

articles/hdinsight/spark/apache-spark-perf.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -93,7 +93,7 @@ In each of the following articles, you can find information on different aspects

 ### Optimize Spark SQL partitions

-- `spark.sql.shuffle.paritions` is 200 by default. We can adjust based on the business needs when shuffling data for joins or aggregations.
+- `spark.sql.shuffle.partitions` is 200 by default. We can adjust based on the business needs when shuffling data for joins or aggregations.
 - `spark.sql.files.maxPartitionBytes` is 1G by default in HDI. The maximum number of bytes to pack into a single partition when reading files. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
 - AQE in Spark 3.0. See [Adaptive Query Execution](https://spark.apache.org/docs/latest/sql-performance-tuning.html#adaptive-query-execution)
````