Commit bb63f6d

Merge pull request #125173 from ArieHein/Spelling-Wave-43
Spelling Fixes
2 parents 16e3285 + 9638b1b · commit bb63f6d

22 files changed: +32 -31 lines changed

articles/synapse-analytics/backuprestore/restore-sql-pool.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -138,7 +138,7 @@ Steps:
 
 1. Update Az.Sql Module to 3.8.0 (or greater) if on an older version using `Update-Module`. Otherwise it will cause failures. To validate your version via PowerShell:
 ```powershell
-foreach ($i in (get-module -ListAvailable | ?{$_.name -eq 'az.sql'}).Version) { $version = [string]$i.Major + "." + [string]$i.Minor; if ($version -gt 3.7) {write-host "Az.Sql version $version installed. Prequisite met."} else {update-module az.sql} }
+foreach ($i in (get-module -ListAvailable | ?{$_.name -eq 'az.sql'}).Version) { $version = [string]$i.Major + "." + [string]$i.Minor; if ($version -gt 3.7) {write-host "Az.Sql version $version installed. Prerequisite met."} else {update-module az.sql} }
 ```
 
 1. Connect to your Azure account and list all the subscriptions associated with your account.
````
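For context, the last line of that hunk introduces a sign-in step. A minimal PowerShell sketch of connecting and listing subscriptions, assuming the Az module is installed and interactive sign-in is available (the subscription ID is a placeholder):

```powershell
# Sign in interactively; this opens a browser or device-code prompt.
Connect-AzAccount

# List every subscription the signed-in account can access.
Get-AzSubscription

# Placeholder ID: switch context to the subscription that holds the SQL pool.
Set-AzContext -SubscriptionId "<SubscriptionId>"
```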

articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -69,7 +69,7 @@ $regionName="<YourTargetRegionName>"
 $containerName="<YourFileSystemName>" # This is the file system name
 $workspaceName="<YourTargetRegionWorkspaceName>"
 
-$sourcRegionWSName="<Your source region workspace name>"
+$sourceRegionWSName="<Your source region workspace name>"
 $sourceRegionRGName="<YourSourceRegionResourceGroupName>"
 $sqlUserName="<SQLUserName>"
 $sqlPassword="<SQLStrongPassword>"
@@ -146,15 +146,15 @@ if($StorageAccountNameAvailable)
 Write-Host "Storage account Name is available to be used...creating storage account"
 
 #Creating a Data Lake Storage Gen2 account
-$storgeAccountProvisionStatus=az storage account create `
+$storageAccountProvisionStatus=az storage account create `
 --name $storageAccountName `
 --resource-group $resourceGroupName `
 --location $regionName `
 --sku Standard_GRS `
 --kind StorageV2 `
 --enable-hierarchical-namespace $true
 
-($storgeAccountProvisionStatus| ConvertFrom-Json).provisioningState
+($storageAccountProvisionStatus| ConvertFrom-Json).provisioningState
 }
 else
 {
@@ -320,7 +320,7 @@ Transform the Azure Synapse SQL pool resource ID to SQL database ID because curr
 For example: `/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<WorkspaceName>/databases/<DatabaseName>`
 
 ```powershell
-$pool = Get-AzSynapseSqlPool -ResourceGroupName $sourceRegionRGName -WorkspaceName $sourcRegionWSName -Name $sqlPoolName
+$pool = Get-AzSynapseSqlPool -ResourceGroupName $sourceRegionRGName -WorkspaceName $sourceRegionWSName -Name $sqlPoolName
 $databaseId = $pool.Id `
 -replace "Microsoft.Synapse", "Microsoft.Sql" `
 -replace "workspaces", "servers" `
````

articles/synapse-analytics/machine-learning/overview-cognitive-services.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -459,7 +459,7 @@ df = spark.createDataFrame(
 ).withColumn("group", lit("series1"))
 
 # Run the Anomaly Detector service to look for irregular data
-anamoly_detector = (
+anomaly_detector = (
 SimpleDetectAnomalies()
 .setSubscriptionKey(anomaly_key)
 .setLocation(anomaly_loc)
@@ -472,7 +472,7 @@ anamoly_detector = (
 
 # Show the full results of the analysis with the anomalies marked as "True"
 display(
-anamoly_detector.transform(df).select("timestamp", "value", "anomalies.isAnomaly")
+anomaly_detector.transform(df).select("timestamp", "value", "anomalies.isAnomaly")
 )
 
 ```
````

articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -39,7 +39,7 @@ If you don't have an Azure subscription, [create a free account before you begin
 
 
 ## Get started
-To get started, import SynapseML and configurate service keys.
+To get started, import SynapseML and configure service keys.
 
 ```python
 import synapse.ml
@@ -50,7 +50,7 @@ from notebookutils import mssparkutils
 ai_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_SERVICE_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
 # A Bing Search v7 subscription key
 bingsearch_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_BING_SEARCH_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
-# An Anomaly Dectector subscription key
+# An Anomaly Detector subscription key
 anomalydetector_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_ANOMALY_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
 ```
 
@@ -196,7 +196,7 @@ df_timeseriesdata = spark.createDataFrame([
 ], ["timestamp", "value"]).withColumn("group", lit("series1"))
 
 # Run the Anomaly Detector service to look for irregular data
-anamoly_detector = (SimpleDetectAnomalies()
+anomaly_detector = (SimpleDetectAnomalies()
 .setSubscriptionKey(anomalydetector_key)
 .setLocation("eastasia")
 .setTimestampCol("timestamp")
@@ -206,7 +206,7 @@ anamoly_detector = (SimpleDetectAnomalies()
 .setGranularity("monthly"))
 
 # Show the full results of the analysis with the anomalies marked as "True"
-display(anamoly_detector.transform(df_timeseriesdata).select("timestamp", "value", "anomalies.isAnomaly"))
+display(anomaly_detector.transform(df_timeseriesdata).select("timestamp", "value", "anomalies.isAnomaly"))
 ```
 
 ### Expected results
````

articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -35,7 +35,7 @@ df = spark.createDataFrame([
 ("<replace with your file path>/dog.jpg", )
 ], ["image", ])
 
-# Run the Azure AI Vision service. Analyze Image extracts infortmation from/about the images.
+# Run the Azure AI Vision service. Analyze Image extracts information from/about the images.
 analysis = (AnalyzeImage()
 .setLinkedService(ai_service_name)
 .setVisualFeatures(["Categories","Color","Description","Faces","Objects","Tags"])
````

articles/synapse-analytics/spark/apache-spark-gpu-concept.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -29,7 +29,7 @@ To simplify the process for creating and managing pools, Azure Synapse takes car
 
 > [!NOTE]
 > - GPU-accelerated pools can be created in workspaces located in East US, Australia East, and North Europe.
-> - GPU-accelerated pools are only availble with the Apache Spark 3 runtime.
+> - GPU-accelerated pools are only available with the Apache Spark 3 runtime.
 
 ## GPU-accelerated runtime
 
````

articles/synapse-analytics/spark/apache-spark-troubleshoot-library-errors.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -112,7 +112,7 @@ To recreate the environment and validate your updates:
 3. Use ``pip install -r <provide your req.txt file>`` to update the virtual environment with your specified packages. If the installation results in an error, then there may be a conflict between what is pre-installed in the Synapse base runtime and what is specified in the provided requirements file. These dependency conflicts must be resolved in order to get the updated libraries on your serverless Apache Spark pool.
 
 >[!IMPORTANT]
->Issues may arrise when using pip and conda together. When combining pip and conda, it's best to follow these [recommended best practices](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
+>Issues may arise when using pip and conda together. When combining pip and conda, it's best to follow these [recommended best practices](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
 
 ## Next steps
 - View the default libraries: [Apache Spark version support](apache-spark-version-support.md)
````
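As an aside on step 3 in the hunk above, recreating the environment locally is the quickest way to surface a conflict. A minimal sketch run from a conda-enabled PowerShell console; the environment name, Python version, and requirements file name are placeholders:

```powershell
# Placeholder names throughout; match the Python version to your Spark pool's runtime.
conda create -n synapse-test python=3.8 -y
conda activate synapse-test

# Installing the same requirements locally surfaces dependency conflicts
# before the packages ever reach the serverless Apache Spark pool.
pip install -r requirements.txt
```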

articles/synapse-analytics/spark/optimize-write-for-apache-spark.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -110,7 +110,7 @@ Using the [DeltaTableBuilder API](https://docs.delta.io/latest/delta-apidoc.html
 ```scala
 val table = DeltaTable.create()
 .tableName("<table_name>")
-.addColumnn("<colName>", <dataType>)
+.addColumn("<colName>", <dataType>)
 .location("<table_location>")
 .property("delta.autoOptimize.optimizeWrite", "true")
 .execute()
````

articles/synapse-analytics/sql-data-warehouse/single-region-residency.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -91,7 +91,7 @@ To create a new dedicated SQL pool through PowerShell, use the [New-AzSqlDatabas
 4. Select the subscription that contains the database to be restored.
 5. List the restore points for the dedicated SQL pool (formerly SQL DW).
 1. Pick the desired restore point using the RestorePointCreationDate.
-1. Restore the dedicated SQL pool (formerly SQL DW) to the desired restore point using Restore-AzSqlDatabase PowerShell cmdlet specifying BackupStorageRedundnacy as 'Local'.
+1. Restore the dedicated SQL pool (formerly SQL DW) to the desired restore point using Restore-AzSqlDatabase PowerShell cmdlet specifying BackupStorageRedundancy as 'Local'.
 
 ```powershell
 
````
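The PowerShell block that follows in the article is elided from this diff. A hedged sketch of steps 5 through the restore, using standard Az.Sql cmdlets; the parameter choices are an assumption, not the article's exact snippet:

```powershell
# Assumed sketch, not the article's verbatim code: list restore points,
# pick one by RestorePointCreationDate, then restore with local redundancy.
$restorePoints = Get-AzSqlDatabaseRestorePoint -ResourceGroupName "<ResourceGroupName>" `
    -ServerName "<ServerName>" -DatabaseName "<DataWarehouseName>"

# Pick the most recent restore point; adjust the filter for a specific date.
$restorePoint = $restorePoints |
    Sort-Object RestorePointCreationDate -Descending |
    Select-Object -First 1

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime $restorePoint.RestorePointCreationDate `
    -ResourceGroupName "<ResourceGroupName>" `
    -ServerName "<ServerName>" `
    -TargetDatabaseName "<NewDataWarehouseName>" `
    -ResourceId "<SourceDatabaseResourceId>" `
    -BackupStorageRedundancy 'Local'
```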
articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -125,7 +125,8 @@ WITH (DISTRIBUTION = ROUND_ROBIN)
 
 On the Azure portal for Stream Analytics job, click on your job name. Click on the ***Test*** button in the ***Output details*** pane.
 
-![Test button on Outpout details](./media/sql-data-warehouse-integrate-azure-stream-analytics/sqlpool-asatest.png)
+![Screenshot showing Test button on Output details.](./media/sql-data-warehouse-integrate-azure-stream-analytics/sqlpool-asatest.png)
+
 When the connection to the database succeeds, you will see a notification in the portal.
 
 ### Step 6
````
