
Commit dca1d36

Merge pull request #125088 from ArieHein/Spelling-Wave-17
Spelling Fixes
2 parents 3d3e917 + 2d583b5 commit dca1d36

44 files changed: +70 -70 lines changed


articles/data-factory/compute-linked-services.md

Lines changed: 3 additions & 3 deletions
@@ -86,7 +86,7 @@ The following JSON defines a Linux-based on-demand HDInsight linked service. The
         "value": "<service principal key>",
         "type": "SecureString"
       },
-      "tenant": "<tenent id>",
+      "tenant": "<tenant id>",
       "clusterResourceGroup": "<resource group name>",
       "version": "3.6",
       "osType": "Linux",

@@ -195,7 +195,7 @@ You can also specify the following properties for the granular configuration of
         "value": "<service principal key>",
         "type": "SecureString"
       },
-      "tenant": "<tenent id>",
+      "tenant": "<tenant id>",
       "clusterResourceGroup": "<resource group name>",
       "version": "3.6",
       "osType": "Linux",

@@ -600,7 +600,7 @@ You create an Azure Synapse Analytics (Artifacts) linked service and use it with
     },
     "ConnectVia":{
       "referenceName": "integrationRuntime1",
-      "type": "IntergrationRuntimeReference"
+      "type": "IntegrationRuntimeReference"
     }
   }
 }

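For reference, the corrected `tenant` property belongs to the service principal section of the on-demand HDInsight linked service. A minimal sketch of such a definition follows; the surrounding properties (`hostSubscriptionId`, `servicePrincipalId`, `clusterType`, `linkedServiceName`) reflect the typical shape of this linked service and are assumed here, not quoted from this commit:

```json
{
    "name": "HDInsightOnDemandLinkedService",
    "properties": {
        "type": "HDInsightOnDemand",
        "typeProperties": {
            "clusterType": "hadoop",
            "clusterSize": 4,
            "timeToLive": "00:15:00",
            "hostSubscriptionId": "<subscription id>",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "value": "<service principal key>",
                "type": "SecureString"
            },
            "tenant": "<tenant id>",
            "clusterResourceGroup": "<resource group name>",
            "version": "3.6",
            "osType": "Linux",
            "linkedServiceName": {
                "referenceName": "AzureStorageLinkedService",
                "type": "LinkedServiceReference"
            }
        }
    }
}
```
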
articles/data-factory/concepts-data-flow-flowlet.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ For a single transformation activity, you can right-click the mapping data flow
 
 ![Screenshot showing creating a flowlet from an existing activity using the right-click menu option.](./media/data-flow-flowlet/flowlet-context-create.png)
 
-If you have mulit-select turned on, you can also select multiple mapping data flow activities. This can be done by either lassoing multiple activities by drawing a rectangle to select them or using shift+select to select multiple activities. Then you'll right-click and select Create a new flowlet.
+If you have multi-select turned on, you can also select multiple mapping data flow activities. This can be done by either lassoing multiple activities by drawing a rectangle to select them or using shift+select to select multiple activities. Then you'll right-click and select Create a new flowlet.
 
 ![Screenshot showing multiple selection from existing activities.](./media/data-flow-flowlet/flowlet-context-multi.png)
 
articles/data-factory/concepts-workflow-orchestration-manager.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ ms.custom: references_regions
 
 > [!NOTE]
 > Apache Airflow is now accessible through Microsoft Fabric. Microsoft Fabric offers a wide range of Apache Airflow capabilities via Data Workflows.
-> We recommend migrating your existing Workflow Orchestration Manager (Apache Airflow in ADF) based workflows to Data Workflows (Apache Airflow in Microsoft Fabric) for a broader set of features. Apache Airflow capabilities will be Genrally Available in Q1 CY2025 only in Microsoft Fabric.
+> We recommend migrating your existing Workflow Orchestration Manager (Apache Airflow in ADF) based workflows to Data Workflows (Apache Airflow in Microsoft Fabric) for a broader set of features. Apache Airflow capabilities will be Generally Available in Q1 CY2025 only in Microsoft Fabric.
 > For new Apache Airflow projects, we recommend using Apache Airflow in Microsoft Fabric. More details can be found [here](https://blog.fabric.microsoft.com/blog/introducing-data-workflows-in-microsoft-fabric?ft=All).
 > New users will not be allowed to create a new workflow orchestration manager in ADF, but existing users with a workflow orchestration manager may continue to use it but plan a migration soon.

articles/data-factory/connector-amazon-rds-for-oracle.md

Lines changed: 1 addition & 1 deletion
@@ -276,7 +276,7 @@ The Amazon RDS for Oracle connector provides built-in data partitioning to copy
 
 When you enable partitioned copy, the service runs parallel queries against your Amazon RDS for Oracle source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Amazon RDS for Oracle database.
 
-You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Amazon RDS for Oracle database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommanded to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
+You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Amazon RDS for Oracle database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
 
 | Scenario | Suggested settings |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |

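The `parallelCopies` setting referenced in this hunk (and in the Netezza and Oracle hunks below) is configured on the copy activity itself, alongside the source and sink. A minimal sketch follows, assuming an Amazon RDS for Oracle source partitioned by physical partitions and a Parquet sink; the activity name, sink type, and partition option are illustrative placeholders, not taken from this commit:

```json
{
    "name": "CopyFromAmazonRdsForOracle",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "AmazonRdsForOracleSource",
            "partitionOption": "PhysicalPartitionsOfTable"
        },
        "sink": {
            "type": "ParquetSink"
        },
        "parallelCopies": 4
    }
}
```

With this configuration, the service would run up to four partition queries concurrently; writing the output to a folder (multiple files) rather than a single file is what the suggested-configurations table above recommends.
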
articles/data-factory/connector-azure-databricks-delta-lake.md

Lines changed: 1 addition & 1 deletion
@@ -431,7 +431,7 @@ If your source data store and format meet the criteria described in this section
         },
         "sink": {
             "type": "AzureDatabricksDeltaLakeSink",
-            "sqlReadrQuery": "VACUUM eventsTable DRY RUN"
+            "sqlReaderQuery": "VACUUM eventsTable DRY RUN"
         }
     }
 }

articles/data-factory/connector-azure-search.md

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@ The following properties are supported for Azure AI Search linked service:
 | connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime. |No |
 
 > [!IMPORTANT]
-> When copying data from a cloud data store into search index, in Azure AI Search linked service, you need to refer an Azure Integration Runtime with explicit region in connactVia. Set the region as the one where your search service resides. Learn more from [Azure Integration Runtime](concepts-integration-runtime.md#azure-integration-runtime).
+> When copying data from a cloud data store into search index, in Azure AI Search linked service, you need to refer an Azure Integration Runtime with explicit region in connectVia. Set the region as the one where your search service resides. Learn more from [Azure Integration Runtime](concepts-integration-runtime.md#azure-integration-runtime).
 
 **Example:**
 

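The `connectVia` property called out in that note uses the standard integration runtime reference shape. A minimal sketch of an Azure AI Search linked service pointing at a region-specific Azure Integration Runtime follows; the runtime name `AzureIRWestEurope`, the endpoint, and the overall property layout are assumed for illustration, not quoted from this commit:

```json
{
    "name": "AzureSearchLinkedService",
    "properties": {
        "type": "AzureSearch",
        "typeProperties": {
            "url": "https://<service name>.search.windows.net",
            "key": {
                "type": "SecureString",
                "value": "<admin key>"
            }
        },
        "connectVia": {
            "referenceName": "AzureIRWestEurope",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```
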
articles/data-factory/connector-marketo.md

Lines changed: 2 additions & 2 deletions
@@ -139,7 +139,7 @@ To copy data from Marketo, set the source type in the copy activity to **Marketo
 | Property | Description | Required |
 |:--- |:--- |:--- |
 | type | The type property of the copy activity source must be set to: **MarketoSource** | Yes |
-| query | Use the custom SQL query to read data. For example: `"SELECT * FROM Activitiy_Types"`. | No (if "tableName" in dataset is specified) |
+| query | Use the custom SQL query to read data. For example: `"SELECT * FROM Activity_Types"`. | No (if "tableName" in dataset is specified) |
 
 **Example:**
 

@@ -163,7 +163,7 @@ To copy data from Marketo, set the source type in the copy activity to **Marketo
     "typeProperties": {
         "source": {
             "type": "MarketoSource",
-            "query": "SELECT top 1000 * FROM Activitiy_Types"
+            "query": "SELECT top 1000 * FROM Activity_Types"
         },
         "sink": {
             "type": "<sink type>"

articles/data-factory/connector-netezza.md

Lines changed: 1 addition & 1 deletion
@@ -230,7 +230,7 @@ The Data Factory Netezza connector provides built-in data partitioning to copy d
 
 When you enable partitioned copy, the service runs parallel queries against your Netezza source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Netezza database.
 
-You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Netezza database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommanded to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
+You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Netezza database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
 
 | Scenario | Suggested settings |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |

articles/data-factory/connector-oracle.md

Lines changed: 1 addition & 1 deletion
@@ -329,7 +329,7 @@ The Oracle connector provides built-in data partitioning to copy data from Oracl
 
 When you enable partitioned copy, the service runs parallel queries against your Oracle source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Oracle database.
 
-You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Oracle database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommanded to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
+You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Oracle database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
 
 | Scenario | Suggested settings |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |

articles/data-factory/connector-postgresql.md

Lines changed: 1 addition & 1 deletion
@@ -299,7 +299,7 @@ When copying data from PostgreSQL, the following mappings are used from PostgreS
 |`XML`|`String`|`String`|
 |`IntArray`|`String`|`String`|
 |`TextArray`|`String`|`String`|
-|`NumbericArray`|`String`|`String`|
+|`NumericArray`|`String`|`String`|
 |`DateArray`|`String`|`String`|
 |`Range`|`String`|`String`|
 |`Bpchar`|`String`|`String`|
