
Commit 322663a

Acrolinx improvements
1 parent f508bed commit 322663a

3 files changed: +19 -19 lines changed

articles/data-factory/concepts-data-redundancy.md

Lines changed: 4 additions & 4 deletions
@@ -8,16 +8,16 @@ ms.date: 10/03/2024
ms.subservice: data-movement
---

-# **Azure Data Factory data redundancy**
+# Azure Data Factory data redundancy

Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).

-In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
+In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft might initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you are able to access your Azure Data Factory in the failover region.

Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.

> [!NOTE]
-> Microsoft-managed failover does not apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on Azure VM, then the recommendation is to leverage [Azure site recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
+> Microsoft-managed failover doesn't apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on Azure VM, then the recommendation is to use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.

@@ -28,7 +28,7 @@ To ensure you can track and audit the changes made to your metadata, you should
Learn how to set up [source control in Azure Data Factory](./source-control.md).

> [!NOTE]
-> In case of a disaster (loss of region), new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets and linked services JSON from the existing Git repository.
+> If there is a disaster (loss of region), a new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets, and linked services JSON from the existing Git repository.

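To make the note's "automated fashion" concrete, here's a minimal, hypothetical ARM template sketch that recreates a factory and points it at the existing Azure DevOps Git repository so the pipeline, dataset, and linked service JSON can be restored from source control. The factory name, organization, project, repository, and branch values are placeholders, not values taken from this commit.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories",
            "apiVersion": "2018-06-01",
            "name": "myRecoveredFactory",
            "location": "eastus",
            "identity": {
                "type": "SystemAssigned"
            },
            "properties": {
                "repoConfiguration": {
                    "type": "FactoryVSTSConfiguration",
                    "accountName": "MyDevOpsOrg",
                    "projectName": "MyProject",
                    "repositoryName": "adf-config",
                    "collaborationBranch": "main",
                    "rootFolder": "/"
                }
            }
        }
    ]
}
```

A deployment like this only recreates the empty factory and its Git association; the pipelines, datasets, and linked services come back when you publish from the collaboration branch of that repository.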
articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md

Lines changed: 11 additions & 11 deletions
@@ -20,22 +20,22 @@ If your development instance has an associated Git repository, you can override

To handle custom parameter 256 limit, there are three options:

-* Use the custom parameter file and remove properties that don't need parameterization, i.e., properties that can keep a default value and hence decrease the parameter count.
-* Refactor logic in the dataflow to reduce parameters, for example, pipeline parameters all have the same value, you can just use global parameters instead.
+* Use the custom parameter file and remove properties that don't need parameterization, that is, properties that can keep a default value and hence decrease the parameter count.
+* Refactor logic in the dataflow to reduce parameters. For example, if pipeline parameters all have the same value, you can use global parameters instead.
* Split one data factory into multiple data factories.
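To make the first option above concrete, a trimmed-down **arm-template-parameters-definition.json** keeps only the entries you still want parameterized; every property you drop from the file stays hard-coded in the generated ARM template and no longer counts toward the 256-parameter limit. A minimal sketch (illustrative only, reusing the notation from the sample and default templates later in this article) might look like:

```json
{
    "Microsoft.DataFactory/factories/pipelines": {
        "properties": {
            "parameters": {
                "*": {
                    "defaultValue": "="
                }
            }
        }
    },
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "connectionString": "|:-connectionString:secureString"
                }
            }
        }
    }
}
```

Everything not listed in the file falls back to the literal values in the ARM template, which is exactly how the parameter count drops.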

-To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under **ARM parameter configuration** section, click **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.
+To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under the **ARM parameter configuration** section, select the **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.

:::image type="content" source="media/author-management-hub/management-hub-custom-parameters.png" alt-text="Manage custom parameters":::

> [!NOTE]
-> **ARM parameter configuration** is only enabled in "GIT mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+> **ARM parameter configuration** is only enabled in "GIT mode". Currently it's disabled in "live mode" or "Data Factory" mode.

Creating a custom Resource Manager parameter configuration creates a file named **arm-template-parameters-definition.json** in the root folder of your git branch. You must use that exact file name.

:::image type="content" source="media/continuous-integration-delivery/custom-parameters.png" alt-text="Custom parameters file":::

-When publishing from the collaboration branch, Data Factory will read this file and use its configuration to generate which properties get parameterized. If no file is found, the default template is used.
+When publishing from the collaboration branch, Data Factory reads this file and uses its configuration to determine which properties get parameterized. If no file is found, the default template is used.

When exporting a Resource Manager template, Data Factory reads this file from whichever branch you're currently working on, not the collaboration branch. You can create or edit the file from a private branch, where you can test your changes by selecting **Export ARM Template** in the UI. You can then merge the file into the collaboration branch.

@@ -61,7 +61,7 @@ The following are some guidelines to follow when you create the custom parameter

## Sample parameterization template

-Here's an example of what an Resource Manager parameter configuration might look like. It contains examples of a number of possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.
+Here's an example of what a Resource Manager parameter configuration might look like. It contains examples of many possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.

```json
{
@@ -156,7 +156,7 @@ Here's an explanation of how the preceding template is constructed, broken down

### Pipelines

-* Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It will be a mandatory input during the Resource Manager deployment.
+* Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It is a mandatory input during the Resource Manager deployment.
* Similarly, a property called `headers` (for example, in a `Web` activity) is parameterized with type `object` (JObject). It has a default value, which is the same value as that of the source factory.
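As a sketch of how those two pipeline rules can be expressed in the parameter configuration (shown as if it were the only section of the file; the `-::int` and `=::object` values follow the sample template's notation, but treat this fragment as illustrative rather than a copy of the sample):

```json
{
    "Microsoft.DataFactory/factories/pipelines": {
        "properties": {
            "activities": [
                {
                    "typeProperties": {
                        "waitTimeInSeconds": "-::int",
                        "headers": "=::object"
                    }
                }
            ]
        }
    }
}
```

Here `-` drops the default value, which is what makes `waitTimeInSeconds` a mandatory input at deployment time, while `=` keeps the source factory's value as the default for `headers`.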

### IntegrationRuntimes

@@ -170,16 +170,16 @@ Here's an explanation of how the preceding template is constructed, broken down

### LinkedServices

-* Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template will be applied. For all others (via `*`), a different template will be applied.
-* The `connectionString` property will be parameterized as a `securestring` value. It won't have a default value. It will have a shortened parameter name that's suffixed with `connectionString`.
+* Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template is applied. For all others (via `*`), a different template is applied.
+* The `connectionString` property is parameterized as a `securestring` value. It won't have a default value. It has a shortened parameter name that's suffixed with `connectionString`.
* The property `secretAccessKey` happens to be an `AzureKeyVaultSecret` (for example, in an Amazon S3 linked service). It's automatically parameterized as an Azure Key Vault secret and fetched from the configured key vault. You can also parameterize the key vault itself.
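A sketch of that LinkedServices shape, again as a standalone fragment rather than the article's exact sample: the `AzureDataLakeStore` key carries the type-specific rules, `*` covers every other linked service type, and the specific property under the type-specific entry (`dataLakeStoreUri`) is an illustrative assumption.

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "connectionString": "|:-connectionString:secureString",
                    "secretAccessKey": "|"
                }
            }
        },
        "AzureDataLakeStore": {
            "properties": {
                "typeProperties": {
                    "dataLakeStoreUri": "="
                }
            }
        }
    }
}
```

In the sample's notation, `|` marks the connection-string and secret special handling, and the `-connectionString` name segment is what yields the shortened parameter name suffixed with `connectionString`.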

### Datasets

* Although type-specific customization is available for datasets, you can provide configuration without explicitly having a \*-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.
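A minimal sketch of that dataset configuration, written without a type-level key and with a wildcard under `typeProperties` (an assumed shape that matches the description above, not a copy of the preceding sample):

```json
{
    "Microsoft.DataFactory/factories/datasets": {
        "properties": {
            "typeProperties": {
                "*": "="
            }
        }
    }
}
```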

> [!NOTE]
-> If **Azure alerts and matrices** are configured for a pipeline, they are not currently supported as parameters for ARM deployments. To reapply the alerts and matrices in new environment, please follow [Data Factory Monitoring, Alerts and Matrices.](./monitor-metrics-alerts.md)
+> If **Azure alerts and metrics** are configured for a pipeline, they aren't currently supported as parameters for ARM template deployments. To reapply the alerts and metrics in the new environment, follow [Data Factory Monitoring, Alerts, and Metrics](./monitor-metrics-alerts.md).
>

## Default parameterization template
@@ -331,7 +331,7 @@ Below is the current default parameterization template. If you need to add only

## Example: Parameterizing an existing Azure Databricks interactive cluster ID

-The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. Note that this file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
+The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. This file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.

```json
{
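The diff truncates the full parameters file at this point. As a sketch of only the fragment that sentence describes, not the elided file itself, the added entry would sit under the wildcard linked-service key roughly like this; whether you use `-` (force a deployment-time value) or `=` (keep the source factory's cluster ID as the default) is a choice the commit doesn't specify:

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "existingClusterId": "-"
                }
            }
        }
    }
}
```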

articles/data-factory/cross-tenant-connections-to-azure-devops.md

Lines changed: 4 additions & 4 deletions
@@ -10,13 +10,13 @@ ms.date: 01/05/2024

# Cross-tenant connections to Azure DevOps

-This document covers a step-by-step guide for configuring Azure DevOps account in another tenant than the Azure Data Factory. This is useful for when your Azure DevOps is not in the same tenant as the Azure Data Factory.
+This document covers a step-by-step guide for configuring an Azure DevOps organization in a different tenant from your Azure Data Factory. This is useful when your Azure DevOps organization isn't in the same tenant as your Azure Data Factory.

:::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-architecture-diagram.png" alt-text="Shows an architectural diagram of a connection from Azure Data Factory to Azure DevOps in another tenant.":::

## Prerequisites

-- You need to have an Azure DevOps account in another tenant than your Azure Data Factory.
+- You need to have an Azure DevOps organization in a different tenant from your Azure Data Factory.
- You should have a project in the above Azure DevOps tenant.

## Step-by-step guide

@@ -33,7 +33,7 @@ This document covers a step-by-step guide for configuring Azure DevOps account i

:::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-sign-in-confirm.png" alt-text="Shows the confirmation dialog for cross tenant sign in.":::

-1. Choose a different account to login to Azure DevOps in the remote tenant.
+1. Choose a different account to log in to Azure DevOps in the remote tenant.

:::image type="content" source="media/cross-tenant-connections-to-azure-devops/use-another-account.png" alt-text="Shows the account selection dialog for choosing an account to connect to the remote Azure DevOps tenant.":::

@@ -49,6 +49,6 @@ This document covers a step-by-step guide for configuring Azure DevOps account i

While opening the Azure Data Factory in another tab or a new browser, use the first sign-in to log into to your Azure Data Factory user account.

-You should see a dialog with the message _You do not have access to the VSTS repo associated with this factory._ Click **OK** to sign in with the cross-tenant account to gain access to Git through the Azure Data Factory.
+You should see a dialog with the message _You don't have access to the VSTS repo associated with this factory._ Select **OK** to sign in with the cross-tenant account to gain access to Git through the Azure Data Factory.

:::image type="content" source="media/cross-tenant-connections-to-azure-devops/sign-in-with-account-with-repository-access.png" alt-text="Shows the sign-in prompt to associate a VSTS repo with a cross-tenant Azure Data Factory.":::
