articles/data-factory/concepts-data-redundancy.md (4 additions, 4 deletions)
@@ -8,16 +8,16 @@ ms.date: 10/03/2024
ms.subservice: data-movement
---
- # **Azure Data Factory data redundancy**
+ # Azure Data Factory data redundancy
Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).
- In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
+ In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft might initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you are able to access your Azure Data Factory in the failover region.
Due to data residency requirements in Brazil South and Southeast Asia, Azure Data Factory data is stored in the [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all data is stored in Singapore. For Brazil South, all data is stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.
> [!NOTE]
- > Microsoft-managed failover does not apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on Azure VM, then the recommendation is to leverage [Azure site recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
+ > Microsoft-managed failover doesn't apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on an Azure VM, then the recommendation is to use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
@@ -28,7 +28,7 @@ To ensure you can track and audit the changes made to your metadata, you should
Learn how to set up [source control in Azure Data Factory](./source-control.md).
> [!NOTE]
- > In case of a disaster (loss of region), new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets and linked services JSON from the existing Git repository.
+ > If there is a disaster (loss of region), a new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets, and linked services JSON from the existing Git repository.
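To make "an automated fashion" concrete, a minimal ARM template for provisioning an empty replacement factory might look like the sketch below. The factory name, location, and system-assigned identity are illustrative placeholders, not values from this article; the pipelines, datasets, and linked services would then be restored by redeploying their JSON from the Git repository.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories",
      "apiVersion": "2018-06-01",
      "name": "adf-contoso-recovered",
      "location": "northeurope",
      "identity": { "type": "SystemAssigned" },
      "properties": {}
    }
  ]
}
```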
articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md (11 additions, 11 deletions)
@@ -20,22 +20,22 @@ If your development instance has an associated Git repository, you can override
To handle the 256 custom parameter limit, there are three options:
- * Use the custom parameter file and remove properties that don't need parameterization, i.e., properties that can keep a default value and hence decrease the parameter count.
- * Refactor logic in the dataflow to reduce parameters, for example, pipeline parameters all have the same value, you can just use global parameters instead.
+ * Use the custom parameter file and remove properties that don't need parameterization, that is, properties that can keep a default value and hence decrease the parameter count.
+ * Refactor logic in the dataflow to reduce parameters; for example, if pipeline parameters all have the same value, you can use global parameters instead.
* Split one data factory into multiple data factories.
- To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under **ARM parameter configuration** section, click **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.
+ To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under the **ARM parameter configuration** section, select the **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.
- > **ARM parameter configuration** is only enabled in "GIT mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+ > **ARM parameter configuration** is only enabled in "GIT mode". Currently it's disabled in "live mode" or "Data Factory" mode.
Creating a custom Resource Manager parameter configuration creates a file named **arm-template-parameters-definition.json** in the root folder of your git branch. You must use that exact file name.
- When publishing from the collaboration branch, Data Factory will read this file and use its configuration to generate which properties get parameterized. If no file is found, the default template is used.
+ When publishing from the collaboration branch, Data Factory reads this file and uses its configuration to generate which properties get parameterized. If no file is found, the default template is used.
When exporting a Resource Manager template, Data Factory reads this file from whichever branch you're currently working on, not the collaboration branch. You can create or edit the file from a private branch, where you can test your changes by selecting **Export ARM Template** in the UI. You can then merge the file into the collaboration branch.
@@ -61,7 +61,7 @@ The following are some guidelines to follow when you create the custom parameter
## Sample parameterization template
- Here's an example of what an Resource Manager parameter configuration might look like. It contains examples of a number of possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.
+ Here's an example of what a Resource Manager parameter configuration might look like. It contains examples of many possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.
```json
{
  ...
```
@@ -156,7 +156,7 @@ Here's an explanation of how the preceding template is constructed, broken down
### Pipelines
- * Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It will be a mandatory input during the Resource Manager deployment.
+ * Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It is a mandatory input during the Resource Manager deployment.
* Similarly, a property called `headers` (for example, in a `Web` activity) is parameterized with type `object` (JObject). It has a default value, which is the same value as that of the source factory.
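For orientation, a fragment of a parameter configuration that produces the two behaviors described above might look like the following sketch. The `-::number` and `=::object` action strings follow the `<action>:<name>:<type>` syntax covered in the article's guidelines; treat this as illustrative, with the article's full sample template remaining authoritative.

```json
{
  "Microsoft.DataFactory/factories/pipelines": {
    "properties": {
      "activities": [
        {
          "typeProperties": {
            "waitTimeInSeconds": "-::number",
            "headers": "=::object"
          }
        }
      ]
    }
  }
}
```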
### IntegrationRuntimes
@@ -170,16 +170,16 @@ Here's an explanation of how the preceding template is constructed, broken down
### LinkedServices
- * Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template will be applied. For all others (via `*`), a different template will be applied.
- * The `connectionString` property will be parameterized as a `securestring` value. It won't have a default value. It will have a shortened parameter name that's suffixed with `connectionString`.
+ * Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template is applied. For all others (via `*`), a different template is applied.
+ * The `connectionString` property is parameterized as a `securestring` value. It won't have a default value. It has a shortened parameter name that's suffixed with `connectionString`.
* The property `secretAccessKey` happens to be an `AzureKeyVaultSecret` (for example, in an Amazon S3 linked service). It's automatically parameterized as an Azure Key Vault secret and fetched from the configured key vault. You can also parameterize the key vault itself.
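Again as an illustrative fragment only (the article's full sample template is authoritative), the linked-service entries behind these bullets might look roughly like the sketch below: a `*` template for all types, a type-specific override for `AzureDataLakeStore`, and `|` marking the Key Vault-backed secret. The exact action strings and the `dataLakeStoreUri` property name are assumptions made here for illustration.

```json
{
  "Microsoft.DataFactory/factories/linkedServices": {
    "*": {
      "properties": {
        "typeProperties": {
          "connectionString": "|:-connectionString:secureString",
          "secretAccessKey": "|"
        }
      }
    },
    "AzureDataLakeStore": {
      "properties": {
        "typeProperties": {
          "dataLakeStoreUri": "="
        }
      }
    }
  }
}
```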
### Datasets
* Although type-specific customization is available for datasets, you can provide configuration without explicitly having a \*-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.
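A `*`-level dataset entry matching this description can be as small as the following sketch, where `=` keeps each property's current value as the parameter default (illustrative only).

```json
{
  "Microsoft.DataFactory/factories/datasets": {
    "properties": {
      "typeProperties": {
        "*": "="
      }
    }
  }
}
```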
> [!NOTE]
- > If **Azure alerts and matrices** are configured for a pipeline, they are not currently supported as parameters for ARM deployments. To reapply the alerts and matrices in new environment, please follow [Data Factory Monitoring, Alerts and Matrices.](./monitor-metrics-alerts.md)
+ > If **Azure alerts and matrices** are configured for a pipeline, they aren't currently supported as parameters for ARM template deployments. To reapply the alerts and matrices in the new environment, follow [Data Factory Monitoring, Alerts, and Matrices](./monitor-metrics-alerts.md).
>
## Default parameterization template
@@ -331,7 +331,7 @@ Below is the current default parameterization template. If you need to add only
## Example: Parameterizing an existing Azure Databricks interactive cluster ID
- The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. Note that this file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
+ The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. This file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
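As a trimmed, illustrative sketch of that addition (the article's full file also keeps the other default entries), the linked-services section gains one line, where `-` parameterizes `existingClusterId` without keeping a default value:

```json
{
  "Microsoft.DataFactory/factories/linkedServices": {
    "*": {
      "properties": {
        "typeProperties": {
          "existingClusterId": "-"
        }
      }
    }
  }
}
```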
articles/data-factory/cross-tenant-connections-to-azure-devops.md (4 additions, 4 deletions)
@@ -10,13 +10,13 @@ ms.date: 01/05/2024
# Cross-tenant connections to Azure DevOps
- This document covers a step-by-step guide for configuring Azure DevOps account in another tenant than the Azure Data Factory. This is useful for when your Azure DevOps is not in the same tenant as the Azure Data Factory.
+ This document covers a step-by-step guide for configuring an Azure DevOps organization in a different tenant than the Azure Data Factory. This is useful when your Azure DevOps isn't in the same tenant as the Azure Data Factory.
:::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-architecture-diagram.png" alt-text="Shows an architectural diagram of a connection from Azure Data Factory to Azure DevOps in another tenant.":::
## Prerequisites
- - You need to have an Azure DevOps account in another tenant than your Azure Data Factory.
+ - You need to have an Azure DevOps organization in a different tenant than your Azure Data Factory.
- You should have a project in the above Azure DevOps tenant.
## Step-by-step guide
@@ -33,7 +33,7 @@ This document covers a step-by-step guide for configuring Azure DevOps account i
:::image type="content" source="media/cross-tenant-connections-to-azure-devops/cross-tenant-sign-in-confirm.png" alt-text="Shows the confirmation dialog for cross tenant sign in.":::
- 1. Choose a different account to login to Azure DevOps in the remote tenant.
+ 1. Choose a different account to log in to Azure DevOps in the remote tenant.
:::image type="content" source="media/cross-tenant-connections-to-azure-devops/use-another-account.png" alt-text="Shows the account selection dialog for choosing an account to connect to the remote Azure DevOps tenant.":::
@@ -49,6 +49,6 @@ This document covers a step-by-step guide for configuring Azure DevOps account i
While opening the Azure Data Factory in another tab or a new browser, use the first sign-in to log in to your Azure Data Factory user account.
- You should see a dialog with the message _You do not have access to the VSTS repo associated with this factory._ Click **OK** to sign in with the cross-tenant account to gain access to Git through the Azure Data Factory.
+ You should see a dialog with the message _You don't have access to the VSTS repo associated with this factory._ Select **OK** to sign in with the cross-tenant account to gain access to Git through the Azure Data Factory.
:::image type="content" source="media/cross-tenant-connections-to-azure-devops/sign-in-with-account-with-repository-access.png" alt-text="Shows the sign-in prompt to associate a VSTS repo with a cross-tenant Azure Data Factory.":::