articles/data-factory/concepts-data-redundancy.md (+7 -7)
@@ -1,23 +1,23 @@
---
title: Data redundancy in Azure Data Factory | Microsoft Docs
description: 'Learn about meta-data redundancy mechanisms in Azure Data Factory'
-author: nabhishek
+author: kromerm
+ms.author: makromer
ms.topic: conceptual
-ms.date: 10/03/2024
+ms.date: 01/29/2025
ms.subservice: data-movement
-ms.author: abnarain
---

-# **Azure Data Factory data redundancy**
+# Azure Data Factory data redundancy

Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).

-In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
+In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft might initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover is complete, you can access your Azure Data Factory in the failover region.

Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.

> [!NOTE]
-> Microsoft-managed failover does not apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on Azure VM, then the recommendation is to leverage [Azure site recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
+> Microsoft-managed failover doesn't apply to the self-hosted integration runtime (SHIR), because this infrastructure is typically customer-managed. If the SHIR is set up on an Azure VM, the recommendation is to use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) to handle the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
@@ -28,7 +28,7 @@ To ensure you can track and audit the changes made to your metadata, you should
Learn how to set up [source control in Azure Data Factory](./source-control.md).

> [!NOTE]
-> In case of a disaster (loss of region), new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets and linked services JSON from the existing Git repository.
+> If there is a disaster (loss of region), a new data factory can be provisioned manually or in an automated fashion. Once the new data factory has been created, you can restore your pipelines, datasets, and linked services JSON from the existing Git repository.
articles/data-factory/continuous-integration-delivery-manual-promotion.md (+3 -4)
@@ -2,12 +2,11 @@
title: Manual promotion of Resource Manager templates
description: Learn how to manually promote a Resource Manager template to multiple environments with continuous integration and delivery in Azure Data Factory.
ms.subservice: ci-cd
-author: nabhishek
-ms.author: abnarain
+author: kromerm
+ms.author: makromer
ms.reviewer: jburchel
ms.topic: conceptual
-ms.date: 05/15/2024
-ms.custom:
+ms.date: 01/29/2025
---

# Manually promote a Resource Manager template to each environment
articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md (+14 -14)
@@ -2,11 +2,11 @@
title: Custom parameters in a Resource Manager template
description: Learn how to use custom parameters in a Resource Manager template with continuous integration and delivery in Azure Data Factory.
ms.subservice: ci-cd
-author: nabhishek
-ms.author: abnarain
+author: kromerm
+ms.author: makromer
ms.reviewer: jburchel
ms.topic: conceptual
-ms.date: 09/26/2024
+ms.date: 01/29/2025
---

# Use custom parameters with the Resource Manager template
@@ -20,22 +20,22 @@ If your development instance has an associated Git repository, you can override
To handle custom parameter 256 limit, there are three options:

-* Use the custom parameter file and remove properties that don't need parameterization, i.e., properties that can keep a default value and hence decrease the parameter count.
-* Refactor logic in the dataflow to reduce parameters, for example, pipeline parameters all have the same value, you can just use global parameters instead.
+* Use the custom parameter file and remove properties that don't need parameterization, that is, properties that can keep a default value and hence decrease the parameter count.
+* Refactor logic in the dataflow to reduce parameters. For example, if pipeline parameters all have the same value, you can use global parameters instead.
* Split one data factory into multiple data factories.

-To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under **ARM parameter configuration** section, click **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.
+To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under the **ARM parameter configuration** section, select the **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.

-> **ARM parameter configuration** is only enabled in "GIT mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+> **ARM parameter configuration** is only enabled in "GIT mode". Currently it's disabled in "live mode" or "Data Factory" mode.

Creating a custom Resource Manager parameter configuration creates a file named **arm-template-parameters-definition.json** in the root folder of your git branch. You must use that exact file name.
-When publishing from the collaboration branch, Data Factory will read this file and use its configuration to generate which properties get parameterized. If no file is found, the default template is used.
+When publishing from the collaboration branch, Data Factory reads this file and uses its configuration to determine which properties get parameterized. If no file is found, the default template is used.

When exporting a Resource Manager template, Data Factory reads this file from whichever branch you're currently working on, not the collaboration branch. You can create or edit the file from a private branch, where you can test your changes by selecting **Export ARM Template** in the UI. You can then merge the file into the collaboration branch.
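For orientation, a minimal **arm-template-parameters-definition.json** might look like the sketch below, which simply keeps (`=`) the current values of all integration runtime `typeProperties` as parameter defaults. The resource-type key and the `*` wildcard follow the conventions of the sample shown later in this article; a real file typically lists several resource types.

```json
{
    "Microsoft.DataFactory/factories/integrationRuntimes": {
        "properties": {
            "typeProperties": {
                "*": "="
            }
        }
    }
}
```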
@@ -61,7 +61,7 @@ The following are some guidelines to follow when you create the custom parameter
## Sample parameterization template

-Here's an example of what an Resource Manager parameter configuration might look like. It contains examples of a number of possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.
+Here's an example of what a Resource Manager parameter configuration might look like. It contains examples of many possible usages, including parameterization of nested activities within a pipeline and changing the defaultValue of a linked service parameter.
@@ -156,7 +156,7 @@ Here's an explanation of how the preceding template is constructed, broken down
### Pipelines

-* Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It will be a mandatory input during the Resource Manager deployment.
+* Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It is a mandatory input during the Resource Manager deployment.
* Similarly, a property called `headers` (for example, in a `Web` activity) is parameterized with type `object` (JObject). It has a default value, which is the same value as that of the source factory.
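The behavior described above corresponds to a pipeline section along these lines, in the spirit of the sample shown earlier: `-::int` drops the default value, so `waitTimeInSeconds` becomes a mandatory number parameter at deployment time, while `=::object` keeps the source factory's value as the default for `headers`.

```json
{
    "Microsoft.DataFactory/factories/pipelines": {
        "properties": {
            "activities": [{
                "typeProperties": {
                    "waitTimeInSeconds": "-::int",
                    "headers": "=::object"
                }
            }]
        }
    }
}
```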
### IntegrationRuntimes
@@ -170,16 +170,16 @@ Here's an explanation of how the preceding template is constructed, broken down
### LinkedServices

-* Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template will be applied. For all others (via `*`), a different template will be applied.
-* The `connectionString` property will be parameterized as a `securestring` value. It won't have a default value. It will have a shortened parameter name that's suffixed with `connectionString`.
+* Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template is applied. For all others (via `*`), a different template is applied.
+* The `connectionString` property is parameterized as a `securestring` value. It won't have a default value. It has a shortened parameter name that's suffixed with `connectionString`.
* The property `secretAccessKey` happens to be an `AzureKeyVaultSecret` (for example, in an Amazon S3 linked service). It's automatically parameterized as an Azure Key Vault secret and fetched from the configured key vault. You can also parameterize the key vault itself.
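As a rough illustration, a linked service section producing that behavior could look like the following sketch: the `*` entry is the fallback template, `AzureDataLakeStore` gets its own template, and `|:-connectionString:secureString` parameterizes the connection string as a secure string with a shortened, `connectionString`-suffixed parameter name.

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "connectionString": "|:-connectionString:secureString",
                    "secretAccessKey": "|"
                }
            }
        },
        "AzureDataLakeStore": {
            "properties": {
                "typeProperties": {
                    "dataLakeStoreUri": "="
                }
            }
        }
    }
}
```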
### Datasets

* Although type-specific customization is available for datasets, you can provide configuration without explicitly having a \*-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.
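A corresponding dataset entry, assuming you simply want every `typeProperties` value parameterized with its current value kept as the default, might be as small as:

```json
{
    "Microsoft.DataFactory/factories/datasets": {
        "properties": {
            "typeProperties": {
                "*": "="
            }
        }
    }
}
```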
> [!NOTE]
-> If **Azure alerts and matrices** are configured for a pipeline, they are not currently supported as parameters for ARM deployments. To reapply the alerts and matrices in new environment, please follow [Data Factory Monitoring, Alerts and Matrices.](./monitor-metrics-alerts.md)
+> If **Azure alerts and matrices** are configured for a pipeline, they aren't currently supported as parameters for ARM template deployments. To reapply the alerts and matrices in the new environment, follow [Data Factory Monitoring, Alerts, and Matrices](./monitor-metrics-alerts.md).
>

## Default parameterization template
@@ -331,7 +331,7 @@ Below is the current default parameterization template. If you need to add only
## Example: Parameterizing an existing Azure Databricks interactive cluster ID

-The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. Note that this file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
+The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. This file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
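In outline, the added fragment sits under the `*` entry for linked services, roughly as follows; only the `existingClusterId` line is new relative to the default template, and the `-` action drops the default value so the cluster ID must be supplied at deployment time.

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "*": {
            "properties": {
                "typeProperties": {
                    "existingClusterId": "-"
                }
            }
        }
    }
}
```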
articles/data-factory/continuous-integration-delivery.md (+3 -3)
@@ -2,11 +2,11 @@
title: Continuous integration and delivery
description: Learn how to use continuous integration and delivery to move Azure Data Factory pipelines from one environment (development, test, production) to another.