articles/data-factory/concepts-linked-services.md (5 additions, 5 deletions)
@@ -8,7 +8,7 @@ ms.reviewer: whhender
ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
- ms.date: 09/25/2024
+ ms.date: 07/25/2025
---
# Linked services in Azure Data Factory and Azure Synapse Analytics
@@ -37,21 +37,21 @@ The following diagram shows the relationships among pipeline, activity, dataset,
# [Azure Data Factory](#tab/data-factory)
- To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
+ To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
:::image type="content" source="media/concepts-linked-services/create-linked-service.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
- After selecting **+ New** to create a new linked service you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+ After selecting **+ New** to create a new linked service, you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
:::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
# [Synapse Analytics](#tab/synapse-analytics)
- To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
+ To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
:::image type="content" source="media/concepts-linked-services/create-linked-service-synapse.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
- After selecting **+ New** to create a new linked service you are able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+ After selecting **+ New** to create a new linked service, you're able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
:::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
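Whichever studio you use, the result is the same kind of JSON linked service definition. As an illustrative sketch only (the service name, connection string placeholder, and integration runtime reference below are assumptions, not values from this article), an Azure Blob Storage linked service might be stored as:

```json
{
    "name": "ExampleAzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;EndpointSuffix=core.windows.net"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```

Datasets and activities that reference `ExampleAzureBlobStorageLinkedService` then reuse this connection information.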
articles/data-factory/concepts-parameters-variables.md (10 additions, 10 deletions)
@@ -6,8 +6,8 @@ author: soferreira
ms.author: soferreira
ms.subservice: orchestration
ms.custom: synapse
- ms.topic: conceptual
- ms.date: 09/25/2024
+ ms.topic: concept-article
+ ms.date: 07/25/2025
---
# Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics
@@ -18,37 +18,37 @@ This article helps you understand the difference between pipeline parameters and
## Pipeline parameters
- Parameters are defined at the pipeline level, and cannot be modified during a pipeline run. Pipeline parameters can be used to control the behavior of a pipeline and its activities, such as by passing in the connection details for a dataset or the path of a file to be processed.
+ Parameters are defined at the pipeline level, and can't be modified during a pipeline run. Pipeline parameters can be used to control the behavior of a pipeline and its activities, such as by passing in the connection details for a dataset or the path of a file to be processed.
### How to define a pipeline parameter
To define a pipeline parameter, follow these steps:
- -Click on your pipeline to view its configuration tabs.
- - Select the "Parameters" tab, and click on the "+ New" button to define a new parameter.
+ -Select your pipeline to view its configuration tabs.
+ - Select the "Parameters" tab, and select the "+ New" button to define a new parameter.
- Enter a name and description for the parameter, and select its data type from the dropdown menu. Data types can be String, Int, Float, Bool, Array, Object, or SecureString.
- Optionally, you can also assign a default value to the parameter.
After defining a pipeline parameter, you can access its value during a pipeline run by using the ```@pipeline().parameters.<parameter name>``` expression in a pipeline activity. For example, if you defined a parameter called account_name of type String, you can access its value in an activity by using the expression ```@pipeline().parameters.account_name```.
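As a rough, illustrative sketch only (the pipeline name and default value are assumptions, not content from the article), the pipeline JSON that results from the steps above might declare the parameter like this:

```json
{
    "name": "ExamplePipeline",
    "properties": {
        "parameters": {
            "account_name": {
                "type": "String",
                "defaultValue": "contosostorage"
            }
        },
        "activities": []
    }
}
```

The default value is optional; without one, a value must be supplied each time the pipeline is triggered.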
- Before each pipeline run, you can assign a new value to your parameter in the right panel, otherwise the pipeline will use the default or the value previously defined.
+ Before each pipeline run, you can assign a new value to your parameter in the right panel, otherwise the pipeline uses the default or the value previously defined.
## Pipeline variables
- Pipeline variables are values that can be set and modified during a pipeline run. Unlike pipeline parameters, which are defined at the pipeline level and cannot be changed during a pipeline run, pipeline variables can be set and modified within a pipeline using a [Set Variable](control-flow-set-variable-activity.md) activity.
+ Pipeline variables are values that can be set and modified during a pipeline run. Unlike pipeline parameters, which are defined at the pipeline level and can't be changed during a pipeline run, pipeline variables can be set and modified within a pipeline using a [Set Variable](control-flow-set-variable-activity.md) activity.
Pipeline variables can be used to store and manipulate data during a pipeline run, such as by storing the results of a computation or the current state of a process.
> [!NOTE]
- > Variables are currently scoped at the pipeline level. This means that they are not thread safe and can cause unexpected and undesired behavior if they are accessed from within a parallel iteration activity such as a foreach loop, especially when the value is also being modified within that foreach activity.
+ > Variables are currently scoped at the pipeline level. This means that they aren't thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a foreach loop, especially when the value is also being modified within that foreach activity.
To define a pipeline variable, follow these steps:
- -Click on your pipeline to view its configuration tabs.
+ -Select your pipeline to view its configuration tabs.
- - Select the "Variables" tab, and click on the "+ New" button to define a new variable.
+ - Select the "Variables" tab, and select the "+ New" button to define a new variable.
- Enter a name and description for the variable, and select its data type from the dropdown menu. Data types can be String, Bool, or Array.
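Putting these steps together, here's a hedged sketch of a pipeline with one variable and a Set Variable activity that assigns it; the variable name, activity name, and expression are illustrative assumptions, not content from this article:

```json
{
    "name": "ExamplePipeline",
    "properties": {
        "variables": {
            "processed_file": {
                "type": "String",
                "defaultValue": ""
            }
        },
        "activities": [
            {
                "name": "Set processed_file",
                "type": "SetVariable",
                "typeProperties": {
                    "variableName": "processed_file",
                    "value": "@concat('output/', pipeline().RunId, '.csv')"
                }
            }
        ]
    }
}
```

Downstream activities would read the value with the expression `@variables('processed_file')`.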
- >- If want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+ >- If you want to use the global Azure integration runtime to connect to your Blob storage by applying the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Blob storage is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [Managed identity authentication](#managed-identity) section for more configuration prerequisites.
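For reference, a minimal sketch of an Azure Blob Storage linked service that uses the data factory's system-assigned managed identity follows; the service name and account placeholder are assumptions, and the managed identity still needs an appropriate RBAC role (for example, Storage Blob Data Reader) on the storage account:

```json
{
    "name": "ExampleBlobStorageManagedIdentity",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<account name>.blob.core.windows.net/"
        }
    }
}
```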
>[!NOTE]
@@ -492,7 +492,7 @@ The following properties are supported for Azure Blob Storage under `storeSettin
|***Additional settings:***|||
| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when **recursive** is set to **true** and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. | No |
| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file. Therefore, when the copy activity fails, you'll see some files have already been copied to the destination and deleted from source, while others are still remaining on the source store. <br/>This property is only valid in binary files copy scenario. The default value: false. | No |
- | modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+ | modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files are selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
| modifiedDatetimeEnd | Same as the previous property. | No |
| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as extra source columns.<br/>Allowed values are **false** (default) and **true**. | No |
| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
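To make the table above concrete, here's a hedged sketch of how these settings might appear under `storeSettings` inside a Copy activity source's `typeProperties`; the folder path, file pattern, and datetime window are illustrative assumptions:

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true,
        "wildcardFolderPath": "incoming/2025/*",
        "wildcardFileName": "*.csv",
        "modifiedDatetimeStart": "2025-07-01T00:00:00Z",
        "modifiedDatetimeEnd": "2025-08-01T00:00:00Z",
        "enablePartitionDiscovery": false
    },
    "formatSettings": {
        "type": "DelimitedTextReadSettings"
    }
}
```

With this configuration, only files under matching folders whose last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd` would be copied.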
@@ -734,8 +734,8 @@ In the sink transformation, you can write to either a container or a folder in A
-**Default**: Allow Spark to name files based on PART defaults.
-**Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` creates `loans1.csv`, `loans2.csv`, and so on.
-**Per partition**: Enter one file name per partition.
- -**As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it is overridden.
- -**Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Be aware that the merge operation can possibly fail based on node size. We don't recommend this option for large datasets.
+ -**As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it's overridden.
+ -**Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. The merge operation can possibly fail based on node size. We don't recommend this option for large datasets.
**Quote all:** Determines whether to enclose all values in quotation marks.
@@ -883,7 +883,7 @@ To learn details about the properties, check [Delete activity](delete-activity.m
## Change data capture
- Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Please refer to [Change Data Capture](concepts-change-data-capture.md) for details.
+ Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Refer to [Change Data Capture](concepts-change-data-capture.md) for details.