
Commit a22b5a1

Basic freshness and acrolinx
1 parent: fb45b78

15 files changed: +123 -123 lines changed

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.author: makromer
 ms.subservice: data-flows
 ms.topic: conceptual
 ms.custom: references_regions
-ms.date: 09/25/2024
+ms.date: 07/25/2025
 ---
 
 # Mapping data flows in Azure Data Factory

articles/data-factory/concepts-linked-services.md

Lines changed: 5 additions & 5 deletions
@@ -8,7 +8,7 @@ ms.reviewer: whhender
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 09/25/2024
+ms.date: 07/25/2025
 ---
 
 # Linked services in Azure Data Factory and Azure Synapse Analytics
@@ -37,21 +37,21 @@ The following diagram shows the relationships among pipeline, activity, dataset,
 
 # [Azure Data Factory](#tab/data-factory)
 
-To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
+To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
 
 :::image type="content" source="media/concepts-linked-services/create-linked-service.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
 
-After selecting **+ New** to create a new linked service you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service, you can choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
 
 :::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
 
 # [Synapse Analytics](#tab/synapse-analytics)
 
-To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
+To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **+ New** to create a new linked service.
 
 :::image type="content" source="media/concepts-linked-services/create-linked-service-synapse.png" alt-text="Shows the Azure Data Factory studio Manage tab with linked services and the New button highlighted.":::
 
-After selecting **+ New** to create a new linked service you are able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+After selecting **+ New** to create a new linked service, you're able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
 
 :::image type="content" source="media/concepts-linked-services/new-linked-service-window.png" alt-text="Shows the new linked service window.":::
 
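The Studio steps in this hunk produce a stored JSON definition behind the scenes. As an illustrative sketch only (the service name and connection string placeholders below are assumptions, not values from this commit), an Azure Blob Storage linked service looks roughly like this:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net"
        }
    }
}
```

Other supported connectors follow the same shape; only `type` and the contents of `typeProperties` change.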

articles/data-factory/concepts-parameters-variables.md

Lines changed: 10 additions & 10 deletions
@@ -6,8 +6,8 @@ author: soferreira
 ms.author: soferreira
 ms.subservice: orchestration
 ms.custom: synapse
-ms.topic: conceptual
-ms.date: 09/25/2024
+ms.topic: concept-article
+ms.date: 07/25/2025
 ---
 
 # Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics
@@ -18,37 +18,37 @@ This article helps you understand the difference between pipeline parameters and
 
 ## Pipeline parameters
 
-Parameters are defined at the pipeline level, and cannot be modified during a pipeline run. Pipeline parameters can be used to control the behavior of a pipeline and its activities, such as by passing in the connection details for a dataset or the path of a file to be processed.
+Parameters are defined at the pipeline level, and can't be modified during a pipeline run. Pipeline parameters can be used to control the behavior of a pipeline and its activities, such as by passing in the connection details for a dataset or the path of a file to be processed.
 
 ### How to define a pipeline parameter
 
 To define a pipeline parameter, follow these steps:
 
-- Click on your pipeline to view its configuration tabs.
-- Select the "Parameters" tab, and click on the "+ New" button to define a new parameter.
+- Select your pipeline to view its configuration tabs.
+- Select the "Parameters" tab, and select the "+ New" button to define a new parameter.
 - Enter a name and description for the parameter, and select its data type from the dropdown menu. Data types can be String, Int, Float, Bool, Array, Object, or SecureString.
 - Optionally, you can also assign a default value to the parameter.
 
 After defining a pipeline parameter, you can access its value during a pipeline run by using the ```@pipeline().parameters.<parameter name>``` expression in a pipeline activity. For example, if you defined a parameter called account_name of type String, you can access its value in an activity by using the expression ```@pipeline().parameters.account_name```.
 
 ![Screenshot of parameter definition.](./media/pipeline-parameter-variable-definition/parameter-definition.png)
 
-Before each pipeline run, you can assign a new value to your parameter in the right panel, otherwise the pipeline will use the default or the value previously defined.
+Before each pipeline run, you can assign a new value to your parameter in the right panel, otherwise the pipeline uses the default or the value previously defined.
 
 ## Pipeline variables
 
-Pipeline variables are values that can be set and modified during a pipeline run. Unlike pipeline parameters, which are defined at the pipeline level and cannot be changed during a pipeline run, pipeline variables can be set and modified within a pipeline using a [Set Variable](control-flow-set-variable-activity.md) activity.
+Pipeline variables are values that can be set and modified during a pipeline run. Unlike pipeline parameters, which are defined at the pipeline level and can't be changed during a pipeline run, pipeline variables can be set and modified within a pipeline using a [Set Variable](control-flow-set-variable-activity.md) activity.
 
 Pipeline variables can be used to store and manipulate data during a pipeline run, such as by storing the results of a computation or the current state of a process.
 
 > [!NOTE]
-> Variables are currently scoped at the pipeline level. This means that they are not thread safe and can cause unexpected and undesired behavior if they are accessed from within a parallel iteration activity such as a foreach loop, especially when the value is also being modified within that foreach activity.
+> Variables are currently scoped at the pipeline level. This means that they aren't thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a foreach loop, especially when the value is also being modified within that foreach activity.
 
 To define a pipeline variable, follow these steps:
 
-- Click on your pipeline to view its configuration tabs.
+- Select your pipeline to view its configuration tabs.
 
-- Select the "Variables" tab, and click on the "+ New" button to define a new variable.
+- Select the "Variables" tab, and select the "+ New" button to define a new variable.
 
 - Enter a name and description for the variable, and select its data type from the dropdown menu. Data types can be String, Bool, or Array.
 
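To make the steps described in this hunk concrete, a pipeline that declares a parameter and sets a variable from it can be sketched in JSON roughly as follows. The pipeline, activity, and variable names and the default value are illustrative placeholders, not content from this commit:

```json
{
    "name": "ExamplePipeline",
    "properties": {
        "parameters": {
            "account_name": { "type": "String", "defaultValue": "contosostorage" }
        },
        "variables": {
            "processed_path": { "type": "String" }
        },
        "activities": [
            {
                "name": "SetProcessedPath",
                "type": "SetVariable",
                "typeProperties": {
                    "variableName": "processed_path",
                    "value": {
                        "value": "@concat(pipeline().parameters.account_name, '/processed')",
                        "type": "Expression"
                    }
                }
            }
        ]
    }
}
```

The parameter value is fixed for the whole run and read through `pipeline().parameters.account_name`, while the variable can be overwritten later in the same run by another Set Variable activity.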

articles/data-factory/connector-azure-blob-storage.md

Lines changed: 6 additions & 6 deletions
@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 09/25/2024
+ms.date: 07/25/2025
 ---
 
 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
@@ -83,7 +83,7 @@ This Blob storage connector supports the following authentication types. See the
 - [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
 
 >[!NOTE]
->- If want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+>- If you want to use the global Azure integration runtime to connect to your Blob storage by applying the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
 >- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Blob storage is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [Managed identity authentication](#managed-identity) section for more configuration prerequisites.
 
 >[!NOTE]
@@ -492,7 +492,7 @@ The following properties are supported for Azure Blob Storage under `storeSettin
 | ***Additional settings:*** | | |
 | recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when **recursive** is set to **true** and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. | No |
 | deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file. Therefore, when the copy activity fails, you'll see some files have already been copied to the destination and deleted from source, while others are still remaining on the source store. <br/>This property is only valid in binary files copy scenario. The default value: false. | No |
-| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files are selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to a UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be **NULL**, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is **NULL**, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is **NULL**, the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
 | modifiedDatetimeEnd | Same as the previous property. | No |
 | enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as extra source columns.<br/>Allowed values are **false** (default) and **true**. | No |
 | partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/>- When you use prefix, partition root path is sub-path before the last "/". <br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
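To show where the filter properties in this table sit, a copy activity using them can be sketched roughly as follows; the activity and dataset names and the datetime values are illustrative assumptions, not part of this commit:

```json
{
    "name": "CopyRecentBlobs",
    "type": "Copy",
    "inputs": [ { "referenceName": "SourceBlobDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "SinkBlobDataset", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": {
            "type": "DelimitedTextSource",
            "storeSettings": {
                "type": "AzureBlobStorageReadSettings",
                "recursive": true,
                "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
                "modifiedDatetimeEnd": "2018-12-08T05:00:00Z",
                "enablePartitionDiscovery": false
            }
        },
        "sink": {
            "type": "DelimitedTextSink",
            "storeSettings": { "type": "AzureBlobStorageWriteSettings" }
        }
    }
}
```

With only `modifiedDatetimeStart` set, files modified at or after that UTC instant are picked up; with only `modifiedDatetimeEnd` set, files modified before it are picked up.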
@@ -734,8 +734,8 @@ In the sink transformation, you can write to either a container or a folder in A
 - **Default**: Allow Spark to name files based on PART defaults.
 - **Pattern**: Enter a pattern that enumerates your output files per partition. For example, `loans[n].csv` creates `loans1.csv`, `loans2.csv`, and so on.
 - **Per partition**: Enter one file name per partition.
-- **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it is overridden.
-- **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Be aware that the merge operation can possibly fail based on node size. We don't recommend this option for large datasets.
+- **As data in column**: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it's overridden.
+- **Output to a single file**: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. The merge operation can possibly fail based on node size. We don't recommend this option for large datasets.
 
 **Quote all:** Determines whether to enclose all values in quotation marks.
 
@@ -883,7 +883,7 @@ To learn details about the properties, check [Delete activity](delete-activity.m
 
 ## Change data capture
 
-Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Please refer to [Change Data Capture](concepts-change-data-capture.md) for details.
+Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture ** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Refer to [Change Data Capture](concepts-change-data-capture.md) for details.
 
 ## Related content
 