Commit 1869fc0

Merge pull request #115498 from djpmsft/uxRelease
Properties pane, consumption monitoring, non-equi join docs
2 parents bc988ac + 929bac1

File tree

10 files changed: +40 -5 lines changed

articles/data-factory/author-visually.md

Lines changed: 9 additions & 1 deletion
@@ -9,7 +9,7 @@ author: djpmsft
 ms.author: daperlov
 ms.reviewer:
 manager: anandsub
-ms.date: 12/19/2019
+ms.date: 05/15/2020
 ---

 # Visual authoring in Azure Data Factory
@@ -30,6 +30,14 @@ Here, you will author the pipelines, activities, datasets, linked services, data

 The default visual authoring experience is directly working with the Data Factory service. Azure Repos Git or GitHub integration is also supported to allow source control and collaboration for work on your data factory pipelines. To learn more about the differences between these authoring experiences, see [Source control in Azure Data Factory](source-control.md).

+### Properties pane
+
+For top-level resources such as pipelines, datasets, and data flows, high-level properties are editable in the properties pane on the right-hand side of the canvas. The properties pane contains the name, description, annotations, and other high-level properties. Subresources such as pipeline activities and data flow transformations are edited using the panel at the bottom of the canvas.
+
+![Authoring Canvas](media/author-visually/properties-pane.png)
+
+The properties pane only opens by default on resource creation. To open it later, click the properties pane icon located in the top-right corner of the canvas.
+
 ## Expressions and functions

 Expressions and functions can be used instead of static values to specify many properties in Azure Data Factory.
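As a hedged illustration of what such an expression looks like (not part of this commit; built only from documented functions such as `concat`, `formatDateTime`, and `utcnow`), a string property like a file name might be set dynamically:

```
@concat('output-', formatDateTime(utcnow(), 'yyyyMMdd'), '.csv')
```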

articles/data-factory/data-flow-join.md

Lines changed: 7 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.reviewer: daperlov
 ms.service: data-factory
 ms.topic: conceptual
 ms.custom: seo-lt-2019
-ms.date: 01/02/2020
+ms.date: 05/15/2020
 ---

 # Join transformation in mapping data flow
@@ -58,6 +58,12 @@ If you would like to explicitly produce a full cartesian product, use the Derive

 ![Join Transformation](media/data-flow/join.png "Join")

+### Non-equi joins
+
+To use a conditional operator such as not equals (!=) or greater than (>) in your join conditions, change the operator dropdown between the two columns. Non-equi joins require at least one of the two streams to be broadcast using **Fixed** broadcasting in the **Optimize** tab.
+
+![Non-equi join](media/data-flow/non-equi-join.png "Non-equi join")
+
 ## Optimizing join performance

 Unlike merge join in tools like SSIS, the join transformation isn't a mandatory merge join operation. The join keys don't require sorting. The join operation occurs based on the optimal join operation in Spark, either broadcast or map-side join.
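The conditional-match semantics added above can be sketched outside Spark. A minimal plain-Python illustration (sample order/discount data is made up for the example) shows why a greater-than condition can match one row to many:

```python
# Illustrative sketch of non-equi join semantics (not ADF code).
# The orders/tiers data is invented for this example.
orders = [{"order_id": 1, "amount": 250}, {"order_id": 2, "amount": 40}]
tiers = [{"threshold": 100, "pct": 5}, {"threshold": 200, "pct": 10}]

# A ">" condition instead of "==": each order joins to every tier
# whose threshold its amount exceeds, so one row can match many rows.
joined = [
    {**o, **t}
    for o in orders
    for t in tiers
    if o["amount"] > t["threshold"]
]
# Order 1 (250) clears both thresholds; order 2 (40) clears none.
```

This pair-and-filter shape is also why the docs require one stream to be broadcast: without an equality key there is no hash-partitioned shortcut, so one side must be available in full on every worker.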

articles/data-factory/data-flow-lookup.md

Lines changed: 7 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.author: makromer
 ms.service: data-factory
 ms.topic: conceptual
 ms.custom: seo-lt-2019
-ms.date: 03/23/2020
+ms.date: 05/15/2020
 ---

 # Lookup transformation in mapping data flow
@@ -36,6 +36,12 @@ The lookup transformation only supports equality matches. To customize the looku

 All columns from both streams are included in the output data. To drop duplicate or unwanted columns, add a [select transformation](data-flow-select.md) after your lookup transformation. Columns can also be dropped or renamed in a sink transformation.

+### Non-equi joins
+
+To use a conditional operator such as not equals (!=) or greater than (>) in your lookup conditions, change the operator dropdown between the two columns. Non-equi joins require at least one of the two streams to be broadcast using **Fixed** broadcasting in the **Optimize** tab.
+
+![Non-equi lookup](media/data-flow/non-equi-lookup.png "Non-equi lookup")
+
 ## Analyzing matched rows

 After your lookup transformation, the function `isMatch()` can be used to see if the lookup matched for individual rows.
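Since a lookup behaves like a left outer join, the `isMatch()` idea can be sketched in plain Python (hypothetical SKU data; real data flows evaluate this in Spark):

```python
# Conceptual sketch of lookup + isMatch() semantics (not ADF code).
# The SKU/price data is invented for this example.
primary = [{"sku": "A"}, {"sku": "B"}]
reference = [{"sku": "A", "price": 9.99}]
by_sku = {row["sku"]: row for row in reference}

# Every primary row is kept (left outer join); matched rows pick up
# the reference columns, and a flag records whether a match existed.
result = [
    {**row, **by_sku.get(row["sku"], {}), "is_match": row["sku"] in by_sku}
    for row in primary
]
# "A" gains price 9.99 with is_match True; "B" stays unmatched.
```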

articles/data-factory/monitor-visually.md

Lines changed: 16 additions & 1 deletion
@@ -9,7 +9,7 @@ ms.reviewer: maghan
 ms.service: data-factory
 ms.workload: data-services
 ms.topic: conceptual
-ms.date: 11/19/2018
+ms.date: 05/15/2020
 ---

 # Visually monitor Azure Data Factory
@@ -127,6 +127,21 @@ You can also view rerun history for a particular pipeline run.

 ![View history for a pipeline run](media/monitor-visually/rerun-history-image2.png)

+## Monitor consumption
+
+You can see the resources consumed by a pipeline run by clicking the consumption icon next to the run.
+
+![Monitor consumption](media/monitor-visually/monitor-consumption-1.png)
+
+Clicking the icon opens a consumption report of the resources used by that pipeline run.
+
+![Monitor consumption report](media/monitor-visually/monitor-consumption-2.png)
+
+You can plug these values into the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of that pipeline run. For more information on Azure Data Factory pricing, see [Understanding pricing](pricing-concepts.md).
+
+> [!NOTE]
+> The values returned by the pricing calculator are an estimate. They don't reflect the exact amount you will be billed by Azure Data Factory.
+
 ## Gantt views

 Use Gantt views to quickly visualize your pipelines and activity runs.

articles/data-factory/source-control.md

Lines changed: 1 addition & 1 deletion
@@ -176,7 +176,7 @@ When you are ready to merge the changes from your feature branch to your collabo

 ### Configure publishing settings

-By default, data factory generates the Resource Manager templates of the published factory and saves them into a branch called `adf_public`. To configure a custom publish branch, add a `publish_config.json` file to the root folder in the collaboration branch. When publishing, ADF reads this file, looks for the field `publishBranch`, and saves all Resource Manager templates to the specified location. If the branch doesn't exist, data factory will automatically create it. And example of what this file looks like is below:
+By default, data factory generates the Resource Manager templates of the published factory and saves them into a branch called `adf_publish`. To configure a custom publish branch, add a `publish_config.json` file to the root folder in the collaboration branch. When publishing, ADF reads this file, looks for the field `publishBranch`, and saves all Resource Manager templates to the specified location. If the branch doesn't exist, data factory will automatically create it. An example of what this file looks like is below:

 ```json
 {
     "publishBranch": "factory/adf_publish"
 }
 ```
