Commit e6bc478

Update concepts-data-flow-performance.md
1 parent 1bb067b commit e6bc478

articles/data-factory/concepts-data-flow-performance.md

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ Setting throughput and batch properties on CosmosDB sinks only take effect durin
 
 ## Join performance
 
-Managing the performance of joins in your data flow is a very common operation that you will perform throughout the lifecycle of your data transformations. In ADF, data flows do not require data to be sorted prior to joins as these operations are performed as hash joins in Spark. However, you can benefit from improved performance with the "Broadcast" Join optimization. This will avoid shuffles by pushing down the conents of either side of your join relationship into the Spark node. This works well for smaller tables that are used for reference lookups. Larger tables that my not fit into the node's memory are not good candidates for broadcast optimization.
+Managing the performance of joins in your data flow is a very common operation that you will perform throughout the lifecycle of your data transformations. In ADF, data flows do not require data to be sorted prior to joins as these operations are performed as hash joins in Spark. However, you can benefit from improved performance with the "Broadcast" Join optimization. This will avoid shuffles by pushing down the contents of either side of your join relationship into the Spark node. This works well for smaller tables that are used for reference lookups. Larger tables that may not fit into the node's memory are not good candidates for broadcast optimization.
 
 Another Join optimization is to build your joins in such a way that it avoids Spark's tendency to implement cross joins. For example, when you include literal values in your join conditions, Spark may see that as a requirement to perform a full cartesian product first, then filter out the joined values. But if you ensure that you have column values on both sides of your join condition, you can avoid this Spark-induced cartesian product and improve the performance of your joins and data flows.
 
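The changed paragraph and the one after it make two performance points: broadcast the small side of a reference-lookup join to avoid a shuffle, and keep literal values out of join conditions so Spark does not plan a full cartesian product followed by a filter. Below is a minimal PySpark sketch, not part of this commit or the article, that illustrates both ideas; the table and column names (orders, country_lookup, region, and so on) are hypothetical.

```python
# Minimal PySpark sketch of the two join-performance points above.
# All table/column names are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, lit

spark = SparkSession.builder.appName("join-performance-sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "US", 100.0), (2, "DE", 55.0), (3, "US", 20.0)],
    ["order_id", "country", "amount"],
)
country_lookup = spark.createDataFrame(
    [("US", "Americas"), ("DE", "EMEA")],
    ["country", "region"],
)

# 1) Broadcast the small reference table so the join avoids a shuffle.
#    Only suitable when the lookup side comfortably fits in executor memory.
enriched = orders.join(broadcast(country_lookup), on="country", how="left")

# 2) Keep literals out of the join condition. Mixing a condition like
#    country_lookup["country"] == lit("US") into the join itself is the pattern
#    the article warns can push Spark toward a cartesian product plus a filter.
#    Instead, filter (or derive a column) first, then join column-to-column.
us_lookup = country_lookup.filter(country_lookup["country"] == lit("US"))
us_enriched = orders.join(broadcast(us_lookup), on="country", how="inner")

enriched.show()
us_enriched.show()
```

In the mapping data flow designer itself, the analogous knobs are the Broadcast option on the join transformation's Optimize tab and moving literal comparisons into a Filter or Derived Column transformation ahead of the join, so the join condition compares columns on both sides.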