
Commit f63a70e

table fix2
1 parent 367849f commit f63a70e

File tree

1 file changed: +2 −3 lines


articles/data-explorer/data-factory-integration.md

Lines changed: 2 additions & 3 deletions
@@ -105,14 +105,13 @@ The following table lists the required permissions for various steps in the inte
 If Azure Data Explorer is the source and you use the Lookup, copy, or command activity that contains a query where, refer to [query best practices](/azure/kusto/query/best-practices) for performance information and [ADF documentation for copy activity](/azure/data-factory/copy-activity-performance).

 This section addresses the use of copy activity where Azure Data Explorer is the sink. The estimated throughput for Azure Data Explorer sink is 11-13 MBps. The following table details the parameters influencing the performance of the Azure Data Explorer sink.
-
+
 | Parameter | Notes |
 |---|---|
 | **Components geographical proximity** | Place all components in the same region:<ul><li>source and sink data stores.</li><li>ADF integration runtime.</li><li>Your ADX cluster.</li></ul>Make sure that at least your integration runtime is in the same region as your ADX cluster. |
 | **Number of DIUs** | 1 VM for every 4 DIUs used by ADF. <br>Increasing the DIUs will help only if your source is a file-based store with multiple files. Each VM will then process a different file in parallel. Therefore, copying a single large file will have a higher latency than copying multiple smaller files.|
 |**Amount and SKU of your ADX cluster** | High number of ADX nodes will boost ingestion processing time.|
-| Parallelism | To copy a very large amount of data from a database, partition your data and then use a ForEach loop that copies each partition in parallel or use the [Bulk Copy from Database to Azure Data Explorer Template](data-factory-template.md).
-Note: **Settings** > **Degree of Parallelism** in the Copy activity isn't relevant to ADX.
+| **Parallelism** | To copy a very large amount of data from a database, partition your data and then use a ForEach loop that copies each partition in parallel or use the [Bulk Copy from Database to Azure Data Explorer Template](data-factory-template.md). Note: **Settings** > **Degree of Parallelism** in the Copy activity isn't relevant to ADX. |
 | **Data processing complexity** | Latency varies according to source file format, column mapping, and compression.|
 | **The VM running your integration runtime** | <ul><li>For Azure copy, ADF VMs and machine SKUs can't be changed.</li><li> For on-prem to Azure copy, determine that the VM hosting your self-hosted IR is strong enough.</li></ul>|
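The partition-and-ForEach pattern described in the **Parallelism** row above can be sketched outside ADF in a few lines. This is a minimal illustration only, not ADF's implementation: `copy_partition` is a hypothetical stand-in for one Copy activity run against one key-range partition, and the key range and partition count are assumed values.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an ADF Copy activity run against one partition.
def copy_partition(bounds):
    lo, hi = bounds
    # A real run would issue a range-filtered copy into the ADX sink,
    # e.g. source rows with lo <= key < hi. Here we just simulate it.
    return hi - lo  # rows "copied"

# Split an assumed key range [0, total) into equal slices, then copy each
# slice concurrently -- the same shape as a ForEach loop whose inner Copy
# activities run in parallel.
def parallel_copy(total, partitions):
    step = total // partitions
    ranges = [(i * step, (i + 1) * step) for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        return sum(pool.map(copy_partition, ranges))

print(parallel_copy(1_000_000, 8))
```

With non-overlapping, exhaustive partitions every source row is copied exactly once, which is the property the ForEach approach relies on.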
