
Commit e213d1a

Update connector-azure-sql-data-warehouse.md
1 parent 904201c commit e213d1a

File tree

1 file changed (+1, -1 lines changed)


articles/data-factory/connector-azure-sql-data-warehouse.md

Lines changed: 1 addition & 1 deletion
@@ -700,7 +700,7 @@ Settings specific to Azure Synapse Analytics are available in the **Source Options** tab
* SQL Example: ```Select * from MyTable where customerId > 1000 and customerId < 2000```

- **Batch size**: Enter a batch size to chunk large data into reads.
+ **Batch size**: Enter a batch size to chunk large data into reads. In data flows, ADF will use this setting to set Spark columnar caching. This is an optional field that will use Spark defaults if it is left blank.

**Isolation Level**: The default for SQL sources in mapping data flow is read uncommitted. You can change the isolation level here to one of these values:
* Read Committed
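
For context, the isolation level values listed in this hunk correspond to the standard T-SQL transaction isolation levels. Below is a minimal T-SQL sketch of the default (read uncommitted) and the Read Committed option, reusing the illustrative table and filter from the SQL example above; this is an illustration of the session-level equivalents, not the connector's implementation.

```sql
-- Illustrative only: a data flow source reads at the configured isolation
-- level; the effect is roughly what these session-level settings produce
-- when running the query directly in T-SQL.

-- Default for SQL sources in mapping data flow:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MyTable WHERE customerId > 1000 AND customerId < 2000;

-- Session setting corresponding to the Read Committed option:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM MyTable WHERE customerId > 1000 AND customerId < 2000;
```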

0 commit comments