docs/data-engineering/how-to-use-notebook.md (8 additions, 2 deletions)
@@ -2,8 +2,8 @@
 title: How to use notebooks
 description: Learn how to create a new notebook, import an existing notebook, connect notebooks to lakehouses, collaborate in notebooks, and comment code cells.
 ms.reviewer: jingzh
-ms.author: eur
-author: eric-urban
+ms.author: jingzh
+author: JeneZhang
 ms.topic: how-to
 ms.custom: sfi-image-nochange
 ms.search.form: Create and use notebooks
@@ -241,6 +241,12 @@ Version history allows you to easily version your live notebook changes. It supp
 - System checkpoint: These checkpoints are created automatically by the notebook system every 5 minutes of editing time, ensuring that your work is consistently saved and versioned. You can find the modification records from all contributors in the system checkpoint timeline list.
+
+Fabric notebooks seamlessly integrate with Git, deployment pipelines, and Visual Studio Code. Each saved version is automatically captured in the notebook's version history. Versions can originate from direct edits within the notebook, Git synchronizations, deployment pipeline activities, or publishing via VS Code. The source of each version is clearly labeled in version history to provide full traceability.
+
+:::image type="content" source="media\how-to-use-notebook\multi-source-checkpoint.png" alt-text="Screenshot showing multi-source checkpoint for notebook version history." lightbox="media\how-to-use-notebook\multi-source-checkpoint.png":::
+
 1. You can click a checkpoint to open the **diff view**, which highlights the content differences between the selected checkpoint and the current live version, including differences in cell content, cell output, and metadata. The version of this checkpoint can be managed individually in the **'more options'** menu.
docs/data-factory/connector-lakehouse-copy-activity.md (16 additions, 16 deletions)
@@ -4,7 +4,7 @@ description: This article explains how to copy data using Lakehouse.
 author: jianleishen
 ms.author: jianleishen
 ms.topic: how-to
-ms.date: 10/09/2025
+ms.date: 02/02/2026
 ms.custom:
 - pipelines
 - template-how-to
@@ -65,7 +65,7 @@ The following properties are **required**:
 - **Root folder**: Select **Tables** or **Files**, which indicates the virtual view of the managed or unmanaged area in your lake. For more information, refer to [Lakehouse introduction](../data-engineering/lakehouse-overview.md).

 - If you select **Tables**:
-- **Use query**: Select from **Table** or **T-SQL Query**.
+- **Use query**: Select from **Table** or **T-SQL Query (Preview)**.
 - If you select **Table**:
 - **Table**: Choose an existing table from the table list or specify a table name as the source. Or you can select **New** to create a new table.
@@ -80,8 +80,8 @@ The following properties are **required**:
 - **Version**: Specify to query an older snapshot by version.
 - **Additional columns**: Add additional data columns to store source files' relative path or a static value. Expressions are supported for the latter.

-- If you select **T-SQL Query**:
-- **T-SQL Query**: Specify the custom SQL query to read data through the [Lakehouse SQL analytics endpoint](../data-engineering/lakehouse-sql-analytics-endpoint.md). For example: `SELECT * FROM MyTable`. Note that Lakehouse table query mode does not support workspace-level private links.
+- If you select **T-SQL Query (Preview)**:
+- **T-SQL Query (Preview)**: Specify the custom SQL query to read data through the [Lakehouse SQL analytics endpoint](../data-engineering/lakehouse-sql-analytics-endpoint.md). For example: `SELECT * FROM MyTable`. Note that Lakehouse table query mode does not support workspace-level private links.
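
To make the **T-SQL Query (Preview)** option in the hunk above more concrete, here's a minimal sketch of a custom source query against the SQL analytics endpoint. Only the `SELECT * FROM MyTable` shape comes from the connector description; the table and column names below are hypothetical.

```sql
-- Minimal sketch of a custom source query for the T-SQL Query (Preview) option.
-- dbo.SalesOrders and its columns are hypothetical; replace them with your own table.
SELECT OrderId,
       CustomerId,
       OrderDate,
       TotalAmount
FROM   dbo.SalesOrders
WHERE  OrderDate >= '2025-01-01'
  AND  OrderStatus = 'Shipped';
```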
@@ -93,11 +93,11 @@ The following properties are **required**:

 If you select **Dynamic range**, when using a query with parallel copy enabled, the range partition parameter (`?DfDynamicRangePartitionCondition`) is needed. Sample query: `SELECT * FROM <TableName> WHERE ?DfDynamicRangePartitionCondition`.
 - **Partition column name**: Specify the name of the source column in **integer** type that's used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is auto-detected and used as the partition column.

-If you use a query to retrieve the source data, hook `?DfDynamicRangePartitionCondition` in the WHERE clause. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+If you use a query to retrieve the source data, hook `?DfDynamicRangePartitionCondition` in the WHERE clause. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

-- **Partition upper bound**: Specify the maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result are partitioned and copied. If not specified, the copy activity auto-detects the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+- **Partition upper bound**: Specify the maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result are partitioned and copied. If not specified, the copy activity auto-detects the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

-- **Partition lower bound**: Specify the minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result are partitioned and copied. If not specified, the copy activity auto-detects the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+- **Partition lower bound**: Specify the minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result are partitioned and copied. If not specified, the copy activity auto-detects the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

 :::image type="content" source="./media/connector-lakehouse/dynamic-range.png" alt-text="Screenshot showing the configuration when you select Dynamic range." lightbox="./media/connector-lakehouse/dynamic-range.png":::
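
As a hedged illustration of the **Dynamic range** option in the hunk above, the sketch below shows where `?DfDynamicRangePartitionCondition` sits in a source query. The placeholder itself is documented; the table, columns, and the extra filter are hypothetical.

```sql
-- Sketch of a source query for parallel copy with Dynamic range partitioning.
-- At run time the service replaces ?DfDynamicRangePartitionCondition with a range
-- filter on the partition column; dbo.SalesOrders and its columns are hypothetical.
SELECT OrderId,
       CustomerId,
       TotalAmount
FROM   dbo.SalesOrders
WHERE  ?DfDynamicRangePartitionCondition
  AND  OrderDate >= '2025-01-01';
```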
@@ -266,11 +266,11 @@ When copying data to Lakehouse tables in Table mode, the following mappings are
 | Byte array | binary |
 | Decimal | decimal |

-### T-SQL Query
+### T-SQL Query (Preview)

-When copying data from Lakehouse tables in T-SQL Query mode, the following mappings are used from Lakehouse table data types to interim data types used by the service internally.
+When copying data from Lakehouse tables in T-SQL Query (Preview) mode, the following mappings are used from Lakehouse table data types to interim data types used by the service internally.

-| Lakehouse table data type in T-SQL Query mode | Interim service data type |
+| Lakehouse table data type in T-SQL Query (Preview) mode | Interim service data type |
 |---------------------|------------------|
 | int | Int32 |
 | varchar | String |
@@ -284,13 +284,13 @@ When copying data from Lakehouse tables in T-SQL Query mode, the following mappi
 | date | Date |
 | datetime2 | DateTime |

-## Parallel copy from Lakehouse tables using T-SQL Query
+## <a name="parallel-copy-from-lakehouse-tables-using-t-sql-query"></a> Parallel copy from Lakehouse tables using T-SQL Query (Preview)

-The Lakehouse tables connector using T-SQL Query in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
+The Lakehouse tables connector using T-SQL Query (Preview) in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.

-When you enable partitioned copy, copy activity runs parallel queries against your Lakehouse tables using T-SQL Query source to load data by partitions. The parallel degree is controlled by the **Degree of copy parallelism** setting in the copy activity settings tab. For example, if you set **Degree of copy parallelism** to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Lakehouse tables using T-SQL Query.
+When you enable partitioned copy, copy activity runs parallel queries against your Lakehouse tables using T-SQL Query (Preview) source to load data by partitions. The parallel degree is controlled by the **Degree of copy parallelism** setting in the copy activity settings tab. For example, if you set **Degree of copy parallelism** to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Lakehouse tables using T-SQL Query (Preview).

-We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Lakehouse tables using T-SQL Query. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (specify only the folder name), in which case the performance is better than writing to a single file.
+We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Lakehouse tables using T-SQL Query (Preview). The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (specify only the folder name), in which case the performance is better than writing to a single file.
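
To make the partitioned-copy behavior described in the hunk above more tangible, here's a purely conceptual sketch of the range-scoped queries the service might issue when **Degree of copy parallelism** is four. The table, column, and bounds are hypothetical, and the exact SQL the service generates may differ.

```sql
-- Conceptual illustration only: with partition column OrderId, lower bound 1,
-- upper bound 1000, and Degree of copy parallelism = 4, the service issues four
-- range-scoped queries roughly like these (exact generated SQL may differ).
SELECT * FROM dbo.SalesOrders WHERE OrderId >= 1   AND OrderId <= 250;
SELECT * FROM dbo.SalesOrders WHERE OrderId >= 251 AND OrderId <= 500;
SELECT * FROM dbo.SalesOrders WHERE OrderId >= 501 AND OrderId <= 750;
SELECT * FROM dbo.SalesOrders WHERE OrderId >= 751 AND OrderId <= 1000;
```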
@@ -336,11 +336,11 @@ The following tables contain more information about a copy activity in Lakehouse
 |:---|:---|:---|:---|:---|
 |**Connection**|The section to select your connection.|\<your Lakehouse connection> |Yes|workspaceId<br>itemId|
 |**Root folder**|The type of the root folder.|• **Tables**<br>• **Files**|No|rootFolder:<br>Tables or Files|
-|**Use query**|The way to read data from Lakehouse. Apply **Table** to read data from the specified table or apply **T-SQL Query** to read data using a query.|• **Table** <br>• **T-SQL Query**|Yes |/|
+|**Use query**|The way to read data from Lakehouse. Apply **Table** to read data from the specified table or apply **T-SQL Query (Preview)** to read data using a query.|• **Table** <br>• **T-SQL Query (Preview)**|Yes |/|
 |**Table**|The name of the table that you want to read data from, or the name of the table with a schema that you want to read data from when you apply Lakehouse with schemas as the connection. |\<your table name> |Yes when you select **Tables** in **Root folder**| table |
 |**schema name**| Name of the schema. |\<your schema name> | No | schema |
 |**table name**| Name of the table. |\<your table name> | No |table |
-|**T-SQL Query**| Use the custom query to read data. An example is `SELECT * FROM MyTable`. |\<query> |No | sqlReaderQuery|
+|**T-SQL Query (Preview)**| Use the custom query to read data. An example is `SELECT * FROM MyTable`. |\<query> |No | sqlReaderQuery|
 |**Timestamp**| The timestamp to query an older snapshot.|\<timestamp>|No |timestampAsOf |
 |**Version**|The version to query an older snapshot.|\<version>|No |versionAsOf|
 |**Query timeout (minutes)**|The timeout for query command execution; the default is 120 minutes.|timespan |No |queryTimeout|