
Commit 30d0cdc

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/fabric-docs-pr (branch live)
2 parents 680377e + 9fd97d5 commit 30d0cdc

File tree

7 files changed: +24 -18 lines changed

docs/data-engineering/how-to-use-notebook.md

Lines changed: 8 additions & 2 deletions
@@ -2,8 +2,8 @@
 title: How to use notebooks
 description: Learn how to create a new notebook, import an existing notebook, connect notebooks to lakehouses, collaborate in notebooks, and comment code cells.
 ms.reviewer: jingzh
-ms.author: eur
-author: eric-urban
+ms.author: jingzh
+author: JeneZhang
 ms.topic: how-to
 ms.custom: sfi-image-nochange
 ms.search.form: Create and use notebooks
@@ -241,6 +241,12 @@ Version history allows you to easily version your live notebook changes. It supp
 - System checkpoint: These checkpoints are created automatically every 5 minutes based on editing time interval by Notebook system, ensuring that your work is consistently saved and versioned. You can find the modification records from all the contributors in the system checkpoint timeline list.
     :::image type="content" source="media\how-to-use-notebook\expand-system-checkpoint.png" alt-text="Screenshot showing expand checkpoint list." lightbox="media\how-to-use-notebook\expand-system-checkpoint.png":::

+1. Multi-Source Checkpointing for Notebook
+
+   Fabric notebooks seamlessly integrate with Git, deployment pipelines, and Visual Studio Code. Each saved version is automatically captured in the notebook’s version history. Versions may originate from direct edits within the notebook, Git synchronizations, deployment pipeline activities, or publishing via VS Code. The source of each version is clearly labeled in version history to provide full traceability.
+
+   :::image type="content" source="media\how-to-use-notebook\multi-source-checkpoint.png" alt-text="Screenshot showing multi-source checkpoint for notebook version history." lightbox="media\how-to-use-notebook\multi-source-checkpoint.png":::
+
 1. You can click on a checkpoint to open the **diff view**, it highlights the content differences between the selected checkpoint and the current live version, including the differences of cell content, cell output, and metadata. The version of this checkpoint can be managed individually in **'more options'** menu.

    :::image type="content" source="media\how-to-use-notebook\checkpoint-diff-view.png" alt-text="Screenshot showing view diff." lightbox="media\how-to-use-notebook\checkpoint-diff-view.png":::
Binary image file changed (419 KB).
docs/data-factory/connector-lakehouse-copy-activity.md

Lines changed: 16 additions & 16 deletions
@@ -4,7 +4,7 @@ description: This article explains how to copy data using Lakehouse.
 author: jianleishen
 ms.author: jianleishen
 ms.topic: how-to
-ms.date: 10/09/2025
+ms.date: 02/02/2026
 ms.custom:
   - pipelines
   - template-how-to
@@ -65,7 +65,7 @@ The following properties are **required**:
 - **Root folder**: Select **Tables** or **Files**, which indicates the virtual view of the managed or unmanaged area in your lake. For more information, refer to [Lakehouse introduction](../data-engineering/lakehouse-overview.md).

   - If you select **Tables**:
-    - **Use query**: Select from **Table** or **T-SQL Query**.
+    - **Use query**: Select from **Table** or **T-SQL Query (Preview)**.
     - If you select **Table**:
       - **Table**: Choose an existing table from the table list or specify a table name as the source. Or you can select **New** to create a new table.

@@ -80,8 +80,8 @@ The following properties are **required**:
       - **Version**: Specify to query an older snapshot by version.
       - **Additional columns**: Add additional data columns to the store source files' relative path or static value. Expression is supported for the latter.

-    - If you select **T-SQL Query**:
-      - **T-SQL Query**: Specify the custom SQL query to read data through the [Lakehouse SQL analytics endpoint](../data-engineering/lakehouse-sql-analytics-endpoint.md). For example: `SELECT * FROM MyTable`. Note that Lakehouse table query mode does not support workspace-level private links.
+    - If you select **T-SQL Query (Preview)**:
+      - **T-SQL Query (Preview)**: Specify the custom SQL query to read data through the [Lakehouse SQL analytics endpoint](../data-engineering/lakehouse-sql-analytics-endpoint.md). For example: `SELECT * FROM MyTable`. Note that Lakehouse table query mode does not support workspace-level private links.

       :::image type="content" source="./media/connector-lakehouse/use-query-t-sql-query.png" alt-text="Screenshot showing Use query - T-SQL Query." :::

@@ -93,11 +93,11 @@ The following properties are **required**:

       If you select **Dynamic range**, when using query with parallel enabled, range partition parameter(`?DfDynamicRangePartitionCondition`) is needed. Sample query: `SELECT * FROM <TableName> WHERE ?DfDynamicRangePartitionCondition`.
       - **Partition column name**: Specify the name of the source column in **integer** type that's used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is auto-detected and used as the partition column.
-        If you use a query to retrieve the source data, hook `?DfDynamicRangePartitionCondition` in the WHERE clause. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+        If you use a query to retrieve the source data, hook `?DfDynamicRangePartitionCondition` in the WHERE clause. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

-      - **Partition upper bound**: Specify the maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+      - **Partition upper bound**: Specify the maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

-      - **Partition lower bound**: Specify the minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.
+      - **Partition lower bound**: Specify the minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value. For an example, see the [Parallel copy from Lakehouse tables using T-SQL Query (Preview)](#parallel-copy-from-lakehouse-tables-using-t-sql-query) section.

       :::image type="content" source="./media/connector-lakehouse/dynamic-range.png" alt-text="Screenshot showing the configuration when you select Dynamic range." lightbox="./media/connector-lakehouse/dynamic-range.png":::
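The dynamic-range mechanics in this hunk can be made concrete with a short sketch. This is an illustrative assumption of mine, not the service's actual implementation: it shows how a lower bound, an upper bound, and a degree of parallelism could be turned into per-partition queries by substituting a range condition for `?DfDynamicRangePartitionCondition`.

```python
def build_partition_queries(query: str, column: str, lower: int,
                            upper: int, parallelism: int) -> list[str]:
    """Split [lower, upper] into `parallelism` contiguous ranges and
    substitute each range for ?DfDynamicRangePartitionCondition."""
    # The bounds decide the partition stride; they do not filter rows.
    # Ceiling division so the last partition still reaches `upper`.
    stride = (upper - lower + parallelism) // parallelism
    queries = []
    for i in range(parallelism):
        lo = lower + i * stride
        hi = min(lo + stride - 1, upper)
        condition = f"{column} >= {lo} AND {column} <= {hi}"
        queries.append(query.replace("?DfDynamicRangePartitionCondition", condition))
    return queries

# Hypothetical example: ID spans 1..100, Degree of copy parallelism = 4.
queries = build_partition_queries(
    "SELECT * FROM MyTable WHERE ?DfDynamicRangePartitionCondition",
    column="ID", lower=1, upper=100, parallelism=4,
)
```

Each generated query covers a disjoint slice of the column range, so every row is copied exactly once, consistent with the doc's note that the bounds decide the stride rather than filter rows.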
@@ -266,11 +266,11 @@ When copying data to Lakehouse tables in Table mode, the following mappings are
 | Byte array | binary |
 | Decimal | decimal |

-### T-SQL Query
+### T-SQL Query (Preview)

-When copying data from Lakehouse tables in T-SQL Query mode, the following mappings are used from Lakehouse table data types to interim data types used by the service internally.
+When copying data from Lakehouse tables in T-SQL Query (Preview) mode, the following mappings are used from Lakehouse table data types to interim data types used by the service internally.

-| Lakehouse table data type in T-SQL Query mode | Interim service data type |
+| Lakehouse table data type in T-SQL Query (Preview) mode | Interim service data type |
 |---------------------|------------------|
 | int | Int32 |
 | varchar | String |
@@ -284,13 +284,13 @@ When copying data from Lakehouse tables in T-SQL Query mode, the following mappi
 | date | Date |
 | datetime2 | DateTime |

-## Parallel copy from Lakehouse tables using T-SQL Query
+## <a name="parallel-copy-from-lakehouse-tables-using-t-sql-query"></a> Parallel copy from Lakehouse tables using T-SQL Query (Preview)

-The Lakehouse tables connector using T-SQL Query in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
+The Lakehouse tables connector using T-SQL Query (Preview) in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.

-When you enable partitioned copy, copy activity runs parallel queries against your Lakehouse tables using T-SQL Query source to load data by partitions. The parallel degree is controlled by the **Degree of copy parallelism** in the copy activity settings tab. For example, if you set **Degree of copy parallelism** to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Lakehouse tables using T-SQL Query.
+When you enable partitioned copy, copy activity runs parallel queries against your Lakehouse tables using T-SQL Query (Preview) source to load data by partitions. The parallel degree is controlled by the **Degree of copy parallelism** in the copy activity settings tab. For example, if you set **Degree of copy parallelism** to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Lakehouse tables using T-SQL Query (Preview).

-You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Lakehouse tables using T-SQL Query. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.
+You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Lakehouse tables using T-SQL Query (Preview). The following are suggested configurations for different scenarios. When copying data into file-based data store, it's recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a single file.

 | Scenario | Suggested settings |
 | ------------------------------------------------------------ | ------------------------------------------------------------ |
@@ -336,11 +336,11 @@ The following tables contain more information about a copy activity in Lakehouse
 |:---|:---|:---|:---|:---|
 |**Connection** |The section to select your connection.|< your Lakehouse connection>|Yes|workspaceId<br>itemId|
 |**Root folder** |The type of the root folder.|**Tables**<br>• **Files** |No|rootFolder:<br>Tables or Files|
-|**Use query** |The way to read data from Lakehouse. Apply **Table** to read data from the specified table or apply **T-SQL Query** to read data using query.|**Table** <br>• **T-SQL Query** |Yes |/|
+|**Use query** |The way to read data from Lakehouse. Apply **Table** to read data from the specified table or apply **T-SQL Query (Preview)** to read data using query.|**Table** <br>• **T-SQL Query (Preview)** |Yes |/|
 |**Table** |The name of the table that you want to read data, or the name of the table with a schema that you want to read data when you apply Lakehouse with schemas as the connection. |\<your table name> |Yes when you select **Tables** in **Root folder** | table |
 | **schema name** | Name of the schema. |< your schema name > | No | schema |
 | **table name** | Name of the table. | < your table name > | No |table |
-| **T-SQL Query** | Use the custom query to read data. An example is `SELECT * FROM MyTable`. | < query > |No | sqlReaderQuery|
+| **T-SQL Query (Preview)** | Use the custom query to read data. An example is `SELECT * FROM MyTable`. | < query > |No | sqlReaderQuery|
 |**Timestamp** | The timestamp to query an older snapshot.| \<timestamp>|No |timestampAsOf |
 |**Version** |The version to query an older snapshot.| \<version>|No |versionAsOf|
 |**Query timeout (minutes)**|The timeout for query command execution, default is 120 minutes.|timespan |No |queryTimeout|
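To show how the script-side property names documented in the table above relate to one another, here is a hedged Python sketch. Only the property names (`rootFolder`, `schema`, `table`, `sqlReaderQuery`, `timestampAsOf`, `versionAsOf`, `queryTimeout`) come from the table; the surrounding dict shape and the sample values are my assumptions, not the documented pipeline payload schema.

```python
# Hypothetical source settings for T-SQL Query (Preview) mode, assembled
# from the property names in the table above; shape is illustrative only.
tsql_query_source = {
    "rootFolder": "Tables",                     # "Tables" or "Files"
    "sqlReaderQuery": "SELECT * FROM MyTable",  # custom query to read data
    "queryTimeout": "02:00:00",                 # documented default: 120 minutes
}

# Table mode instead names the table (and optionally its schema) directly,
# and can pin an older snapshot via timestampAsOf or versionAsOf.
table_mode_source = {
    "rootFolder": "Tables",
    "schema": "dbo",            # hypothetical schema name
    "table": "MyTable",         # hypothetical table name
    "versionAsOf": 3,           # hypothetical snapshot version
}
```

The two dicts mirror the table's "Yes/No required" column: `sqlReaderQuery` applies only in T-SQL Query (Preview) mode, while `table` is required when **Tables** is the root folder in Table mode.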
Binary image files changed (-4.06 KB, -1.6 KB, -1.59 KB, 2.92 KB).

0 commit comments
