@@ -661,22 +661,21 @@ The URI of the data file(s) whose data is to be read and returned as row set. Th
 
 The URI may include the `*` character to match any sequence of characters, allowing `OPENROWSET` to pattern-match against the URI. Additionally, it can end with `/**` to enable recursive traversal through all subfolders.
 
-The URI of the data file(s) whose data is to be read and returned as row set. The URI can reference Azure Data Lake storage or Azure Blob storage.
-
 You can use `OPENROWSET(BULK)` to read data directly from files stored in the Fabric OneLake, specifically from the **Files folder** of a Fabric Lakehouse. This eliminates the need for external staging accounts (such as ADLS Gen2 or Blob Storage) and enables workspace-governed, SaaS-native ingestion using Fabric permissions. This functionality supports:
+
 - Reading from `Files` folders in Lakehouses
 - Workspace-to-warehouse loads within the same tenant
 - Native identity enforcement using Microsoft Entra ID
 
 > [!NOTE]
-> Fabric OneLake storage is in [preview](/fabric/fundamentals/preview). See the [limitations](../statements/copy-into-transact-sql.md#limitations-for-onelake-as-source-public-preview) that are applicable both to `COPY INTO` and `OPENROWSET(BULK)`.
+> Fabric OneLake storage is in [preview](/fabric/fundamentals/preview). See the [limitations](../statements/copy-into-transact-sql.md#limitations-for-onelake-as-source) that are applicable both to `COPY INTO` and `OPENROWSET(BULK)`.
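For illustration, a minimal `OPENROWSET(BULK)` query over a Lakehouse `Files` path might look like the following sketch; the workspace ID, lakehouse ID, and file path are hypothetical placeholders, and `FORMAT` is spelled out explicitly for clarity:

```sql
-- Read a Parquet file directly from the Files folder of a Fabric Lakehouse.
-- <workspaceId>, <lakehouseId>, and the file path are placeholders.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/sales/orders.parquet',
    FORMAT = 'PARQUET'
) AS [data];
```

No `CREDENTIAL` is involved here: access is evaluated against the caller's Microsoft Entra ID and Fabric workspace roles.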
docs/t-sql/statements/copy-into-transact-sql.md (16 additions, 17 deletions)
@@ -5,7 +5,7 @@ description: Use the COPY statement in Azure Synapse Analytics and Warehouse in
 author: WilliamDAssafMSFT
 ms.author: wiassaf
 ms.reviewer: procha, mikeray, fresantos
-ms.date: 07/28/2025
+ms.date: 07/29/2025
 ms.service: sql
 ms.subservice: t-sql
 ms.topic: reference
@@ -549,7 +549,7 @@ Follow these steps to work around this issue by re-registering the workspace's m
 
 This article explains how to use the COPY statement in [!INCLUDE [fabricdw](../../includes/fabric-dw.md)] in [!INCLUDE [fabric](../../includes/fabric.md)] for loading from external storage accounts. The COPY statement provides the most flexibility for high-throughput data ingestion into your [!INCLUDE [fabricdw](../../includes/fabric-dw.md)], and is a strategy to [Ingest data into your [!INCLUDE [fabricdw](../../includes/fabric-dw.md)]](/fabric/data-warehouse/ingest-data).
 
-In [!INCLUDE [fabric](../../includes/fabric.md)], the [COPY (Transact-SQL)](/sql/t-sql/statements/copy-into-transact-sql?view=fabric&preserve-view=true) statement currently supports the PARQUET and CSV file formats. For data sources, Azure Data Lake Storage Gen2 accounts and OneLake sources are supported.
+In [!INCLUDE [fabric](../../includes/fabric.md)], the [COPY (Transact-SQL)](/sql/t-sql/statements/copy-into-transact-sql?view=fabric&preserve-view=true) statement currently supports the PARQUET and CSV file formats. For data sources, Azure Data Lake Storage Gen2 accounts, and OneLake sources are supported.
 
 For more information on using COPY INTO on your [!INCLUDE [fabricdw](../../includes/fabric-dw.md)] in [!INCLUDE [fabric](../../includes/fabric.md)], see [Ingest data into your [!INCLUDE [fabricdw](../../includes/fabric-dw.md)] using the COPY statement](/fabric/data-warehouse/ingest-data-copy).
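As a quick illustration of those formats and sources, a minimal `COPY INTO` from Parquet files staged in ADLS Gen2 might be sketched as follows; the account, container, path, and table names are hypothetical:

```sql
-- Load all Parquet files under a folder into an existing warehouse table.
-- Account, container, path, and table names are placeholders.
COPY INTO dbo.Orders
FROM 'https://myaccount.dfs.core.windows.net/mycontainer/orders/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET'
);
```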
@@ -631,11 +631,11 @@ When a column list isn't specified, COPY maps columns based on the source and ta
 
 #### *External location*
 
-Specifies where the files containing the data is staged. Currently Azure Data Lake Storage (ADLS) Gen2, Azure Blob Storage and OneLake (Preview) are supported:
+Specifies where the files containing the data are staged. Currently, Azure Data Lake Storage (ADLS) Gen2, Azure Blob Storage, and OneLake (Preview) are supported:
 
 - *External location* for Blob Storage: `https://<account\>.blob.core.windows.net/<container\>/<path\>`
 - *External location* for ADLS Gen2: `https://<account\>.dfs.core.windows.net/<container\>/<path\>`
-- *External location* for OneLake (Preview): `'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/'`
+- *External location* for OneLake (Preview): `https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/`
 
 Azure Data Lake Storage (ADLS) Gen2 offers better performance than Azure Blob Storage (legacy). Consider using an ADLS Gen2 account whenever possible.
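Using the OneLake URI format above, a `COPY INTO` that reads CSV files from a Lakehouse `Files` folder might be sketched as follows; the GUIDs, folder, and table name are placeholders, and no `CREDENTIAL` clause is given because OneLake sources rely on the caller's Microsoft Entra ID:

```sql
-- Load CSV files staged in a Fabric Lakehouse Files folder into a warehouse table.
-- <workspaceId>, <lakehouseId>, the folder, and the table name are placeholders.
COPY INTO dbo.Sales
FROM 'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/sales/'
WITH (
    FILE_TYPE = 'CSV',
    FIRSTROW = 2  -- skip a header row
);
```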
@@ -678,10 +678,13 @@ To access files on Azure Data Lake Storage (ADLS) Gen2 and Azure Blob Storage lo
 
 #### *CREDENTIAL (IDENTITY = '', SECRET = '')*
 
-*CREDENTIAL* specifies the authentication mechanism to access the external storage account. On [!INCLUDE [fabric-dw](../../includes/fabric-dw.md)] in [!INCLUDE [fabric](../../includes/fabric.md)], the only supported authentication mechanisms are Shared Access Signature (SAS) and Storage Account Key (SAK). User's EntraID authentication is default, no credential needs to be specified.
+*CREDENTIAL* specifies the authentication mechanism to access the external storage account. On [!INCLUDE [fabric-dw](../../includes/fabric-dw.md)] in [!INCLUDE [fabric](../../includes/fabric.md)], the only supported authentication mechanisms are Shared Access Signature (SAS) and Storage Account Key (SAK).
+
+The user's EntraID authentication is the default; no credential needs to be specified. COPY INTO using OneLake as source only supports EntraID authentication.
 
 > [!NOTE]
 > When using a public storage account, CREDENTIAL does not need to be specified. By default the executing user's Entra ID is used.
+
 - Authenticating with Shared Access Signature (SAS)
 
 - *IDENTITY: A constant with a value of 'Shared Access Signature'*
@@ -693,9 +696,6 @@ To access files on Azure Data Lake Storage (ADLS) Gen2 and Azure Blob Storage lo
 - *IDENTITY: A constant with a value of 'Storage Account Key'*
 - *SECRET: Storage account key*
 
-> [!NOTE]
-> COPY INTO using OneLake as source only supports EntraID authentication.
-
 #### *ERRORFILE = Directory Location*
 
 *ERRORFILE* only applies to CSV. Specifies the directory where the rejected rows and the corresponding error file should be written. The full path from the storage account can be specified or the path relative to the container can be specified. If the specified path doesn't exist, one is created on your behalf. A child directory is created with the name "\_rejectedrows". The "\_" character ensures that the directory is escaped for other data processing unless explicitly named in the location parameter.
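Tying the *CREDENTIAL* and *ERRORFILE* options together, a `COPY INTO` against a nonpublic storage account might be sketched as follows; the account, container, SAS token, and table name are hypothetical:

```sql
-- Authenticate with a Shared Access Signature and redirect rejected CSV rows
-- to an error directory. All names and the token value are placeholders.
COPY INTO dbo.Trips
FROM 'https://myaccount.dfs.core.windows.net/mycontainer/trips/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>'),
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A',
    ERRORFILE = '/errors/'  -- rejected rows land under an "_rejectedrows" child directory
);
```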
@@ -824,9 +824,6 @@ WITH (
 );
 ```
 
-> [!NOTE]
-> This feature is currently in [preview](/fabric/fundamentals/preview).
-
 ## Permissions
 
 ### Control plane permissions
@@ -846,18 +843,20 @@ GO
 GRANT INSERT to [mike@contoso.com];
 GO
 ```
-> [!NOTE]
-> When using the *ErrorFile* option, the user must have the minimal permission of Blob Storage Contributor on the Storage Account container.
 
-> [!NOTE]
-> When using OneLake as the source (Public Preview), the user must have **Contributor** or higher permissions on both the **source workspace** (where the Lakehouse is located) and the **target workspace** (where the Warehouse resides).
-> All access is governed via Microsoft Entra ID and Fabric workspace roles.
+When using the *ErrorFile* option, the user must have the minimal permission of Blob Storage Contributor on the Storage Account container.
+
+When using OneLake as the source, the user must have **Contributor** or higher permissions on both the **source workspace** (where the Lakehouse is located) and the **target workspace** (where the Warehouse resides). All access is governed via Microsoft Entra ID and Fabric workspace roles.
 
 ## Remarks
 
 The COPY statement accepts only UTF-8 and UTF-16 valid characters for row data and command parameters. Source files or parameters (such as `ROW TERMINATOR` or `FIELD TERMINATOR`) that use invalid characters might be interpreted incorrectly by the COPY statement and cause unexpected results such as data corruption, or other failures. Make sure your source files and parameters are UTF-8 or UTF-16 compliant before you invoke the COPY statement.
 
-## Limitations for OneLake as source (Public Preview)
+## Limitations for OneLake as source
+
+Fabric OneLake storage as a source for both `COPY INTO` and `OPENROWSET(BULK)` is a [preview feature](/fabric/fundamentals/preview).
 
 - **Only Microsoft Entra ID authentication is supported.** Other authentication methods, such as SAS tokens, shared keys, or connection strings, are not permitted.
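To make that restriction concrete, the commented-out sketch below would be rejected because it supplies a SAS credential for a OneLake source, while the uncommented form relies on the caller's Entra ID; the GUIDs and names are placeholders:

```sql
-- Not permitted for OneLake sources: explicit credentials such as SAS tokens.
-- COPY INTO dbo.Sales
-- FROM 'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/sales/'
-- WITH (FILE_TYPE = 'PARQUET',
--       CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>'));

-- Supported: omit CREDENTIAL so the executing user's Microsoft Entra ID is used.
COPY INTO dbo.Sales
FROM 'https://onelake.dfs.fabric.microsoft.com/<workspaceId>/<lakehouseId>/Files/sales/'
WITH (FILE_TYPE = 'PARQUET');
```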