- You should be able to ACL all folders from "synapse" and downward from the Azure portal. To ACL the root "/" folder, follow the instructions below.
- Connect to the storage account associated with the workspace from Storage Explorer, using AAD.
- Select your account and provide the ADLS Gen2 URL and the default file system for the workspace.

To alter missing permissions for others, you need to be the Storage Blob Data Owner.

## Next steps

[Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool.md)
`articles/synapse-analytics/sql/create-use-external-tables.md` (+27 −23 lines)
In this section, you'll learn how to create and use external tables in SQL on-demand.

Your first step is to create a database where the tables will be created. Then initialize the objects by executing the [setup script](https://github.com/Azure-Samples/Synapse/blob/master/SQL/Samples/LdwSample/SampleDB.sql) on that database. The setup script creates the following objects that are used in this sample:

- A DATABASE SCOPED CREDENTIAL `sqlondemand` that enables access to the SAS-protected `https://sqlondemandstorage.blob.core.windows.net` Azure storage account.
- An EXTERNAL DATA SOURCE `sqlondemanddemo` that references the demo storage account protected with a SAS key, and an EXTERNAL DATA SOURCE `YellowTaxi` that references the publicly available Azure storage account at `https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/`.
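For reference, a minimal sketch of what the credential definition in the setup script might look like. The master key password and the SAS token below are placeholders, not values from the script:

```sql
-- A master key is typically required once per database before creating credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- SAS-based credential; the SECRET here is a placeholder, not a working token.
CREATE DATABASE SCOPED CREDENTIAL sqlondemand
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';
```

The `SqlOnDemandDemo` data source created by the script then references this credential: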
```sql
CREATE EXTERNAL DATA SOURCE SqlOnDemandDemo WITH (
    LOCATION = 'https://sqlondemandstorage.blob.core.windows.net',
    CREDENTIAL = sqlondemand
);
```
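Once a data source exists, it can back an external table. A hypothetical sketch follows; the file format name, table name, columns, and LOCATION pattern are illustrative, not taken from the setup script:

```sql
-- Parquet file format definition used by the external table.
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH ( FORMAT_TYPE = PARQUET );

-- External table over the public YellowTaxi data source from the setup script.
CREATE EXTERNAL TABLE dbo.YellowTaxiRides (
    passengerCount INT,
    tripDistance   FLOAT,
    totalAmount    FLOAT
) WITH (
    LOCATION = 'puYear=2019/puMonth=*/*.parquet', -- relative to the data source LOCATION
    DATA_SOURCE = YellowTaxi,
    FILE_FORMAT = ParquetFormat
);

-- Query the external table like any other table.
SELECT TOP 10 * FROM dbo.YellowTaxiRides;
```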
`articles/synapse-analytics/sql/develop-openrowset.md` (+19 −17 lines)
The `OPENROWSET` function in Synapse SQL reads the content of the file(s) from a data source.

The `OPENROWSET` function can optionally contain a `DATA_SOURCE` parameter to specify the data source that contains the files.
`OPENROWSET` without `DATA_SOURCE` can be used to directly read the contents of the files from the URL location specified as the `BULK` option:

```sql
SELECT *
FROM OPENROWSET(BULK 'http://storage..../container/folder/*.parquet',
                TYPE = 'PARQUET') AS file
```

This is a quick and easy way to read the content of the files without pre-configuration. This option enables you to use the basic authentication option to access the storage (Azure AD passthrough for Azure AD logins and a SAS token for SQL logins).
`OPENROWSET` with `DATA_SOURCE` can be used to access files on a specified storage account:

```sql
SELECT *
FROM OPENROWSET(BULK '/folder/*.parquet',
                DATA_SOURCE = 'storage', --> Root URL is in LOCATION of DATA SOURCE
                TYPE = 'PARQUET') AS file
```

This option enables you to configure the location of the storage account in the data source and to specify the authentication method that should be used to access the storage.

> [!IMPORTANT]
> `OPENROWSET` without `DATA_SOURCE` provides a quick and easy way to access the storage files, but it offers limited authentication options. As an example, an Azure AD principal can access files only by using their [Azure AD identity](develop-storage-files-storage-access-control.md#user-identity) and cannot access publicly available files. If you need more powerful authentication options, use the `DATA_SOURCE` option and define the credential that you want to use to access storage.
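For illustration, the credential name, data source name, and storage URL below are hypothetical; a data source referenced by `DATA_SOURCE` can be bound to an explicit credential like this:

```sql
-- Credential holding a placeholder SAS token.
CREATE DATABASE SCOPED CREDENTIAL sas_cred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Data source whose LOCATION supplies the root URL for BULK paths.
CREATE EXTERNAL DATA SOURCE storage
WITH ( LOCATION = 'https://storageaccount.blob.core.windows.net/container',
       CREDENTIAL = sas_cred );
```

Queries can then reference it with `DATA_SOURCE = 'storage'`, as in the example above.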
## Security
The WITH clause allows you to specify columns that you want to read from files.

- For CSV data files, to read all the columns, provide column names and their data types. If you want a subset of columns, use ordinal numbers to pick the columns from the originating data files by ordinal. Columns will be bound by the ordinal designation, as shown in the sketch below.

  > [!IMPORTANT]
  > The WITH clause is mandatory for CSV files.

- For Parquet data files, provide column names that match the column names in the originating data files. Columns will be bound by name. If the WITH clause is omitted, all columns from the Parquet files will be returned.
column_name = Name for the output column. If provided, this name overrides the column name in the source file.
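Putting this together, a minimal sketch of both binding styles; the storage URL, file names, column names, and ordinals below are hypothetical:

```sql
-- CSV: columns picked by ordinal position in the file (2nd and 4th columns here).
SELECT *
FROM OPENROWSET(BULK 'http://storage..../container/folder/population.csv',
                FORMAT = 'CSV', PARSER_VERSION = '2.0', FIRSTROW = 2)
WITH (
    country_name VARCHAR(100) 2,
    population   BIGINT       4
) AS rows;

-- Parquet: columns bound by matching names in the file.
SELECT *
FROM OPENROWSET(BULK 'http://storage..../container/folder/*.parquet',
                TYPE = 'PARQUET')
WITH (
    country_name VARCHAR(100),
    population   BIGINT
) AS rows;
```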