Commit 1ccbe2a (1 parent: 7542eba)

committed: updates for reviewer comments

3 files changed: +48 −42 lines changed

articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md

Lines changed: 2 additions & 2 deletions

@@ -175,7 +175,7 @@ To alter missing permissions for others, you need to be the Storage Blob Data Ow
 | Access Permissions | --X | --X | --X | --X | --X | --X | -WX |
 | Default Permissions | ---| ---| ---| ---| ---| ---| ---|
 
-- You should be able to ACL all folders from "synapse" and downward from Azure Portal. In order to ACL the root "/" folder, please follow the instructions below.
+- You should be able to ACL all folders from "synapse" and downward from Azure portal. In order to ACL the root "/" folder, please follow the instructions below.
 
 - Please connect to the storage account connected with the workspace from Storage Explorer using AAD
 - Select your Account and give the ADLS Gen2 URL and default file system for the workspace

@@ -186,5 +186,5 @@ To alter missing permissions for others, you need to be the Storage Blob Data Ow
 
 ## Next steps
 
-- [Create a SQL pool]([Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool.md))
+- [Create a SQL pool](../../synapse-analytics/quickstart-create-apache-spark-pool.md))
 - [Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool.md)
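The diff above documents setting the root "/" ACL through Storage Explorer. As a purely illustrative alternative (not part of this commit), the same ACL can be set from the Azure CLI; `<storage-account>` and `<file-system>` are placeholders, and the ACL string is an assumed example rather than the exact permissions from the table.

```shell
# Hypothetical sketch, not from the commit: set the ACL on the root "/" of an
# ADLS Gen2 file system with the Azure CLI instead of Storage Explorer.
# <storage-account> and <file-system> are placeholders you must fill in.
az storage fs access set \
    --account-name <storage-account> \
    --file-system <file-system> \
    --path / \
    --acl "user::rwx,group::r-x,other::--x" \
    --auth-mode login   # authenticate with your Azure AD (AAD) login, as the doc suggests
```

This mirrors the Storage Explorer flow (AAD sign-in, then ACL on "/") in scriptable form.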

articles/synapse-analytics/sql/create-use-external-tables.md

Lines changed: 27 additions & 23 deletions

@@ -22,31 +22,35 @@ In this section, you'll learn how to create and use external tables in SQL on-de
 Your first step is to create a database where the tables will be created. Then initialize the objects by executing [setup script](https://github.com/Azure-Samples/Synapse/blob/master/SQL/Samples/LdwSample/SampleDB.sql) on that database. This setup script will create the following objects that are used in this sample:
 - DATABASE SCOPED CREDENTIAL `sqlondemand` that enables access to SAS-protected `https://sqlondemandstorage.blob.core.windows.net` Azure storage account.
 
-```sql
-CREATE DATABASE SCOPED CREDENTIAL [sqlondemand]
-WITH IDENTITY='SHARED ACCESS SIGNATURE',
-SECRET = 'sv=2018-03-28&ss=bf&srt=sco&sp=rl&st=2019-10-14T12%3A10%3A25Z&se=2061-12-31T12%3A10%3A00Z&sig=KlSU2ullCscyTS0An0nozEpo4tO5JAgGBvw%2FJX2lguw%3D'
-```
+```sql
+CREATE DATABASE SCOPED CREDENTIAL [sqlondemand]
+WITH IDENTITY='SHARED ACCESS SIGNATURE',
+SECRET = 'sv=2018-03-28&ss=bf&srt=sco&sp=rl&st=2019-10-14T12%3A10%3A25Z&se=2061-12-31T12%3A10%3A00Z&sig=KlSU2ullCscyTS0An0nozEpo4tO5JAgGBvw%2FJX2lguw%3D'
+```
+
 - EXTERNAL DATA SOURCE `sqlondemanddemo` that references demo storage account protected with SAS key, and EXTERNAL DATA SOURCE `YellowTaxi` that references publicly available Azure storage account on location `https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/`.
-```sql
-CREATE EXTERNAL DATA SOURCE SqlOnDemandDemo WITH (
-    LOCATION = 'https://sqlondemandstorage.blob.core.windows.net',
-    CREDENTIAL = sqlondemand
-);
-GO
-CREATE EXTERNAL DATA SOURCE YellowTaxi
-WITH ( LOCATION = 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/')
-```
+
+```sql
+CREATE EXTERNAL DATA SOURCE SqlOnDemandDemo WITH (
+    LOCATION = 'https://sqlondemandstorage.blob.core.windows.net',
+    CREDENTIAL = sqlondemand
+);
+GO
+CREATE EXTERNAL DATA SOURCE YellowTaxi
+WITH ( LOCATION = 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/')
+```
+
 - File formats `QuotedCSVWithHeaderFormat` and `ParquetFormat` that describe CSV and parquet file types.
-```sql
-CREATE EXTERNAL FILE FORMAT QuotedCsvWithHeaderFormat
-WITH (
-    FORMAT_TYPE = DELIMITEDTEXT,
-    FORMAT_OPTIONS ( FIELD_TERMINATOR = ',', STRING_DELIMITER = '"', FIRST_ROW = 2 )
-);
-GO
-CREATE EXTERNAL FILE FORMAT ParquetFormat WITH ( FORMAT_TYPE = PARQUET );
-```
+
+```sql
+CREATE EXTERNAL FILE FORMAT QuotedCsvWithHeaderFormat
+WITH (
+    FORMAT_TYPE = DELIMITEDTEXT,
+    FORMAT_OPTIONS ( FIELD_TERMINATOR = ',', STRING_DELIMITER = '"', FIRST_ROW = 2 )
+);
+GO
+CREATE EXTERNAL FILE FORMAT ParquetFormat WITH ( FORMAT_TYPE = PARQUET );
+```
 
 The queries in this article will be executed on your sample database and use these objects.

articles/synapse-analytics/sql/develop-openrowset.md

Lines changed: 19 additions & 17 deletions

@@ -23,27 +23,27 @@ OPENROWSET function in Synapse SQL reads the content of the file(s) from a data
 The `OPENROWSET` function can optionally contain a `DATA_SOURCE` parameter to specify the data source that contains files.
 - `OPENROWSET` without `DATA_SOURCE` can be used to directly read the contents of the files from the URL location specified as `BULK` option:
 
-```sql
-SELECT *
-FROM OPENROWSET(BULK 'http://storage..../container/folder/*.parquet',
-                TYPE = 'PARQUET') AS file
-```
+```sql
+SELECT *
+FROM OPENROWSET(BULK 'http://storage..../container/folder/*.parquet',
+                TYPE = 'PARQUET') AS file
+```
 
 This is a quick and easy way to read the content of the files without pre-configuration. This option enables you to use the basic authentication option to access the storage (Azure AD passthrough for Azure AD logins and SAS token for SQL logins).
 
 - `OPENROWSET` with `DATA_SOURCE` can be used to access files on specified storage account:
 
-```sql
-SELECT *
-FROM OPENROWSET(BULK '/folder/*.parquet',
-                DATA_SOURCE='storage', --> Root URL is in LOCATION of DATA SOURCE
-                TYPE = 'PARQUET') AS file
-```
-
-This option enables you to configure location of the storage account in the data source and specify the authentication method that should be used to access storage.
+```sql
+SELECT *
+FROM OPENROWSET(BULK '/folder/*.parquet',
+                DATA_SOURCE='storage', --> Root URL is in LOCATION of DATA SOURCE
+                TYPE = 'PARQUET') AS file
+```
 
-> [!IMPORTANT]
-> `OPENROWSET` without `DATA_SOURCE` provides quick and easy way to access the storage files but offers limited authentication options. As an example, Azure AD principal can access files only using their [Azure AD identity](develop-storage-files-storage-access-control.md#user-identity) and cannot access publicly available files. If you need more powerful authentication options, use `DATA_SOURCE` option and define credential that you want to use to access storage.
+This option enables you to configure location of the storage account in the data source and specify the authentication method that should be used to access storage.
+
+> [!IMPORTANT]
+> `OPENROWSET` without `DATA_SOURCE` provides quick and easy way to access the storage files but offers limited authentication options. As an example, Azure AD principal can access files only using their [Azure AD identity](develop-storage-files-storage-access-control.md#user-identity) and cannot access publicly available files. If you need more powerful authentication options, use `DATA_SOURCE` option and define credential that you want to use to access storage.
 
 ## Security
 

@@ -135,8 +135,10 @@ The WITH clause allows you to specify columns that you want to read from files.
 
 - For CSV data files, to read all the columns, provide column names and their data types. If you want a subset of columns, use ordinal numbers to pick the columns from the originating data files by ordinal. Columns will be bound by the ordinal designation.
 
-> [!IMPORTANT]
-> The WITH clause is mandatory for CSV files.
+> [!IMPORTANT]
+> The WITH clause is mandatory for CSV files.
+>
+
 - For Parquet data files, provide column names that match the column names in the originating data files. Columns will be bound by name. If the WITH clause is omitted, all columns from Parquet files will be returned.
 
 column_name = Name for the output column. If provided, this name overrides the column name in the source file.
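The WITH-clause hunk above describes binding CSV columns by ordinal but shows no query. The following sketch, which is not part of the commit, illustrates the pattern; the file path, data source name, and column ordinals are assumed placeholders.

```sql
-- Hypothetical illustration (not from the commit): a WITH clause that reads a
-- subset of CSV columns by ordinal. Path, data source name, and the ordinals
-- 1 and 4 are assumptions for illustration.
SELECT *
FROM OPENROWSET(
        BULK 'csv/population/population.csv',  -- assumed sample file
        DATA_SOURCE = 'SqlOnDemandDemo',       -- assumed data source name
        FORMAT = 'CSV',
        FIELDTERMINATOR = ',',
        FIRSTROW = 2                           -- skip the header row
    )
WITH (
    [country_code] VARCHAR(5) 1,   -- bound to the 1st column in the file
    [population]   BIGINT     4    -- bound to the 4th column in the file
) AS [rows]
```

The output column names come from the WITH clause, overriding whatever names appear in the source file, which matches the `column_name` description above.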

0 commit comments