
Commit 5c8d437

Merge pull request #115317 from Kat-Campise/sql_articles_3
sql articles 3
2 parents 73a078c + de2c296

File tree

2 files changed: +14 -7 lines changed

articles/synapse-analytics/sql/develop-storage-files-spark-tables.md

Lines changed: 7 additions & 3 deletions
````diff
@@ -13,13 +13,17 @@ ms.reviewer: jrasnick
 
 # Query Spark tables with Azure Synapse Analytics using SQL on-demand (preview)
 
-The SQL on-demand (preview) can automatically synchronize metadata from Spark pools within Synapse workspace (preview). A SQL on-demand database will be created for each database existing in Spark pools (preview). For each Spark external table based on Parquet and located in Azure Storage, an external table is created in the SQL on-demand database. As such, you can shut down your Spark pools and still query Spark external tables from SQL on-demand.
+The SQL on-demand (preview) can automatically synchronize metadata from Spark pools within Synapse workspace (preview). A SQL on-demand database will be created for each database existing in Spark pools (preview).
+
+For each Spark external table based on Parquet and located in Azure Storage, an external table is created in the SQL on-demand database. As such, you can shut down your Spark pools and still query Spark external tables from SQL on-demand.
 
 When a table is partitioned in Spark, files in storage are organized by folders. SQL on-demand will utilize partition metadata and only target relevant folders and files for your query.
 
 Metadata synchronization is automatically configured for each Spark pool provisioned in the Azure Synapse workspace. You can start querying Spark external tables instantly.
 
-Each Spark parquet external table located in Azure Storage is represented with an external table in a dbo schema that corresponds to a SQL on-demand database. For Spark external table queries, run a query that targets an external [spark_table]. Before running the example below, make sure you have correct [access to the storage account](develop-storage-files-storage-access-control.md) where the files are located.
+Each Spark parquet external table located in Azure Storage is represented with an external table in a dbo schema that corresponds to a SQL on-demand database.
+
+For Spark external table queries, run a query that targets an external [spark_table]. Before running the example below, make sure you have correct [access to the storage account](develop-storage-files-storage-access-control.md) where the files are located.
 
 ```sql
 SELECT * FROM [db].dbo.[spark_table]
@@ -47,7 +51,7 @@ SELECT * FROM [db].dbo.[spark_table]
 
 \* Collation used is Latin1_General_100_BIN2_UTF8.
 
-** ArrayType, MapType and StructType are represented as JSONs.
+** ArrayType, MapType, and StructType are represented as JSONs.
````
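As a sketch of the querying pattern this file describes, the statement below targets a synchronized Spark external table and filters on its Spark partition column, so SQL on-demand reads only the matching folders in storage. The database, table, and column names (`mydb`, `spark_table`, `year`) are placeholders for illustration, not objects defined in the article:

```sql
-- Hypothetical example: [year] stands in for a Spark partition column.
-- Filtering on it lets SQL on-demand use partition metadata to prune
-- folders and scan only the relevant Parquet files.
SELECT COUNT(*) AS row_count
FROM [mydb].dbo.[spark_table]
WHERE [year] = 2019;
```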

articles/synapse-analytics/sql/develop-tables-cetas.md

Lines changed: 7 additions & 4 deletions
````diff
@@ -84,7 +84,9 @@ You need to have permissions to list folder content and write to LOCATION folder
 
 These examples use CETAS to save total population aggregated by year and state to an aggregated_data folder that is located in the population_ds datasource.
 
-This sample relies on the credential, data source, and external file format created previously. Refer to the [external tables](develop-tables-external-tables.md) document. To save query results to a different folder in the same data source, change the LOCATION argument. To save results to a different storage account, create and use a different data source for DATA_SOURCE argument.
+This sample relies on the credential, data source, and external file format created previously. Refer to the [external tables](develop-tables-external-tables.md) document. To save query results to a different folder in the same data source, change the LOCATION argument.
+
+To save results to a different storage account, create and use a different data source for DATA_SOURCE argument.
 
 > [!NOTE]
 > The samples that follow use a public Azure Open Data storage account. It is read-only. To execute these queries, you need to provide the data source for which you have write permissions.
@@ -109,7 +111,7 @@ GO
 SELECT * FROM population_by_year_state
 ```
 
-The sample below uses an external table as the source for CETAS. It relies on the credential, data source, external file format, and external table created previously. Refer to the [external tables](develop-tables-external-tables.md) document.
+The following sample uses an external table as the source for CETAS. It relies on the credential, data source, external file format, and external table created previously. Refer to the [external tables](develop-tables-external-tables.md) document.
 
 ```sql
 -- use CETAS with select from external table
@@ -150,9 +152,10 @@ CETAS can be used to store result sets with following SQL data types:
 - tinyint
 - bit
 
-LOBs cannot be used with CETAS.
+> [!NOTE]
+> LOBs cannot be used with CETAS.
 
-Following data types cannot be used in SELECT part of CETAS:
+The following data types cannot be used in SELECT part of CETAS:
 
 - nchar
 - nvarchar
````
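As a hedged sketch of the CETAS pattern these hunks document, the statement below aggregates population by year and state and writes the result to the aggregated_data folder of the population_ds data source (both named in the article). The source table, column names, and file format (`census_external_table`, `decennial_year`, `state_name`, `parquet_file_format`) are placeholders, not the exact objects the article defines:

```sql
-- Sketch of CREATE EXTERNAL TABLE AS SELECT (CETAS).
-- Change LOCATION to write to a different folder in the same data source;
-- swap DATA_SOURCE to write to a different storage account.
CREATE EXTERNAL TABLE [population_by_year_state]
WITH (
    LOCATION = 'aggregated_data/',
    DATA_SOURCE = [population_ds],
    FILE_FORMAT = [parquet_file_format]   -- placeholder external file format
) AS
SELECT
    [decennial_year],                     -- placeholder year column
    [state_name],                         -- placeholder state column
    SUM([population]) AS [population]
FROM [census_external_table]              -- placeholder source external table
GROUP BY [decennial_year], [state_name];
```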
