articles/synapse-analytics/sql/best-practices-sql-on-demand.md (2 additions, 2 deletions)
@@ -114,9 +114,9 @@ For more information, check [filename](develop-storage-files-overview.md#filenam
 > Always cast result of filepath and fileinfo functions to appropriate data types. If you use character data types, make sure appropriate length is used.

 > [!NOTE]
-> Functions used for partition elimination, filepath and fileinfo, are not currently supported for external tables other than those created automatically for each table created in Synapse Spark.
+> Functions used for partition elimination, filepath and fileinfo, are not currently supported for external tables other than those created automatically for each external table created in Apache Spark for Azure Synapse.

-If your stored data isn't partitioned, consider partitioning it so you can use these functions to optimize queries targeting those files. When [querying partitioned Spark tables](develop-storage-files-spark-tables.md) from SQL on-demand, the query will automatically target only the files needed.
+If your stored data isn't partitioned, consider partitioning it so you can use these functions to optimize queries targeting those files. When [querying partitioned Apache Spark for Azure Synapse tables](develop-storage-files-spark-tables.md) from SQL on-demand, the query will automatically target only the files needed.
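For context on the note this hunk edits: a minimal sketch of filepath-based partition elimination from SQL on-demand, with the result cast as the note recommends. The storage URL, wildcard folder layout, and filter values below are assumptions for illustration, not taken from the changed article.

```sql
-- Illustrative sketch: the storage URL, year=*/month=* layout, and filter values are assumptions.
SELECT
    nyc.filepath(1) AS [year],   -- value matched by the first wildcard in the BULK path
    COUNT_BIG(*)    AS row_count
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/nyc/year=*/month=*/*.parquet',
        FORMAT = 'PARQUET'
     ) AS nyc
WHERE CAST(nyc.filepath(1) AS INT) = 2019          -- partition elimination on year
  AND CAST(nyc.filepath(2) AS INT) IN (1, 2, 3)    -- and on month
GROUP BY nyc.filepath(1);
```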
articles/synapse-analytics/sql/develop-best-practices.md (1 addition, 1 deletion)
@@ -149,7 +149,7 @@ Consequently, you will achieve better performance. For more information, check [
 If your data in storage is not partitioned, consider partitioning it so you can use these functions to optimize queries targeting those files.

-When [querying partitioned Spark tables](develop-storage-files-spark-tables.md) from SQL on-demand, the query will automatically target only files needed.
+When [querying partitioned Apache Spark for Azure Synapse external tables](develop-storage-files-spark-tables.md) from SQL on-demand, the query will automatically target only files needed.

 ### Use CETAS to enhance query performance and joins
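Since the context line above introduces the CETAS section, a minimal sketch of what a CETAS statement on SQL on-demand typically looks like. The schema, location, data source, file format, and column names are placeholders and assume those objects already exist in the database.

```sql
-- Illustrative sketch: MyDataLakeSource, ParquetFormat, and all paths/columns are assumptions.
CREATE EXTERNAL TABLE curated.TripSummary
WITH (
    LOCATION = 'curated/trip-summary/',     -- folder where the result files are written
    DATA_SOURCE = MyDataLakeSource,         -- existing EXTERNAL DATA SOURCE
    FILE_FORMAT = ParquetFormat             -- existing EXTERNAL FILE FORMAT for Parquet
)
AS
SELECT passenger_count, AVG(trip_distance) AS avg_distance
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/nyc/*.parquet',
        FORMAT = 'PARQUET'
     ) AS trips
GROUP BY passenger_count;
```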
articles/synapse-analytics/sql/develop-storage-files-spark-tables.md (3 additions, 3 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Query Spark tables using SQL on-demand (preview)
+title: Synchronize Apache Spark for Azure Synapse external table definitions in SQL on-demand (preview)
 description: Overview of how to query Spark tables using SQL on-demand (preview)
 services: synapse-analytics
 author: julieMSFT
@@ -11,9 +11,9 @@ ms.author: jrasnick
 ms.reviewer: jrasnick
 ---

-# Query Spark tables with Azure Synapse Analytics using SQL on-demand (preview)
+# Synchronize Apache Spark for Azure Synapse external table definitions in SQL on-demand (preview)

-The SQL on-demand (preview) can automatically synchronize metadata from Spark pools within Synapse workspace (preview). A SQL on-demand database will be created for each database existing in Spark pools (preview).
+The SQL on-demand (preview) can automatically synchronize metadata from Apache Spark for Azure Synapse pools. A SQL on-demand database will be created for each database existing in Spark pools (preview).

 For each Spark external table based on Parquet and located in Azure Storage, an external table is created in the SQL on-demand database. As such, you can shut down your Spark pools and still query Spark external tables from SQL on-demand.
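To make the synchronization described above concrete: if a Spark database named mydb contained a Parquet-backed external table named trips (both names hypothetical), the synced copy could be queried from the SQL on-demand endpoint roughly like this.

```sql
-- Illustrative sketch: mydb and trips are assumed names for a Spark database and a Parquet-backed Spark external table.
SELECT TOP 10 *
FROM mydb.dbo.trips;   -- external table created by the metadata sync; queryable even when the Spark pool is stopped
```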
articles/synapse-analytics/sql/develop-tables-external-tables.md (1 addition, 1 deletion)
@@ -376,4 +376,4 @@ The external table is now created, for future exploration of the content of this
 ## Next steps

-Check the [CETAS](develop-tables-cetas.md) article for how to save the query results to an external table in Azure Storage. Or you can start querying [Spark tables](develop-storage-files-spark-tables.md).
+Check the [CETAS](develop-tables-cetas.md) article for how to save the query results to an external table in Azure Storage. Or you can start querying [Apache Spark for Azure Synapse external tables](develop-storage-files-spark-tables.md).
articles/synapse-analytics/sql/on-demand-workspace-overview.md (2 additions, 2 deletions)
@@ -22,7 +22,7 @@ SQL on-demand is a distributed data processing system, built for large scale of
 SQL on-demand is serverless, hence there is no infrastructure to setup or clusters to maintain. A default endpoint for this service is provided within every Azure Synapse workspace, so you can start querying data as soon as the workspace is created. There is no charge for resources reserved, you are only being charged for the data scanned by queries you run, hence this model is a true pay-per-use model.

-If you use Spark in your data pipeline, for data preparation, cleansing or enrichment, you can [query any Spark tables](develop-storage-files-spark-tables.md) you've created in the process, directly from SQL on-demand. Use [Private Link](../security/how-to-connect-to-workspace-with-private-links.md) to bring your SQL on-demand endpoint into your [managed workspace VNet](../security/synapse-workspace-managed-vnet.md).
+If you use Apache Spark for Azure Synapse in your data pipeline, for data preparation, cleansing or enrichment, you can [query external Spark tables](develop-storage-files-spark-tables.md) you've created in the process, directly from SQL on-demand. Use [Private Link](../security/how-to-connect-to-workspace-with-private-links.md) to bring your SQL on-demand endpoint into your [managed workspace VNet](../security/synapse-workspace-managed-vnet.md).

 ## Who is SQL on-demand for
@@ -36,7 +36,7 @@ Different professional roles can benefit from SQL on-demand:
 - Data Engineers can explore the lake, transform and prepare data using this service, and simplify their data transformation pipelines. For more information, check this [tutorial](tutorial-data-analyst.md).
 - Data Scientists can quickly reason about the contents and structure of the data in the lake, thanks to features such as OPENROWSET and automatic schema inference.
-- Data Analysts can [explore data and Spark tables](develop-storage-files-spark-tables.md) created by Data Scientists or Data Engineers using familiar T-SQL language or their favorite tools, which can connect to SQL on-demand.
+- Data Analysts can [explore data and Spark external tables](develop-storage-files-spark-tables.md) created by Data Scientists or Data Engineers using familiar T-SQL language or their favorite tools, which can connect to SQL on-demand.
 - BI Professionals can quickly [create Power BI reports on top of data in the lake](tutorial-connect-power-bi-desktop.md) and Spark tables.
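As an illustration of the OPENROWSET with automatic schema inference mentioned in the Data Scientists item above, a minimal sketch; the storage URL is a placeholder, and no column list is declared because the schema is read from the Parquet metadata.

```sql
-- Illustrative sketch: the storage URL is an assumption; columns are inferred from the Parquet files.
SELECT TOP 100 *
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/sales/*.parquet',
        FORMAT = 'PARQUET'
     ) AS sales_data;
```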