
Commit ed8b677

Merge pull request #115393 from Kat-Campise/spark_image_edits1
spark import image edit
2 parents: 272792f + 91e6fb9

File tree

2 files changed (+3 −3 lines changed)

[Binary image file changed: −60.8 KB]

articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md

Lines changed: 3 additions & 3 deletions
@@ -12,13 +12,13 @@ ms.reviewer: euang
 ---
 # Introduction
 
-The Spark SQL Analytics Connector is designed to efficiently transfer data between Spark pool (preview) and SQL pools in Azure Synapse. The Spark SQL Analytics Connector works on SQL pools only, it doesn't work with SQL on-Demand.
+The Azure Synapse Apache Spark to Synapse SQL connector is designed to efficiently transfer data between Spark pools (preview) and SQL pools in Azure Synapse. The connector works on SQL pools only; it doesn't work with SQL on-demand.
 
 ## Design
 
 Transferring data between Spark pools and SQL pools can be done using JDBC. However, given two distributed systems such as Spark and SQL pools, JDBC tends to be a bottleneck with serial data transfer.
 
-The Spark pool to SQL Analytics Connector is a data source implementation for Apache Spark. It uses the Azure Data Lake Storage Gen2 and Polybase in SQL pools to efficiently transfer data between the Spark cluster and the SQL Analytics instance.
+The Azure Synapse Apache Spark pool to Synapse SQL connector is a data source implementation for Apache Spark. It uses Azure Data Lake Storage Gen2 and PolyBase in SQL pools to efficiently transfer data between the Spark cluster and the Synapse SQL instance.
 
 ![Connector Architecture](./media/synapse-spark-sqlpool-import-export/arch1.png)
 
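Note on the design above: the connector stages data through ADLS Gen2 and loads it with PolyBase rather than moving rows serially over JDBC. A minimal sketch of the write path, using the call that appears in the next hunk's context line (the table name and `Constants.INTERNAL` come from the diff; the DataFrame and the import lines are assumptions — in a Synapse Studio notebook the connector is typically available without explicit imports):

```scala
// Assumed imports for the sqlanalytics connector; a Synapse Studio
// notebook typically provides these without explicit import statements.
import org.apache.spark.sql.SqlAnalyticsConnector._
import com.microsoft.spark.sqlanalytics.utils.Constants

// Placeholder DataFrame; any DataFrame in the Spark pool session works here.
val df = spark.range(10).toDF("id")

// Constants.INTERNAL writes to an internal (managed) table in the SQL pool.
df.write.sqlanalytics("sqlpool.dbo.PySparkTable", Constants.INTERNAL)
```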
@@ -160,7 +160,7 @@ pysparkdftemptable.write.sqlanalytics("sqlpool.dbo.PySparkTable", Constants.INTERNAL)
 
 Similarly, in the read scenario, read the data using Scala and write it into a temp table, and use Spark SQL in PySpark to query the temp table into a dataframe.
 
-## Allowing other users to use the DW Connector in your workspace
+## Allowing other users to use the DW connector in your workspace
 
 You need to be Storage Blob Data Owner on the ADLS Gen2 storage account connected to the workspace to alter missing permissions for others. Ensure the user has access to the workspace and permissions to run notebooks.
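A minimal sketch of the read scenario described above, following the article's own Scala-cell-plus-PySpark-cell pattern (the table name `sqlpool.dbo.PySparkTable` comes from the hunk header; the DataFrame and temp view names are placeholders):

```scala
// Scala cell: read the SQL pool table through the connector and
// register it as a temp view so other languages can reach it.
val scalaDf = spark.read.sqlanalytics("sqlpool.dbo.PySparkTable")
scalaDf.createOrReplaceTempView("scalaReadTable")
```

```python
%%pyspark
# PySpark cell: query the temp view registered by the Scala cell
# into a PySpark dataframe.
pysparkDf = spark.sql("SELECT * FROM scalaReadTable")
pysparkDf.show()
```

The temp view is the hand-off point: the article routes PySpark through Spark SQL rather than calling the connector directly.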