
Commit 3cf0d18

Format and acrolinx freshness updates
1 parent 53636b5 commit 3cf0d18

File tree

1 file changed: +7 -19 lines changed

articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md

Lines changed: 7 additions & 19 deletions
@@ -5,7 +5,7 @@ author: kalyankadiyala-Microsoft
 ms.service: azure-synapse-analytics
 ms.topic: overview
 ms.subservice: spark
-ms.date: 05/10/2022
+ms.date: 01/22/2025
 ms.author: kakadiya
 ms.reviewer: ktuckerdavis, aniket.adnaik
 ---
@@ -46,9 +46,9 @@ At a high-level, the connector provides the following capabilities:
 
 ![A high-level data flow diagram to describe the connector's orchestration of a write request.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
 
-## Pre-requisites
+## Prerequisites
 
-Pre-requisites such as setting up required Azure resources and steps to configure them are discussed in this section.
+Prerequisites such as setting up required Azure resources and steps to configure them are discussed in this section.
 
 ### Azure resources
 
@@ -94,7 +94,7 @@ A basic authentication approach requires user to configure `username` and `passw
 There are two ways to grant access permissions to Azure Data Lake Storage Gen2 - Storage Account:
 
 * Role based Access Control role - [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
-  * Assigning the `Storage Blob Data Contributor Role` grants the User permissions to read, write and delete from the Azure Storage Blob Containers.
+  * Assigning the `Storage Blob Data Contributor Role` grants the User permissions to read, write, and delete from the Azure Storage Blob Containers.
   * RBAC offers a coarse control approach at the container level.
 * [Access Control Lists (ACL)](../../storage/blobs/data-lake-storage-access-control.md)
   * ACL approach allows for fine-grained controls over specific paths and/or files under a given folder.
@@ -152,7 +152,7 @@ To successfully bootstrap and orchestrate the read or write operation, the Conne
 Following is the list of configuration options based on usage scenario:
 
 * **Read using Microsoft Entra ID based authentication**
-  * Credentials are auto-mapped, and user isn't required to provide specific configuration options.
+  * Credentials are automapped, and user isn't required to provide specific configuration options.
   * Three-part table name argument on `synapsesql` method is required to read from respective table in Azure Synapse Dedicated SQL Pool.
 * **Read using basic authentication**
   * Azure Synapse Dedicated SQL End Point
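
For context on the read path this hunk touches, here's a minimal sketch of an Entra ID based read, assuming the connector's `SqlAnalyticsConnector` import and a Synapse notebook where `spark` is the predefined session; the three-part name is a placeholder.

```Scala
// Sketch: read a Dedicated SQL pool table with Microsoft Entra ID based
// authentication. Credentials are automapped, so no explicit auth options.
import org.apache.spark.sql.SqlAnalyticsConnector._

// Three-part name <database>.<schema>.<table>; angle brackets are placeholders.
val dfToReadFromTable = spark.read.synapsesql("<database_name>.<schema_name>.<table_name>")

dfToReadFromTable.show()
```
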
@@ -281,7 +281,7 @@ dfToReadFromTable.show()
 > * Table name and query cannot be specified at the same time.
 > * Only select queries are allowed. DDL and DML SQLs are not allowed.
 > * The select and filter options on dataframe are not pushed down to the SQL dedicated pool when a query is specified.
-> * Read from a query is only available in Spark 3.1 and 3.2.
+> * Read from a query is only available in Spark 3.
 
 ##### [Scala](#tab/scala2)
 
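To illustrate the constraints in the note above, a hedged sketch of a query-based read on a Spark 3 pool, assuming the connector's `Constants.SERVER`, `Constants.DATABASE`, and `Constants.QUERY` option keys; server and database names are placeholders.

```Scala
// Sketch: read the result of a SELECT query (Spark 3 pools only).
// Only SELECT is accepted; DDL/DML are rejected, and DataFrame select/filter
// operations on the result are not pushed down to the dedicated pool.
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._

val dfToReadFromQuery = spark.read
  .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
  .option(Constants.DATABASE, "<database_name>")
  .option(Constants.QUERY, "select count(*) as cnt from <schema_name>.<table_name>")
  .synapsesql()

dfToReadFromQuery.show()
```
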
@@ -569,18 +569,6 @@ dfToReadFromQueryAsArgument.show()
 
 #### Write Request - `synapsesql` method signature
 
-The method signature for the Connector version built for [Spark 2.4.8](./apache-spark-24-runtime.md) has one less argument, than that applied to the Spark 3.1.2 version. Following are the two method signatures:
-
-* Spark Pool Version 2.4.8
-
-```Scala
-synapsesql(tableName:String,
-           tableType:String = Constants.INTERNAL,
-           location:Option[String] = None):Unit
-```
-
-* Spark Pool Version 3.1.2
-
 ##### [Scala](#tab/scala3)
 
 ```Scala
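
As a hedged sketch of the write path this section documents, assuming the `tableName` and `tableType` parameters from the removed 2.4.8 signature carry over to the Spark 3 connector and that `spark` is the predefined Synapse notebook session:

```Scala
// Sketch: write a DataFrame to a managed (internal) Dedicated SQL pool table.
// Constants.EXTERNAL plus a location would target an external table instead;
// the three-part name in angle brackets is a placeholder.
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._

val df = spark.range(100).toDF("id")
df.write.synapsesql("<database_name>.<schema_name>.<table_name>", Constants.INTERNAL)
```
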
@@ -974,7 +962,7 @@ By default, a write response is printed to the cell output. On failure, the curr
 * When writing large data sets, it's important to factor in the impact of [DWU Performance Level](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) setting that limits [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size).
 * Monitor [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-best-practices.md) utilization trends to spot throttling behaviors that can [impact](../../storage/common/scalability-targets-standard-account.md) read and write performance.
 
-## References
+## Related content
 
 * [Runtime library versions](../../synapse-analytics/spark/apache-spark-3-runtime.md)
 * [Azure Storage](../../storage/blobs/data-lake-storage-introduction.md)
