
Commit aa2ff55

Update tutorial-use-pandas-spark-pool.md
1 parent b0e90c1 commit aa2ff55

File tree

1 file changed (+6, −0 lines)


articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md

Lines changed: 6 additions & 0 deletions
@@ -37,6 +37,12 @@ If you don't have an Azure subscription, [create a free account before you begin
 :::image type="content" source="media/tutorial-use-pandas-spark-pool/create-adls-linked-service.png" alt-text="Screenshot of creating a linked service using an ADLS Gen2 storage access key.":::

+> [!IMPORTANT]
+>
+> If the linked service to Azure Data Lake Storage Gen2 created above uses a [managed private endpoint](https://learn.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-managed-private-endpoints) (with a *dfs* URI), create a secondary managed private endpoint using the Azure Blob Storage option (with a *blob* URI) so that the internal [fsspec/adlfs](https://github.com/fsspec/adlfs/blob/main/adlfs/spec.py#L400) code can connect using the *BlobServiceClient* interface.
+>
+>
+> :::image type="content" source="media/tutorial-use-pandas-spark-pool/create-mpe-blob-endpoint.png" alt-text="Screenshot of creating a managed private endpoint to an ADLS Gen2 storage account using the blob endpoint.":::

 > [!NOTE]
 > - Pandas feature is supported on **Python 3.8** and **Spark3** serverless Apache Spark pool in Azure Synapse Analytics.
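
For reference, the added note matters because pandas in a Synapse Spark pool resolves `abfs://` paths through fsspec/adlfs, which connects to the storage account's blob endpoint. Below is a minimal sketch of such a read; the account, container, and file names are hypothetical placeholders, and access-key auth stands in for whichever credential the tutorial's linked service actually provides.

```python
import pandas as pd

# Placeholder values -- substitute your own storage account, container, and file path.
account_name = "contosodatalake"   # ADLS Gen2 storage account name (hypothetical)
container = "tutorialdata"         # container (file system) name (hypothetical)
file_path = "sales/orders.csv"     # path to a CSV file inside the container (hypothetical)

# pandas hands the abfs:// URL to fsspec, which loads adlfs; adlfs then talks to
# the storage account's *blob* endpoint via azure.storage.blob.BlobServiceClient,
# which is why a blob managed private endpoint is needed in a network-isolated workspace.
df = pd.read_csv(
    f"abfs://{container}/{file_path}",
    storage_options={
        "account_name": account_name,
        # Access-key auth is only one option; a SAS token or another
        # adlfs-supported credential works the same way.
        "account_key": "<storage-account-access-key>",
    },
)

print(df.head())
```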
