Commit 9bf0bc9

Merge pull request #231276 from kevinjaku/pandas-read-using-blob
Adding blob managed private endpoint in case of managed VNET workspaces
2 parents 25568f3 + a589069 commit 9bf0bc9

File tree: 2 files changed, +6 −0 lines changed

articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md

Lines changed: 6 additions & 0 deletions
@@ -37,6 +37,12 @@ If you don't have an Azure subscription, [create a free account before you begin
:::image type="content" source="media/tutorial-use-pandas-spark-pool/create-adls-linked-service.png" alt-text="Screenshot of creating a linked service using an ADLS Gen2 storage access key.":::

+> [!IMPORTANT]
+>
+> - If the linked service to Azure Data Lake Storage Gen2 created above uses a [managed private endpoint](../security/synapse-workspace-managed-private-endpoints.md) (with a *dfs* URI), then you must create a secondary managed private endpoint using the Azure Blob Storage option (with a *blob* URI) so that the internal [fsspec/adlfs](https://github.com/fsspec/adlfs/blob/main/adlfs/spec.py#L400) code can connect through the *BlobServiceClient* interface.
+> - If the secondary managed private endpoint is not configured correctly, you see an error message like *ServiceRequestError: Cannot connect to host [storageaccountname].blob.core.windows.net:443 ssl:True [Name or service not known]*.
+>
+> ![Screenshot of creating a managed private endpoint to an ADLS Gen2 storage account using the blob endpoint.](./media/tutorial-use-pandas-spark-pool/create-mpe-blob-endpoint.png)
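The requirement above comes down to the fact that adlfs reaches the storage account's *blob* endpoint even when the linked service was defined with the *dfs* URI, so a private endpoint that resolves only the *dfs* host is not enough. A minimal sketch of the host relationship, using a hypothetical helper name for illustration only:

```python
def blob_host_for(dfs_host: str) -> str:
    """Map an ADLS Gen2 dfs endpoint to its matching blob endpoint.

    Hypothetical helper, not part of adlfs: it only illustrates that the
    same storage account is reachable under two hostnames, and that the
    BlobServiceClient used internally by adlfs targets the .blob host.
    """
    return dfs_host.replace(".dfs.core.windows.net", ".blob.core.windows.net")


# The host named in the ServiceRequestError above is the blob twin of the
# dfs host configured in the linked service.
print(blob_host_for("storageaccountname.dfs.core.windows.net"))
# → storageaccountname.blob.core.windows.net
```

Both hostnames must resolve from the managed VNET, which is why the tutorial asks for a second managed private endpoint rather than reusing the first.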
> [!NOTE]
> - Pandas feature is supported on **Python 3.8** and **Spark3** serverless Apache Spark pool in Azure Synapse Analytics.
