ms.reviewer: yogipandey
ms.service: azure-machine-learning
ms.subservice: mldata
ms.topic: how-to
ms.date: 09/26/2024
ms.custom: template-how-to
---
# Interactive Data Wrangling with Apache Spark in Azure Machine Learning
Data wrangling is one of the most important steps in machine learning projects. The integration of Azure Machine Learning with Azure Synapse Analytics provides access to an Apache Spark pool - backed by Azure Synapse - for interactive data wrangling in Azure Machine Learning Notebooks.
In this article, you learn how to perform data wrangling using:

- Serverless Spark compute
22
22
- Attached Synapse Spark pool
## Prerequisites
- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
- An Azure Machine Learning workspace. Visit [Create workspace resources](./quickstart-create-resources.md) for more information.
- An Azure Data Lake Storage (ADLS) Gen 2 storage account. Visit [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](/azure/storage/blobs/create-data-lake-storage-account) for more information.
- (Optional): An Azure Key Vault. Visit [Create an Azure Key Vault](/azure/key-vault/general/quick-create-portal) for more information.
- (Optional): A Service Principal. Visit [Create a Service Principal](/azure/active-directory/develop/howto-create-service-principal-portal) for more information.
- (Optional): [An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
Before you start your data wrangling tasks, learn about the process of storing secrets
- Azure Blob storage account access key
- Shared Access Signature (SAS) token
- Azure Data Lake Storage (ADLS) Gen 2 service principal information
in the Azure Key Vault. You also need to know how to handle role assignments in the Azure storage accounts. The following sections in this document describe these concepts. Then, we explore the details of interactive data wrangling using the Spark pools in Azure Machine Learning Notebooks.
> [!TIP]
> To learn about Azure storage account role assignment configuration, or if you access data in your storage accounts using user identity passthrough, visit [Add role assignments in Azure storage accounts](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts).
## Interactive Data Wrangling with Apache Spark
For interactive data wrangling with Apache Spark in Azure Machine Learning Notebooks, Azure Machine Learning offers serverless Spark compute and [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md). The serverless Spark compute doesn't require creation of resources in the Azure Synapse workspace. Instead, a fully managed serverless Spark compute becomes directly available in the Azure Machine Learning Notebooks. Use of a serverless Spark compute is the easiest way to access a Spark cluster in Azure Machine Learning.
### Serverless Spark compute in Azure Machine Learning Notebooks
A serverless Spark compute is available in Azure Machine Learning Notebooks by default. To access it in a notebook, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu.
The Notebooks UI also provides options for Spark session configuration for the serverless Spark compute. To configure a Spark session:

1. Select **Configure session** at the top of the screen.
1. Select **Apache Spark version** from the dropdown menu.

   > [!IMPORTANT]
   > Azure Synapse Runtime for Apache Spark: Announcements
   > * Azure Synapse Runtime for Apache Spark 3.2:
   >   * EOLA Announcement Date: July 8, 2023
   >   * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
   > * Azure Synapse Runtime for Apache Spark 3.3:
   >   * EOLA Announcement Date: July 12, 2024
   >   * End of Support Date: March 31, 2025. After this date, the runtime will be disabled.
   > * For continued support and optimal performance, we advise migration to **Apache Spark 3.4**.

1. Select **Instance type** from the dropdown menu. These types are currently supported:
   - `Standard_E4s_v3`
   - `Standard_E8s_v3`
   - `Standard_E16s_v3`
   - `Standard_E32s_v3`
   - `Standard_E64s_v3`
1. Input a Spark **Session timeout** value, in minutes.
1. Select whether or not you want to **Dynamically allocate executors**.
1. Select the number of **Executors** for the Spark session.
1. Select **Executor size** from the dropdown menu.
1. Select **Driver size** from the dropdown menu.
1. To use a Conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the Conda file with the Spark session configuration you want.
1. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
1. Select **Apply**.
1. In the **Configure new session?** pop-up, select **Stop session**.
The session configuration changes persist and become available to another notebook session that is started using the serverless Spark compute.
> [!TIP]
>
> If you use session-level Conda packages, you can [improve](./apache-spark-azure-ml-concepts.md#improving-session-cold-start-time-while-using-session-level-conda-packages) the Spark session *cold start* time if you set the configuration variable `spark.hadoop.aml.enable_cache` to **true**. A session cold start with session-level Conda packages typically takes 10 to 15 minutes when the session starts for the first time. However, subsequent session cold starts with the configuration variable set to **true** typically take three to five minutes.
### Import and wrangle data from Azure Data Lake Storage (ADLS) Gen 2
You can access and wrangle data stored in Azure Data Lake Storage (ADLS) Gen 2 storage accounts with `abfss://` data URIs. To do this, follow one of these two data access mechanisms:

- User identity passthrough
- Service principal-based data access
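With user identity passthrough, notebook code reads the data directly with your own identity after the role assignments are in place; no credential configuration is needed in the session. As a minimal sketch, assuming placeholder file system, account, and path values:

```python
import pyspark.pandas as pd

# User identity passthrough: no credential setup is needed in the session,
# because access relies on your own role assignments on the storage account.
df = pd.read_csv(
    "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>/titanic.csv",
    index_col="PassengerId",
)
df.head()
```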
To wrangle data by access through a service principal:
1. Verify that the service principal has **Contributor** and **Storage Blob Data Contributor** [role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account.
1. [Create Azure Key Vault secrets](./apache-spark-environment-configuration.md#store-azure-storage-account-credentials-as-secrets-in-azure-key-vault) for the service principal tenant ID, client ID, and client secret values.
1. In the **Compute** selection menu, select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark**. You can also select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
1. Set the service principal tenant ID, client ID, and client secret values in the configuration, and execute the following code sample.
   - The `get_secret()` call in the code depends on the name of the Azure Key Vault, and the names of the Azure Key Vault secrets created for the service principal tenant ID, client ID, and client secret. Set these corresponding property name/values in the configuration:
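A minimal sketch of that configuration follows. It assumes a `get_secret()` helper built on the Synapse token library, plus placeholder Key Vault, secret, and storage account names such as `<KEY_VAULT_NAME>` and `<STORAGE_ACCOUNT_NAME>`:

```python
from pyspark.sql import SparkSession

sc = SparkSession.builder.getOrCreate()

def get_secret(key_vault_name, secret_name):
    # Assumed helper: read a secret from Azure Key Vault through the Synapse
    # token library exposed on the Spark JVM gateway.
    token_library = sc._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary
    return token_library.getSecret(key_vault_name, secret_name)

# Placeholder Key Vault and secret names; replace them with your own values.
tenant_id = get_secret("<KEY_VAULT_NAME>", "<TENANT_ID_SECRET_NAME>")
client_id = get_secret("<KEY_VAULT_NAME>", "<CLIENT_ID_SECRET_NAME>")
client_secret = get_secret("<KEY_VAULT_NAME>", "<CLIENT_SECRET_NAME>")

# Configure OAuth access to the ADLS Gen 2 account for the service principal.
conf = sc._jsc.hadoopConfiguration()
account = "<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net"
conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
conf.set(
    f"fs.azure.account.oauth.provider.type.{account}",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
conf.set(f"fs.azure.account.oauth2.client.id.{account}", client_id)
conf.set(f"fs.azure.account.oauth2.client.secret.{account}", client_secret)
conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{account}",
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
)
```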
1. Import and wrangle the data with a data URI in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format, as shown in the next code sample, which uses the Titanic data.
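A sketch of that sample, assuming placeholder file system, account, and path values:

```python
import pyspark.pandas as pd

# Placeholder file system, account, and path; replace them with your own values.
df = pd.read_csv(
    "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/data/titanic.csv",
    index_col="PassengerId",
)
# A simple wrangling step: fill missing Cabin values with a sentinel.
df.fillna(value={"Cabin": "None"}, inplace=True)
df.head()
```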
### Import and wrangle data from Azure Blob storage
You can access Azure Blob storage data with either the storage account access key or a shared access signature (SAS) token. You should [store these credentials in the Azure Key Vault as a secret](./apache-spark-environment-configuration.md#store-azure-storage-account-credentials-as-secrets-in-azure-key-vault), and set them as properties in the session configuration.
To start interactive data wrangling:
1. In the Azure Machine Learning studio left panel, select **Notebooks**.
1. In the **Compute** selection menu, select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark**. You can also select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
1. To configure the storage account access key or a shared access signature (SAS) token for data access in Azure Machine Learning Notebooks:
- For the access key, set the `fs.azure.account.key.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net` property, as shown in this code snippet:

```python
from pyspark.sql import SparkSession
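sc = SparkSession.builder.getOrCreate()

# The rest of this snippet is a sketch (assumed): a get_secret() helper reads
# the access key from an Azure Key Vault secret through the Synapse token
# library, and the key is then set in the Hadoop configuration.
def get_secret(key_vault_name, secret_name):
    token_library = sc._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary
    return token_library.getSecret(key_vault_name, secret_name)

# Placeholder Key Vault, secret, and storage account names.
access_key = get_secret("<KEY_VAULT_NAME>", "<ACCESS_KEY_SECRET_NAME>")
sc._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net", access_key
)
```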

- For the SAS token, set the `fs.azure.sas.<BLOB_CONTAINER_NAME>.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net` property, as shown in this code snippet:

```python
from pyspark.sql import SparkSession
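sc = SparkSession.builder.getOrCreate()

# Sketch of the remaining lines (assumed): a get_secret() helper reads the SAS
# token from an Azure Key Vault secret, and the token is then set in the Hadoop
# configuration. Placeholder names throughout.
def get_secret(key_vault_name, secret_name):
    token_library = sc._jvm.com.microsoft.azure.synapse.tokenlibrary.TokenLibrary
    return token_library.getSecret(key_vault_name, secret_name)

sas_token = get_secret("<KEY_VAULT_NAME>", "<SAS_TOKEN_SECRET_NAME>")
sc._jsc.hadoopConfiguration().set(
    "fs.azure.sas.<BLOB_CONTAINER_NAME>.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net",
    sas_token,
)
```
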
> [!NOTE]
> The `get_secret()` calls in the earlier code snippets require the name of the Azure Key Vault, and the names of the secrets created for the Azure Blob storage account access key or SAS token.
2. Execute the data wrangling code in the same notebook. Format the data URI as `wasbs://<BLOB_CONTAINER_NAME>@<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<PATH_TO_DATA>`, similar to what this code snippet shows:
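The following is a sketch, assuming placeholder container, account, and path values:

```python
import pyspark.pandas as pd

# Placeholder container, account, and path; replace them with your own values.
df = pd.read_csv(
    "wasbs://<BLOB_CONTAINER_NAME>@<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/data/titanic.csv",
    index_col="PassengerId",
)
# A simple wrangling step: drop rows that lack an Embarked value.
df = df.dropna(subset=["Embarked"])
df.head()
```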
To access data from [Azure Machine Learning Datastore](how-to-datastore.md), define a path to data on the datastore with the [URI format](how-to-create-data-assets.md?tabs=cli#create-data-assets) `azureml://datastores/<DATASTORE_NAME>/paths/<PATH_TO_DATA>`. To wrangle data from an Azure Machine Learning Datastore in a Notebooks session interactively:
1. Select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
1. This code sample shows how to read and wrangle Titanic data from an Azure Machine Learning Datastore, using the `azureml://` datastore URI, `pyspark.pandas`, and `pyspark.ml.feature.Imputer`.
```python
import pyspark.pandas as pd
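from pyspark.ml.feature import Imputer

# Sketch (assumed) of the wrangling steps: read the Titanic CSV from a
# placeholder datastore path, fill missing Cabin values, drop rows without an
# Embarked value, then impute missing Age values with the column mean.
df = pd.read_csv(
    "azureml://datastores/workspaceblobstore/paths/data/titanic.csv",
    index_col="PassengerId",
)
df.fillna(value={"Cabin": "None"}, inplace=True)
df = df.dropna(subset=["Embarked"])

# Imputer operates on Spark DataFrames; write imputed values to a new column.
imputer = Imputer(inputCols=["Age"], outputCols=["Age_imputed"], strategy="mean")
df_spark = df.to_spark()
imputer.fit(df_spark).transform(df_spark).show(5)
```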
The Azure Machine Learning datastores can access data using Azure storage account credentials

- access key
- SAS token
- service principal

or they use credential-less data access. Depending on the datastore type and the underlying Azure storage account type, select an appropriate authentication mechanism to ensure data access. This table summarizes the authentication mechanisms to access data in the Azure Machine Learning datastores:

|Storage account type|Credential-less data access|Data access mechanism|Role assignments|
|---|---|---|---|
The default file share is mounted to both serverless Spark compute and attached Synapse Spark pool.

:::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/default-file-share.png" alt-text="Screenshot showing use of a file share.":::
In Azure Machine Learning studio, files in the default file share are shown in the directory tree under the **Files** tab. Notebook code can directly access files stored in this file share with the `file://` protocol, along with the absolute path of the file, without more configuration. This code snippet shows how to access a file stored on the default file share:
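A sketch of that access pattern, assuming a hypothetical absolute path on the mounted file share:

```python
import pyspark.pandas as pd

# Hypothetical absolute path on the mounted default file share; replace it with
# the path of your own file, as shown under the Files tab.
df = pd.read_csv("file:///home/azureuser/cloudfiles/code/Users/<USER_NAME>/data/titanic.csv")
df.head()
```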