Commit 5e7abc8 (parent 3a8d082)

Updated Spark interactive data wrangling doc.
Added notification about retirement of Spark runtime 3.1.

1 file changed: 12 additions, 9 deletions


articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md

Lines changed: 12 additions & 9 deletions
@@ -61,20 +61,23 @@ A Managed (Automatic) Spark compute is available in Azure Machine Learning Noteb
 The Notebooks UI also provides options for Spark session configuration, for the Managed (Automatic) Spark compute. To configure a Spark session:
 
 1. Select **Configure session** at the bottom of the screen.
-1. Select a version of **Apache Spark** from the dropdown menu.
-1. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
+2. Select a version of **Apache Spark** from the dropdown menu.
+   > [!IMPORTANT]
+   >
+   > The end of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. Accordingly, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
+3. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
    - `Standard_E4s_v3`
    - `Standard_E8s_v3`
    - `Standard_E16s_v3`
    - `Standard_E32s_v3`
    - `Standard_E64s_v3`
-1. Input a Spark **Session timeout** value, in minutes.
-1. Select the number of **Executors** for the Spark session.
-1. Select **Executor size** from the dropdown menu.
-1. Select **Driver size** from the dropdown menu.
-1. To use a conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the conda file with the Spark session configuration you want.
-1. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
-1. Select **Apply**.
+4. Input a Spark **Session timeout** value, in minutes.
+5. Select the number of **Executors** for the Spark session.
+6. Select **Executor size** from the dropdown menu.
+7. Select **Driver size** from the dropdown menu.
+8. To use a conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the conda file with the Spark session configuration you want.
+9. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
+10. Select **Apply**.
 
 :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/azure-ml-session-configuration.png" alt-text="Screenshot showing the Spark session configuration options.":::
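
The conda file uploaded in the **Upload conda file** step follows the standard conda environment file format. A minimal sketch of such a file — the environment name and the package choices here are illustrative assumptions, not requirements stated in the doc:

```yaml
# Illustrative conda environment for a Spark session.
# Package names and versions are hypothetical examples.
name: spark-session-env
dependencies:
  - python=3.8
  - pip
  - pip:
      - pandas
      - pyarrow
```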

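The **Configuration settings** step accepts standard Apache Spark configuration properties as key/value pairs. As an illustrative sketch, one might enter pairs such as the following in the **Property** and **Value** textboxes (the property names are standard Spark settings; the values shown are arbitrary examples, not recommendations from this doc):

```
Property                       Value
spark.sql.shuffle.partitions   16
spark.sql.session.timeZone     UTC
```
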
0 commit comments
