Azure Machine Learning Managed (Automatic) Spark compute is the easiest way to accomplish distributed computing tasks in the Azure Machine Learning environment with the Apache Spark framework. Azure Machine Learning offers a fully managed, serverless, on-demand Apache Spark compute cluster. Its users can avoid the need to create an Azure Synapse workspace and a Synapse Spark pool.
Users can define resources, including instance type and the Apache Spark runtime version. They can then use those resources to access Managed (Automatic) Spark compute in Azure Machine Learning notebooks for:
### Inactivity periods and tear-down mechanism
At first launch, a Managed (Automatic) Spark compute resource might need three to five minutes (*cold start*) to start the Spark session itself. The automated Managed (Automatic) Spark compute provisioning, backed by Azure Synapse, causes this delay. After the Managed (Automatic) Spark compute is provisioned, and an Apache Spark session starts, subsequent code executions (*warm start*) won't experience this delay.
The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following ten minutes, resources provisioned for the Managed (Automatic) Spark compute will be torn down.
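The two timers described above can be sketched as plain logic. This is an illustrative model only, not Azure Machine Learning service code; the ten-minute tear-down window comes from the paragraph above, and all names here are invented:

```python
# Illustrative model (not Azure ML code) of the two inactivity timers:
# a user-defined session timeout, then a fixed ten-minute window in which
# a new session can reuse the provisioned compute before tear-down.
TEAR_DOWN_WINDOW_MIN = 10  # fixed window after the session ends


def compute_state(idle_min: float, session_timeout_min: float) -> str:
    """State of the Managed (Automatic) Spark compute after idle_min
    minutes of inactivity, given the user-defined session timeout."""
    if idle_min <= session_timeout_min:
        return "session active"
    if idle_min <= session_timeout_min + TEAR_DOWN_WINDOW_MIN:
        return "session ended, compute still provisioned (warm start possible)"
    return "compute torn down (next launch is a cold start)"


# With a 30-minute session timeout, 45 idle minutes lands in the
# post-timeout window, so the compute has already been torn down or not
# depending on how far past timeout we are.
print(compute_state(idle_min=45, session_timeout_min=30))
```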
> [!NOTE]
> For a session-level conda package:
> - *Cold start* will need about ten to fifteen minutes.
> - *Warm start*, using the same conda package, will need about one minute.
> - *Warm start*, with a different conda package, will also need about ten to fifteen minutes.
## Attached Synapse Spark pool
## Defining Spark cluster size
In Azure Machine Learning Spark jobs, you can define Spark cluster size with three parameter values:
- Number of executors
- Executor cores
- Executor memory
You should consider an Azure Machine Learning Apache Spark executor as the equivalent of an Azure Spark worker node. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job then has access to a cluster with 24 cores and 168 GB of memory.
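As a quick check on the arithmetic above, here is a small helper; it is illustrative only, and the function name is not part of any Azure Machine Learning API:

```python
# Illustrative helper (not part of the Azure ML SDK): total cluster
# capacity implied by the three Spark job sizing parameters.
def cluster_size(executors: int, executor_cores: int, executor_memory_gb: int):
    """Return (total_cores, total_memory_gb) available to the Spark job."""
    return executors * executor_cores, executors * executor_memory_gb


# The example from the text: 6 executors, 4 cores and 28 GB each.
cores, memory_gb = cluster_size(executors=6, executor_cores=4, executor_memory_gb=28)
print(cores, memory_gb)  # 24 168
```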
## Ensuring resource access for Spark jobs
[This article](./how-to-submit-spark-jobs.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
> [!NOTE]
> - To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account used for data input and output) to the identity used to submit the Spark job.
> - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network, [configure a managed private endpoint to a storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md). This configuration will help ensure data access.
> - Neither Managed (Automatic) Spark compute nor an attached Synapse Spark pool works in a notebook created in a workspace with private link enabled.
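The role assignments mentioned in the note are typically granted with the Azure CLI. The sketch below uses hypothetical placeholder values; substitute your own assignee object ID and storage account resource ID:

```shell
# Hypothetical placeholder values; replace before running.
az role assignment create \
  --assignee "<user-or-principal-object-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

az role assignment create \
  --assignee "<user-or-principal-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```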
[This quickstart](./quickstart-spark-data-wrangling.md) describes how to start using Managed (Automatic) Spark compute in Azure Machine Learning.
## Next steps
- [Quickstart: Submit Apache Spark jobs in Azure Machine Learning (preview)](./quickstart-spark-jobs.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md)
- [Interactive data wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)