`articles/machine-learning/apache-spark-environment-configuration.md`
> [!NOTE]
> If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.
## Ensuring resource access for Spark jobs
Spark jobs can use either a managed identity or user identity passthrough to access data and other resources. The following table summarizes the different mechanisms for resource access while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool.
|Compute|Supported identities|Default identity|
| ---------- | ---------------------------- | ---------------------- |
|Managed (Automatic) Spark compute|User identity and managed identity|User identity|
|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning Managed (Automatic) Spark compute relies on a user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
## Next steps
- [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md)
Azure Machine Learning supports submission of standalone machine learning jobs and creation of [machine learning pipelines](./concept-ml-pipelines.md) that involve multiple machine learning workflow steps. Azure Machine Learning handles both standalone Spark job creation, and creation of reusable Spark components that Azure Machine Learning pipelines can use. In this article, you'll learn how to submit Spark jobs using:
- Azure Machine Learning studio UI
- Azure Machine Learning CLI
- Azure Machine Learning SDK
For more information about **Apache Spark in Azure Machine Learning** concepts, see [this resource](./apache-spark-azure-ml-concepts.md).
## Prerequisites
- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
# [Studio UI](#tab/ui)
These prerequisites cover the submission of a Spark job from Azure Machine Learning studio UI:
- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
- To enable this feature:
  1. Navigate to Azure Machine Learning studio UI.
  2. Select **Manage preview features** (megaphone icon) from the icons on the top right side of the screen.
  3. In the **Managed preview feature** panel, toggle on the **Run notebooks and jobs on managed Spark** feature.
- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
---
## Ensuring resource access for Spark jobs
Spark jobs can use either user identity passthrough, or a managed identity, to access data and other resources. The following table summarizes the different mechanisms for resource access while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool.
|Compute|Supported identities|Default identity|
| ---------- | ---------------------------- | ---------------------- |
|Managed (Automatic) Spark compute|User identity and managed identity|User identity|
|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning Managed (Automatic) Spark compute uses a user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
> [!NOTE]
> To learn more about resource access while using Azure Machine Learning Managed (Automatic) Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
### Attach a user-assigned managed identity using CLI v2
1. Create a YAML file that defines the user-assigned managed identity that should be attached to the workspace:
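A minimal sketch of such a YAML file follows. The tenant ID, subscription ID, resource group, and identity name are all placeholders to replace with your own values:

```yaml
# Sketch: user-assigned managed identity definition to attach to the workspace.
# <TENANT_ID>, <SUBSCRIPTION_ID>, <RESOURCE_GROUP>, and <AML_USER_MANAGED_ID>
# are placeholders for your own values.
identity:
  type: system_assigned,user_assigned
  tenant_id: <TENANT_ID>
  user_assigned_identities:
    '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<AML_USER_MANAGED_ID>': {}
```

With the file saved, a command along the lines of `az ml workspace update --resource-group <RESOURCE_GROUP> --name <AML_WORKSPACE_NAME> --file <FILE_NAME>.yaml` should apply the identity to the workspace; check the `az ml workspace update` reference for the exact parameters.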
To create a job, a standalone Spark job can be defined as a YAML specification file.
- `identity`- this optional property defines the identity used to submit this job. It can have `user_identity` and `managed` values. If no identity is defined in the YAML specification, the Spark job will use the default identity.
### Standalone Spark job
This example YAML specification shows a standalone Spark job. It uses an Azure Machine Learning Managed (Automatic) Spark compute:
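The specification itself falls outside this excerpt; a minimal sketch of what it might contain follows. The entry script name, storage path, and instance type are illustrative placeholders, and the property names should be verified against the CLI v2 Spark job YAML schema:

```yaml
# Sketch of a standalone Spark job specification on Managed (Automatic)
# Spark compute; all names and paths are placeholders.
$schema: http://azureml/sdk-2-0/SparkJob.json
type: spark

code: ./src
entry:
  file: titanic.py

conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2

inputs:
  titanic_data:
    type: uri_file
    path: abfss://<FILE_SYSTEM>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data/titanic.csv
    mode: direct

args: >-
  --titanic_data ${{inputs.titanic_data}}

identity:
  type: user_identity

resources:
  instance_type: standard_e4s_v3
  runtime_version: "3.2"
```

A specification like this could then be submitted with `az ml job create --file <SPEC_FILE>.yaml`, passing the workspace and resource group parameters as needed.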
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with these parameters:
- `dynamic_allocation_max_executors`- the maximum number of Spark executors instances for dynamic allocation.
- If dynamic allocation of executors is disabled, then define these parameters:
  - `executor_instances`- the number of Spark executor instances.
- `environment` - the Azure Machine Learning environment that runs the job. This parameter should pass:
  - an object of `azure.ai.ml.entities.Environment`, or an Azure Machine Learning environment name (string).
- `args`- the command line arguments that should be passed to the job entry point Python script or class. See the sample code provided here for an example.
- `resources` - the resources to be used by an Azure Machine Learning Managed (Automatic) Spark compute. This parameter should pass a dictionary with:
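Assuming an azure-ai-ml v2 SDK environment and a workspace `config.json` in the working directory, the parameters above might combine into a sketch like the following. The names, the `abfss://` path, and the instance type are illustrative, and the exact parameter names should be checked against the `azure.ai.ml.spark` reference; running it requires an Azure Machine Learning workspace:

```python
# Sketch only: assumes the azure-ai-ml v2 SDK and placeholder storage paths.
from azure.ai.ml import MLClient, spark, Input
from azure.ai.ml.entities import UserIdentityConfiguration
from azure.identity import DefaultAzureCredential

# Connect to the workspace (assumes a config.json downloaded from the portal).
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

spark_job = spark(
    display_name="titanic-spark-job-sdk",  # illustrative name
    code="./src",                          # folder containing the entry script
    entry={"file": "titanic.py"},          # job entry point Python script
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,                  # dynamic allocation disabled
    inputs={
        "titanic_data": Input(
            type="uri_file",
            path="abfss://<FILE_SYSTEM>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data/titanic.csv",
            mode="direct",
        )
    },
    args="--titanic_data ${{inputs.titanic_data}}",
    identity=UserIdentityConfiguration(),  # user identity passthrough
    resources={                            # Managed (Automatic) Spark compute
        "instance_type": "Standard_E8S_V3",
        "runtime_version": "3.2.0",
    },
)

submitted_job = ml_client.jobs.create_or_update(spark_job)
```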
### Submit a standalone Spark job from Azure Machine Learning studio UI
To submit a standalone Spark job using the Azure Machine Learning studio UI:
:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job.png" alt-text="Screenshot showing creation of a new Spark job in Azure Machine Learning studio UI.":::
1. In the left pane, select **+ New**.
1. Select **Spark job (preview)**.
1. On the **Compute** screen:
:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job_compute.png" alt-text="Screenshot showing compute selection screen for a new Spark job in Azure Machine Learning studio UI.":::
1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute, or **Attached compute** for an attached Synapse Spark pool.
1. If you selected **Spark automatic compute (Preview)**:
To create an Azure Machine Learning pipeline with a Spark component, you should be familiar with creating [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using the `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
- `name`- the name of the Spark component.
- `display_name`- the name of the Spark component displayed in the UI and elsewhere.
- `inputs`- this parameter is similar to `inputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Input` class is instantiated without the `path` parameter.
- `outputs`- this parameter is similar to `outputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Output` class is instantiated without the `path` parameter.
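Under the same azure-ai-ml v2 SDK assumption, a Spark component and a pipeline that consumes it might be sketched as follows. All names and resource values are illustrative, and the key difference from the standalone job is visible in the `inputs` and `outputs` parameters, where `Input` and `Output` are instantiated without `path`:

```python
# Sketch only: assumes the azure-ai-ml v2 SDK; names are illustrative.
from azure.ai.ml import dsl, spark, Input, Output
from azure.ai.ml.entities import UserIdentityConfiguration

spark_component = spark(
    name="titanic_spark_component",         # component name
    display_name="Titanic data wrangling",  # shown in the UI and elsewhere
    code="./src",
    entry={"file": "titanic.py"},
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    # Input/Output without the `path` parameter; paths bind at pipeline submission.
    inputs={"titanic_data": Input(type="uri_file", mode="direct")},
    outputs={"wrangled_data": Output(type="uri_folder", mode="direct")},
    args="--titanic_data ${{inputs.titanic_data}} --wrangled_data ${{outputs.wrangled_data}}",
)

@dsl.pipeline(description="Sample pipeline with a Spark component")
def spark_pipeline(pipeline_input_data):
    spark_step = spark_component(titanic_data=pipeline_input_data)
    spark_step.identity = UserIdentityConfiguration()  # or a managed identity
    spark_step.resources = {
        "instance_type": "Standard_E8S_V3",
        "runtime_version": "3.2.0",
    }
    return {"wrangled": spark_step.outputs.wrangled_data}
```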