
Commit 6a0f078

Merge pull request #229939 from fbsolo-ms1/tutorial-for-SK
Move content between files, and add a relevant URL.
2 parents c1969f6 + a65cfeb commit 6a0f078

File tree

3 files changed: +38 −26 lines changed

articles/machine-learning/apache-spark-azure-ml-concepts.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ To access data and other resources, a Spark job can use either a user identity p
 |Managed (Automatic) Spark compute|User identity and managed identity|User identity|
 |Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
 
-[This article](./how-to-submit-spark-jobs.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
+[This article](./apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
 
 > [!NOTE]
 > - To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account used for data input and output) to the identity that will be used for the Spark job submission.
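[Editor's note] The note above assigns the **Contributor** and **Storage Blob Data Contributor** roles to the job-submission identity. As a hedged sketch of how those assignments could be made with the Azure CLI (every identifier below is a placeholder, not taken from this commit):

```azurecli
# Sketch only: <identity-object-id> and the storage account scope are placeholders.
az role assignment create \
  --assignee "<identity-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

az role assignment create \
  --assignee "<identity-object-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```

Scoping the assignment to the storage account (rather than the subscription) follows least-privilege practice; the commit itself does not specify a scope.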

articles/machine-learning/apache-spark-environment-configuration.md

Lines changed: 12 additions & 0 deletions
@@ -111,7 +111,19 @@ Once the user identity has the appropriate roles assigned, data in the Azure sto
 > [!NOTE]
 > If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.
 
+## Ensuring resource access for Spark jobs
+
+Spark jobs can use either a managed identity or user identity passthrough to access data and other resources. The following table summarizes the different mechanisms for resource access while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool.
+
+|Spark pool|Supported identities|Default identity|
+| ---------- | -------------------- | ---------------- |
+|Managed (Automatic) Spark compute|User identity and managed identity|User identity|
+|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
+
+If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning Managed (Automatic) Spark compute relies on a user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
+
 ## Next steps
+
 - [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md)
 - [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md)
 - [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
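[Editor's note] The added section mentions attaching a user-assigned managed identity to the workspace with CLI v2. A minimal sketch of what such a YAML file might look like, assuming the workspace YAML accepts the identity block shape used elsewhere in Azure ML CLI v2 (all resource IDs are placeholders; the commit shows only the opening `identity:` line of the real file):

```yaml
# Sketch only: all IDs below are placeholders.
identity:
  type: system_assigned,user_assigned
  user_assigned_identities:
    '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>': {}
```

Such a file would presumably be passed to an `az ml workspace update` invocation with `--file`, per the CLI v2 workflow the section describes.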

articles/machine-learning/how-to-submit-spark-jobs.md

Lines changed: 25 additions & 25 deletions
@@ -8,20 +8,20 @@ ms.reviewer: franksolomon
 ms.service: machine-learning
 ms.subservice: mldata
 ms.topic: how-to
-ms.date: 01/10/2023
+ms.date: 03/08/2023
 ms.custom: template-how-to
 ---
 
 # Submit Spark jobs in Azure Machine Learning (preview)
 
 [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
 
-Azure Machine Learning supports submission of standalone machine learning jobs, and creation of [machine learning pipelines](./concept-ml-pipelines.md), that involve multiple machine learning workflow steps. Azure Machine Learning handles both standalone Spark job creation, and creation of reusable Spark components that Azure Machine Learning pipelines can use. In this article, you'll learn how to submit Spark jobs using:
-- Azure Machine Learning Studio UI
+Azure Machine Learning supports submission of standalone machine learning jobs and creation of [machine learning pipelines](./concept-ml-pipelines.md) that involve multiple machine learning workflow steps. Azure Machine Learning handles both standalone Spark job creation, and creation of reusable Spark components that Azure Machine Learning pipelines can use. In this article, you'll learn how to submit Spark jobs using:
+- Azure Machine Learning studio UI
 - Azure Machine Learning CLI
 - Azure Machine Learning SDK
 
-See [this resource](./apache-spark-azure-ml-concepts.md) for more information about **Apache Spark in Azure Machine Learning** concepts.
+For more information about **Apache Spark in Azure Machine Learning** concepts, see [this resource](./apache-spark-azure-ml-concepts.md).
 
 ## Prerequisites
 
@@ -42,29 +42,23 @@ See [this resource](./apache-spark-azure-ml-concepts.md) for more information ab
 - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
 
 # [Studio UI](#tab/ui)
-These prerequisites cover the submission of a Spark job from Azure Machine Learning Studio UI:
+These prerequisites cover the submission of a Spark job from Azure Machine Learning studio UI:
 - An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
 - An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
 - To enable this feature:
-1. Navigate to Azure Machine Learning Studio UI.
+1. Navigate to Azure Machine Learning studio UI.
 2. Select **Manage preview features** (megaphone icon) from the icons on the top right side of the screen.
 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
 :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
 - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
 
 ---
 
-## Ensuring resource access for Spark jobs
-Spark jobs can use either user identity passthrough, or a managed identity, to access data and other resources. The following table summarizes the different mechanisms for resource access while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool.
-
-|Spark pool|Supported identities|Default identity|
-| ---------- | -------------------- | ---------------- |
-|Managed (Automatic) Spark compute|User identity and managed identity|User identity|
-|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
-
-If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning Managed (Automatic) Spark compute uses user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
+> [!NOTE]
+> To learn more about resource access while using Azure Machine Learning Managed (Automatic) Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
 
 ### Attach user assigned managed identity using CLI v2
+
 1. Create a YAML file that defines the user-assigned managed identity that should be attached to the workspace:
 ```yaml
 identity:
@@ -80,6 +74,7 @@ If the CLI or SDK code defines an option to use managed identity, Azure Machine
 ```
 
 ### Attach user assigned managed identity using `ARMClient`
+
 1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API.
 1. Create a JSON file that defines the user-assigned managed identity that should be attached to the workspace:
 ```json
@@ -146,6 +141,7 @@ The above script takes two arguments `--titanic_data` and `--wrangled_data`, whi
 To create a job, a standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter. Define these properties in the YAML file as follows:
 
 ### YAML properties in the Spark job specification
+
 - `type` - set to `spark`.
 - `code` - defines the location of the folder that contains source code and scripts for this job.
 - `entry` - defines the entry point for the job. It should cover one of these properties:
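[Editor's note] The hunk context above mentions that the sample script takes `--titanic_data` and `--wrangled_data` arguments. A minimal, hypothetical sketch of how a job entry script might parse them (the argument names come from the hunk context; the `azureml://` paths and the `required=True` choice are illustrative assumptions, not from the commit):

```python
import argparse

def parse_job_args(argv):
    """Parse the input/output data paths a Spark job entry script might expect."""
    parser = argparse.ArgumentParser()
    # Argument names taken from the hunk context; 'required' is an assumption.
    parser.add_argument("--titanic_data", required=True)
    parser.add_argument("--wrangled_data", required=True)
    return parser.parse_args(argv)

if __name__ == "__main__":
    # Illustrative azureml:// datastore URIs, as used elsewhere in this diff.
    args = parse_job_args([
        "--titanic_data", "azureml://datastores/workspaceblobstore/paths/data/titanic.csv",
        "--wrangled_data", "azureml://datastores/workspaceblobstore/paths/data/wrangled/",
    ])
    print(args.wrangled_data)
```

The YAML specification's `args` property (described below in the original article) is where these command-line arguments would be supplied.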
@@ -222,9 +218,10 @@ To create a job, a standalone Spark job can be defined as a YAML specification f
 path: azureml://datastores/workspaceblobstore/paths/data/wrangled/
 mode: direct
 ```
-- `identity` - this optional property defines the identity used to submit this job. It can have `user_identity` and `managed` values. If no identity is defined in the YAML specification, the default identity will be used.
-
+- `identity` - this optional property defines the identity used to submit this job. It can have `user_identity` and `managed` values. If no identity is defined in the YAML specification, the Spark job will use the default identity.
+
 ### Standalone Spark job
+
 This example YAML specification shows a standalone Spark job. It uses an Azure Machine Learning Managed (Automatic) Spark compute:
 
 ```yaml
@@ -304,7 +301,7 @@ To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
 - `dynamic_allocation_max_executors` - the maximum number of Spark executors instances for dynamic allocation.
 - If dynamic allocation of executors is disabled, then define these parameters:
 - `executor_instances` - the number of Spark executor instances.
-- `environment` - the Azure Machine Learning environment that will run the job. This parameter should pass:
+- `environment` - the Azure Machine Learning environment that runs the job. This parameter should pass:
 - an object of `azure.ai.ml.entities.Environment`, or an Azure Machine Learning environment name (string).
 - `args` - the command line arguments that should be passed to the job entry point Python script or class. See the sample code provided here for an example.
 - `resources` - the resources to be used by an Azure Machine Learning Managed (Automatic) Spark compute. This parameter should pass a dictionary with:
@@ -336,7 +333,7 @@ To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
 - `azure.ai.ml.entities.UserIdentityConfiguration`
 or
 - `azure.ai.ml.entities.ManagedIdentityConfiguration`
-for user identity and managed identity respectively. If no identity is defined, the default identity will be used.
+for user identity and managed identity respectively. If no identity is defined, the Spark job will use the default identity.
 
 You can submit a standalone Spark job from:
 - an Azure Machine Learning Notebook connected to an Azure Machine Learning compute instance.
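[Editor's note] The hunk above names `azure.ai.ml.entities.UserIdentityConfiguration` and `ManagedIdentityConfiguration` as the identity options for the SDK path. A non-runnable sketch of how they might be passed to the `azure.ai.ml.spark` function the article describes (requires the `azure-ai-ml` package; the `identity=` usage follows the surrounding text, while every other value is a placeholder assumption):

```python
# Sketch only; assumes the azure-ai-ml package is installed and configured.
from azure.ai.ml import spark
from azure.ai.ml.entities import UserIdentityConfiguration

spark_job = spark(
    display_name="titanic-wrangling",      # placeholder name
    code="./src",                          # placeholder source folder
    entry={"file": "titanic.py"},          # placeholder entry script
    identity=UserIdentityConfiguration(),  # submit with user identity passthrough
    # ...other parameters (environment, resources, args) as described above
)
```

Swapping in `ManagedIdentityConfiguration()` would select the managed identity instead, per the text above.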
@@ -399,16 +396,17 @@ ml_client.jobs.stream(returned_spark_job.name)
 
 # [Studio UI](#tab/ui)
 
-### Submit a standalone Spark job from Azure Machine Learning Studio UI
-To submit a standalone Spark job using the Azure Machine Learning Studio UI:
+### Submit a standalone Spark job from Azure Machine Learning studio UI
 
-:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job.png" alt-text="Screenshot showing creation of a new Spark job in Azure Machine Learning Studio UI.":::
+To submit a standalone Spark job using the Azure Machine Learning studio UI:
+
+:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job.png" alt-text="Screenshot showing creation of a new Spark job in Azure Machine Learning studio UI.":::
 
 - In the left pane, select **+ New**.
 - Select **Spark job (preview)**.
 - On the **Compute** screen:
 
-:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job_compute.png" alt-text="Screenshot showing compute selection screen for a new Spark job in Azure Machine Learning Studio UI.":::
+:::image type="content" source="media/how-to-submit-spark-jobs/create_standalone_spark_job_compute.png" alt-text="Screenshot showing compute selection screen for a new Spark job in Azure Machine Learning studio UI.":::
 
 1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute, or **Attached compute** for an attached Synapse Spark pool.
 1. If you selected **Spark automatic compute (Preview)**:
@@ -486,6 +484,7 @@ To submit a standalone Spark job using the Azure Machine Learning Studio UI:
 ---
 
 ## Spark component in a pipeline job
+
 A Spark component offers the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md), as a pipeline step.
 
 # [Azure CLI](#tab/cli)
@@ -606,7 +605,7 @@ You can execute the above command from:
 To create an Azure Machine Learning pipeline with a Spark component, you should have familiarity with creation of [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
 
 - `name` - the name of the Spark component.
-- `display_name` - the name of the Spark component that will display in the UI and elsewhere.
+- `display_name` - the name of the Spark component displayed in the UI and elsewhere.
 - `inputs` - this parameter is similar to `inputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Input` class is instantiated without the `path` parameter.
 - `outputs` - this parameter is similar to `outputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Output` class is instantiated without the `path` parameter.

@@ -695,5 +694,6 @@ This functionality isn't available in the Studio UI. The Studio UI doesn't suppo
695694
---
696695

697696
## Next steps
697+
698698
- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
699-
- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
699+
- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
