Commit 79f4b2f

Fix an issue with the tabs . . .

1 parent 5d0417a
1 file changed (+1 −7 lines)

articles/machine-learning/how-to-submit-spark-jobs.md

Lines changed: 1 addition & 7 deletions
```diff
@@ -40,7 +40,6 @@ Azure Machine Learning supports submission of standalone machine learning jobs,
 - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
 
 # [Studio UI](#tab/ui)
-
 These prerequisites cover the submission of a Spark job from Azure Machine Learning Studio UI:
 - An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
 - An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
```
```diff
@@ -50,6 +49,7 @@ These prerequisites cover the submission of a Spark job from Azure Machine Learn
 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
 :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
 - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
+
 ---
 
 ## Ensuring resource access for Spark jobs
```
```diff
@@ -139,7 +139,6 @@ df.to_csv(args.wrangled_data, index_col="PassengerId")
 The above script takes two arguments `--titanic_data` and `--wrangled_data`, which pass the path of input data and output folder respectively.
 
 # [Azure CLI](#tab/cli)
-
 [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
 
 To create a job, a standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter. Define these properties in the YAML file as follows:
```
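For context, the YAML specification file that this hunk's surrounding text describes might look like the following minimal sketch. The schema URL, file names, datastore paths, instance type, and runtime version below are illustrative placeholders based on the Azure ML v2 Spark job schema, not values taken from this commit; only the `--titanic_data`/`--wrangled_data` argument names come from the article itself:

```yaml
# Hypothetical spark-job.yml for: az ml job create --file spark-job.yml
$schema: https://azuremlschemas.azureedge.net/latest/sparkJob.schema.json
type: spark
code: ./src
entry:
  file: titanic.py            # the data-wrangling script shown earlier in the article
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
inputs:
  titanic_data:
    type: uri_file
    path: azureml://datastores/workspaceblobstore/paths/data/titanic.csv
    mode: direct
outputs:
  wrangled_data:
    type: uri_folder
    path: azureml://datastores/workspaceblobstore/paths/data/wrangled/
    mode: direct
args: >-
  --titanic_data ${{inputs.titanic_data}}
  --wrangled_data ${{outputs.wrangled_data}}
resources:
  instance_type: standard_e4s_v3
  runtime_version: "3.2"
```

A file like this would then be submitted with `az ml job create --file spark-job.yml --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>`.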
```diff
@@ -280,7 +279,6 @@ You can execute the above command from:
 - your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed.
 
 # [Python SDK](#tab/sdk)
-
 [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
 
 ### Standalone Spark job using Python SDK
```
```diff
@@ -398,7 +396,6 @@ ml_client.jobs.stream(returned_spark_job.name)
 > To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of `resources`.
 
 # [Studio UI](#tab/ui)
-
 This functionality isn't available in the Studio UI. The Studio UI doesn't support this feature.
 
 ---
```
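The Python SDK flow this hunk refers to might look like the sketch below. It requires a live workspace and the `azure-ai-ml` package, so it is not runnable as-is; the workspace identifiers, paths, and sizing values are placeholders. Only `azure.ai.ml.spark`, the `resources`/`compute` distinction, and `ml_client.jobs.stream(returned_spark_job.name)` are taken from the article text:

```python
# Hypothetical standalone Spark job via the Python SDK (azure-ai-ml).
from azure.ai.ml import MLClient, spark, Input, Output
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

spark_job = spark(
    display_name="titanic-wrangling",
    code="./src",
    entry={"file": "titanic.py"},
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    # Managed (serverless) Spark compute. For an attached Synapse Spark pool,
    # pass compute="<ATTACHED_SPARK_POOL_NAME>" instead of resources.
    resources={"instance_type": "Standard_E8S_V3", "runtime_version": "3.2"},
    inputs={"titanic_data": Input(type="uri_file", path="<PATH_TO_TITANIC_CSV>", mode="direct")},
    outputs={"wrangled_data": Output(type="uri_folder", path="<OUTPUT_FOLDER_PATH>", mode="direct")},
    args="--titanic_data ${{inputs.titanic_data}} --wrangled_data ${{outputs.wrangled_data}}",
)

returned_spark_job = ml_client.jobs.create_or_update(spark_job)
ml_client.jobs.stream(returned_spark_job.name)  # follow the job logs
```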
```diff
@@ -491,7 +488,6 @@ To submit a standalone Spark job using the Azure Machine Learning Studio UI:
 A Spark component offers the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md), as a pipeline step.
 
 # [Azure CLI](#tab/cli)
-
 [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
 
 YAML syntax for a Spark component resembles the [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification) in most ways. These properties are defined differently in the Spark component YAML specification:
```
```diff
@@ -604,7 +600,6 @@ You can execute the above command from:
 - your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed.
 
 # [Python SDK](#tab/sdk)
-
 [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
 
 To create an Azure Machine Learning pipeline with a Spark component, you should have familiarity with creation of [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
```
```diff
@@ -694,7 +689,6 @@ ml_client.jobs.stream(pipeline_job.name)
 > To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of `resources` parameter. For example, in the code sample shown above, define `spark_step.compute = "<ATTACHED_SPARK_POOL_NAME>"` instead of defining `spark_step.resources`.
 
 # [Studio UI](#tab/ui)
-
 This functionality isn't available in the Studio UI. The Studio UI doesn't support this feature.
 
 ---
```
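The pipeline-step variant that the last hunk's note describes might be wired up roughly as follows. This is a sketch, not runnable code: `spark_component`, the pipeline name, and the workspace-connected `ml_client` are assumed to be defined elsewhere, and only the `spark_step.compute` versus `spark_step.resources` distinction comes from the article text:

```python
# Hypothetical pipeline using a Spark component as a step (azure-ai-ml).
from azure.ai.ml import dsl, Input

@dsl.pipeline(description="Titanic data wrangling pipeline")
def titanic_pipeline(titanic_data):
    # spark_component is assumed to have been created earlier with azure.ai.ml.spark().
    spark_step = spark_component(titanic_data=titanic_data)
    # Attached Synapse Spark pool: set compute instead of resources.
    spark_step.compute = "<ATTACHED_SPARK_POOL_NAME>"
    # Managed Spark alternative:
    # spark_step.resources = {"instance_type": "Standard_E8S_V3", "runtime_version": "3.2"}

pipeline_job = titanic_pipeline(
    titanic_data=Input(type="uri_file", path="<PATH_TO_TITANIC_CSV>", mode="direct")
)
ml_client.jobs.stream(ml_client.jobs.create_or_update(pipeline_job).name)
```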
