To create a standalone Spark job, define it in a YAML specification file, which you can then pass to the `az ml job create` command with the `--file` parameter. Define these properties in the YAML file as follows:
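As a minimal sketch of such a specification file — the schema URL, script name, input path, instance type, and runtime version below are illustrative assumptions, not values taken from this article — a standalone Spark job YAML might look like:

```yaml
# Hypothetical standalone Spark job specification; all names and paths are illustrative.
$schema: https://azuremlschemas.azureedge.net/latest/sparkJob.schema.json
type: spark
code: ./src                 # folder containing the entry script
entry:
  file: wrangle.py          # hypothetical entry script name
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
inputs:
  input_data:
    type: uri_file
    path: azureml://datastores/workspaceblobstore/paths/data/input.csv
    mode: direct
args: >-
  --input_data ${{inputs.input_data}}
identity:
  type: user_identity
resources:                  # serverless Spark compute; an attached pool would use `compute` instead
  instance_type: standard_e4s_v3
  runtime_version: "3.3"
```

Saved as, say, `spark-job.yml` (a hypothetical filename), this file would be submitted with `az ml job create --file spark-job.yml`.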
You can execute the above command from:
- your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed.
> To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of `resources`.
# [Studio UI](#tab/ui)
This functionality isn't available in the Studio UI.
---
A Spark component offers the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md), as a pipeline step.
The YAML syntax for a Spark component resembles the [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification) in most ways. These properties are defined differently in the Spark component YAML specification:
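To illustrate the difference, here is a hedged sketch of a Spark component specification. The schema URL, component name, script name, and `conf` values are assumptions for illustration; the key point is that component inputs and outputs carry only a `type` (and `mode`), with concrete data paths and the compute target supplied later by the pipeline that uses the component.

```yaml
# Hypothetical Spark component specification; names and values are illustrative.
$schema: https://azuremlschemas.azureedge.net/latest/sparkComponent.schema.json
type: spark
name: wrangle_component     # component identity, reusable across pipelines
version: 1
display_name: Wrangle data
code: ./src
entry:
  file: wrangle.py          # hypothetical entry script name
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
inputs:
  input_data:
    type: uri_file          # no `path` here; the pipeline binds the actual data
    mode: direct
outputs:
  output_data:
    type: uri_folder
    mode: direct
args: >-
  --input_data ${{inputs.input_data}}
  --output_data ${{outputs.output_data}}
```

Note that `resources` (or `compute`) doesn't appear in the component itself; each pipeline that reuses the component sets its own compute.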
You can execute the above command from:
- your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed.
To create an Azure Machine Learning pipeline with a Spark component, you should be familiar with creating [Azure Machine Learning pipelines from components, using the Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created with the `azure.ai.ml.spark` function. Its parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
> To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of the `resources` parameter. For example, in the code sample shown above, define `spark_step.compute = "<ATTACHED_SPARK_POOL_NAME>"` instead of defining `spark_step.resources`.
# [Studio UI](#tab/ui)
This functionality isn't available in the Studio UI.