
Commit 944d87f

Merge pull request #199524 from ssalgadodev/runsToJobsPart11

runs

2 parents 5fec28b + 3a364e0

9 files changed: +56 −55 lines

articles/machine-learning/how-to-use-mlflow-cli-runs.md

Lines changed: 6 additions & 5 deletions

@@ -258,20 +258,21 @@ client.download_artifacts(run_id, "helloworld.txt", ".")
 
 For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow view [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
 
+
 ## Manage models
 
-Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
+Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training jobs, register, and deploy models produced from MLflow runs.
 
 If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
 
-To register and view a model from a run, use the following steps:
+To register and view a model from a job, use the following steps:
 
-1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
+1. Once a job is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
 
 
 
 ```Python
-# the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
+# the model folder produced from a job is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
 model_path = "model"
 model_uri = 'runs:/{}/{}'.format(run_id, model_path)
 mlflow.register_model(model_uri,"registered_model_name")
@@ -287,7 +288,7 @@ To register and view a model from a run, use the following steps:
 
 ![model-schema](./media/how-to-use-mlflow-cli-runs/mlflow-model-schema.png)
 
-1. Select MLmodel to see the MLmodel file generated by the run.
+1. Select MLmodel to see the MLmodel file generated by the job.
 
 ![MLmodel-schema](./media/how-to-use-mlflow-cli-runs/mlmodel-view.png)
 

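The model URI in this file's snippet follows MLflow's `runs:/<run_id>/<artifact_path>` scheme. A minimal sketch of that construction is below; the job ID is hypothetical, and the `mlflow.register_model` call itself needs a live tracking workspace, so it is shown commented out:

```python
def model_uri_for(run_id, model_path):
    # MLflow's runs URI scheme: runs:/<run_id>/<artifact_path>
    return "runs:/{}/{}".format(run_id, model_path)

# The "model" folder holds the MLmodel file, model.pkl, and conda.yaml.
uri = model_uri_for("hypothetical-job-id", "model")
print(uri)  # runs:/hypothetical-job-id/model

# Against a real workspace you would then register it:
# import mlflow
# mlflow.register_model(uri, "registered_model_name")
```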
articles/machine-learning/how-to-use-pipeline-parameter.md

Lines changed: 9 additions & 9 deletions

@@ -16,14 +16,14 @@ ms.custom: designer
 
 Use pipeline parameters to build flexible pipelines in the designer. Pipeline parameters let you dynamically set values at runtime to encapsulate pipeline logic and reuse assets.
 
-Pipeline parameters are especially useful when resubmitting a pipeline run, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
+Pipeline parameters are especially useful when resubmitting a pipeline job, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
 
 In this article, you learn how to do the following:
 
 > [!div class="checklist"]
 > * Create pipeline parameters
 > * Delete and manage pipeline parameters
-> * Trigger pipeline runs while adjusting pipeline parameters
+> * Trigger pipeline jobs while adjusting pipeline parameters
 
 ## Prerequisites
 
@@ -96,7 +96,7 @@ In this section, you will learn how to attach and detach component parameter to
 
 ### Attach component parameter to pipeline parameter
 
-You can attach the same component parameters of duplicated components to the same pipeline parameter if you want to alter the value at one time when triggering the pipeline run.
+You can attach the same component parameters of duplicated components to the same pipeline parameter if you want to alter the value at one time when triggering the pipeline job.
 
 The following example has duplicated **Clean Missing Data** component. For each **Clean Missing Data** component, attach **Replacement value** to pipeline parameter **replace-missing-value**:
 
@@ -159,18 +159,18 @@ Use the following steps to delete a component pipeline parameter:
 > [!NOTE]
 > Deleting a pipeline parameter will cause all attached component parameters to be detached and the value of detached component parameters will keep current pipeline parameter value.
 
-## Trigger a pipeline run with pipeline parameters
+## Trigger a pipeline job with pipeline parameters
 
-In this section, you learn how to submit a pipeline run while setting pipeline parameters.
+In this section, you learn how to submit a pipeline job while setting pipeline parameters.
 
-### Resubmit a pipeline run
+### Resubmit a pipeline job
 
-After submitting a pipeline with pipeline parameters, you can resubmit a pipeline run with different parameters:
+After submitting a pipeline with pipeline parameters, you can resubmit a pipeline job with different parameters:
 
-1. Go to pipeline detail page. In the **Pipeline run overview** window, you can check current pipeline parameters and values.
+1. Go to pipeline detail page. In the **Pipeline job overview** window, you can check current pipeline parameters and values.
 
 1. Select **Resubmit**.
-1. In the **Setup pipeline run**, specify your new pipeline parameters.
+1. In the **Setup pipeline job**, specify your new pipeline parameters.
 
 ![Screenshot that shows resubmit pipeline with pipeline parameters](media/how-to-use-pipeline-parameter/resubmit-pipeline-run.png)
 

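The attach behavior this file documents (one pipeline parameter driving several duplicated components, so resubmitting with a new value changes all of them at once) can be sketched as a simple mapping. Everything below is an illustrative stand-in, not designer or SDK code; the names mirror the article's **replace-missing-value** example:

```python
# One pipeline parameter, as in the article's replace-missing-value example.
pipeline_parameters = {"replace-missing-value": 0}

# Two duplicated "Clean Missing Data" components, each with its
# Replacement value attached to the same pipeline parameter.
attachments = {
    "clean_missing_data_1.replacement_value": "replace-missing-value",
    "clean_missing_data_2.replacement_value": "replace-missing-value",
}

def resolve(attachments, pipeline_parameters):
    """Every attached component parameter takes the pipeline parameter's value."""
    return {comp: pipeline_parameters[name] for comp, name in attachments.items()}

# Resubmitting the job with a new value updates both components at one time.
pipeline_parameters["replace-missing-value"] = -1
resolved = resolve(attachments, pipeline_parameters)
print(resolved)
```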
articles/machine-learning/how-to-use-reinforcement-learning.md

Lines changed: 10 additions & 10 deletions

@@ -26,7 +26,7 @@ In this article you learn how to:
 > * Set up an experiment
 > * Define head and worker nodes
 > * Create an RL estimator
-> * Submit an experiment to start a run
+> * Submit an experiment to start a job
 > * View results
 
 This article is based on the [RLlib Pong example](https://aka.ms/azureml-rl-pong) that can be found in the Azure Machine Learning notebook [GitHub repository](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/README.md).
@@ -102,7 +102,7 @@ ws = Workspace.from_config()
 
 ### Create a reinforcement learning experiment
 
-Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning run. In Azure Machine Learning, experiments are logical collections of related trials to organize run logs, history, outputs, and more.
+Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning job. In Azure Machine Learning, experiments are logical collections of related trials to organize job logs, history, outputs, and more.
 
 ```python
 experiment_name='rllib-pong-multi-node'
@@ -212,7 +212,7 @@ else:
 
 Use the [ReinforcementLearningEstimator](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl.reinforcementlearningestimator) to submit a training job to Azure Machine Learning.
 
-Azure Machine Learning uses estimator classes to encapsulate run configuration information. This lets you specify how to configure a script execution.
+Azure Machine Learning uses estimator classes to encapsulate job configuration information. This lets you specify how to configure a script execution.
 
 ### Define a worker configuration
 
@@ -312,7 +312,7 @@ rl_estimator = ReinforcementLearningEstimator(
 cluster_coordination_timeout_seconds=3600,
 
 # Maximum time for the whole Ray job to run
-# This will cut off the run after an hour
+# This will cut off the job after an hour
 max_run_duration_seconds=3600,
 
 # Allow the docker container Ray runs in to make full use
@@ -396,7 +396,7 @@ def on_train_result(info):
 value=info["result"]["episodes_total"])
 ```
 
-## Submit a run
+## Submit a job
 
 [Run](/python/api/azureml-core/azureml.core.run%28class%29) handles the run history of in-progress or complete jobs.
 
@@ -408,7 +408,7 @@ run = exp.submit(config=rl_estimator)
 
 ## Monitor and view results
 
-Use the Azure Machine Learning Jupyter widget to see the status of your runs in real time. The widget shows two child runs: one for head and one for workers.
+Use the Azure Machine Learning Jupyter widget to see the status of your jobs in real time. The widget shows two child jobs: one for head and one for workers.
 
 ```python
 from azureml.widgets import RunDetails
@@ -418,15 +418,15 @@ run.wait_for_completion()
 ```
 
 1. Wait for the widget to load.
-1. Select the head run in the list of runs.
+1. Select the head job in the list of jobs.
 
-Select **Click here to see the run in Azure Machine Learning studio** for additional run information in the studio. You can access this information while the run is in progress or after it completes.
+Select **Click here to see the job in Azure Machine Learning studio** for additional job information in the studio. You can access this information while the job is in progress or after it completes.
 
-![Line graph showing how run details widget](./media/how-to-use-reinforcement-learning/pong-run-details-widget.png)
+![Line graph showing how job details widget](./media/how-to-use-reinforcement-learning/pong-run-details-widget.png)
 
 The **episode_reward_mean** plot shows the mean number of points scored per training epoch. You can see that the training agent initially performed poorly, losing its matches without scoring a single point (shown by a reward_mean of -21). Within 100 iterations, the training agent learned to beat the computer opponent by an average of 18 points.
 
-If you browse logs of the child run, you can see the evaluation results recorded in driver_log.txt file. You may need to wait several minutes before these metrics become available on the Run page.
+If you browse logs of the child job, you can see the evaluation results recorded in driver_log.txt file. You may need to wait several minutes before these metrics become available on the Job page.
 
 In short work, you have learned to configure multiple compute resources to train a reinforcement learning agent to play Pong very well against a computer opponent.
 

articles/machine-learning/how-to-use-secrets-in-runs.md

Lines changed: 8 additions & 8 deletions

@@ -1,7 +1,7 @@
 ---
 title: Authentication secrets in training
 titleSuffix: Azure Machine Learning
-description: Learn how to pass secrets to training runs in secure fashion using the Azure Key Vault for your workspace.
+description: Learn how to pass secrets to training jobs in secure fashion using the Azure Key Vault for your workspace.
 services: machine-learning
 author: rastala
 ms.author: roastala
@@ -13,19 +13,19 @@ ms.topic: how-to
 ms.custom: sdkv1, event-tier1-build-2022
 ---
 
-# Use authentication credential secrets in Azure Machine Learning training runs
+# Use authentication credential secrets in Azure Machine Learning training jobs
 
 [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
 
-In this article, you learn how to use secrets in training runs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote run context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
+In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
 
-Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote runs securely through a set of APIs in the Azure Machine Learning Python SDK.
+Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote jobs securely through a set of APIs in the Azure Machine Learning Python SDK.
 
 The standard flow for using secrets is:
 1. On local computer, log in to Azure and connect to your workspace.
 2. On local computer, set a secret in Workspace Key Vault.
-3. Submit a remote run.
-4. Within the remote run, get the secret from Key Vault and use it.
+3. Submit a remote job.
+4. Within the remote job, get the secret from Key Vault and use it.
 
 ## Set secrets
 
@@ -56,10 +56,10 @@ You can list secret names using the [`list_secrets()`](/python/api/azureml-core/
 
 In your local code, you can use the [`get_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#get-secret-name-) method to get the secret value by name.
 
-For runs submitted the [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none----kwargs-) , use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
+For jobs submitted the [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none----kwargs-) , use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
 
 ```python
-# Code in submitted run
+# Code in submitted job
 from azureml.core import Experiment, Run
 
 run = Run.get_context()

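The four-step flow in this file (set a secret locally, read it back inside the submitted job) can be illustrated with a local stand-in for the workspace Key Vault. This is a sketch only; in a real workspace the set side goes through the workspace's default Key Vault object (`set_secret`) and the get side uses `Run.get_context().get_secret(...)` as the article describes, and the secret name below is hypothetical:

```python
# Local stand-in for the workspace Key Vault, illustrating the flow only.
# Real code would set the secret via the workspace's default Key Vault on
# the local computer and call Run.get_context().get_secret(...) in the job.

class FakeKeyVault:
    def __init__(self):
        self._secrets = {}

    def set_secret(self, name, value):
        # Step 2: on the local computer, store the secret by name.
        self._secrets[name] = value

    def get_secret(self, name):
        # Step 4: within the remote job, fetch the value by name --
        # the training script never contains the cleartext secret.
        return self._secrets[name]

vault = FakeKeyVault()
vault.set_secret("my-db-password", "s3cret!")   # hypothetical secret name
password = vault.get_secret("my-db-password")
print(password)
```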
articles/machine-learning/how-to-use-sweep-in-pipeline.md

Lines changed: 5 additions & 5 deletions

@@ -67,15 +67,15 @@ Below code snippet shows how to enable sweep for `train_model`.
 
 After you submit a pipeline job, the SDK or CLI widget will give you a web URL link to Studio UI. The link will guide you to the pipeline graph view by default.
 
-To check details of the sweep step, double click the sweep step and navigate to the **child run** tab in the panel on the right.
+To check details of the sweep step, double click the sweep step and navigate to the **child job** tab in the panel on the right.
 
-:::image type="content" source="./media/how-to-use-sweep-in-pipeline/pipeline-view.png" alt-text="Screenshot of the pipeline with child run and the train_model node highlighted." lightbox= "./media/how-to-use-sweep-in-pipeline/pipeline-view.png":::
+:::image type="content" source="./media/how-to-use-sweep-in-pipeline/pipeline-view.png" alt-text="Screenshot of the pipeline with child job and the train_model node highlighted." lightbox= "./media/how-to-use-sweep-in-pipeline/pipeline-view.png":::
 
-This will link you to the sweep job page as seen in the below screenshot. Navigate to **child run** tab, here you can see the metrics of all child runs and list of all child runs.
+This will link you to the sweep job page as seen in the below screenshot. Navigate to **child job** tab, here you can see the metrics of all child jobs and list of all child jobs.
 
-:::image type="content" source="./media/how-to-use-sweep-in-pipeline/sweep-job.png" alt-text="Screenshot of the job page on the child runs tab." lightbox= "./media/how-to-use-sweep-in-pipeline/sweep-job.png":::
+:::image type="content" source="./media/how-to-use-sweep-in-pipeline/sweep-job.png" alt-text="Screenshot of the job page on the child jobs tab." lightbox= "./media/how-to-use-sweep-in-pipeline/sweep-job.png":::
 
-If a child runs failed, select the name of that child run to enter detail page of that specific child run (see screenshot below). The useful debug information is under **Outputs + Logs**.
+If a child jobs failed, select the name of that child job to enter detail page of that specific child job (see screenshot below). The useful debug information is under **Outputs + Logs**.
 
 :::image type="content" source="./media/how-to-use-sweep-in-pipeline/child-run.png" alt-text="Screenshot of the output + logs tab of a child run." lightbox= "./media/how-to-use-sweep-in-pipeline/child-run.png":::
 

articles/machine-learning/how-to-use-synapsesparkstep.md

Lines changed: 3 additions & 3 deletions

@@ -198,9 +198,9 @@ sdf.coalesce(1).write\
 .csv(args.output_dir)
 ```
 
-This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child run, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
+This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child job, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
 
-:::image type="content" source="media/how-to-use-synapsesparkstep/synapsesparkstep-stdout.png" alt-text="Screenshot of Studio showing stdout tab of child run":::
+:::image type="content" source="media/how-to-use-synapsesparkstep/synapsesparkstep-stdout.png" alt-text="Screenshot of Studio showing stdout tab of child job":::
 
 ## Use the `SynapseSparkStep` in a pipeline
 
@@ -244,7 +244,7 @@ pipeline_run = pipeline.submit('synapse-pipeline', regenerate_outputs=True)
 
 The above code creates a pipeline consisting of the data preparation step on Apache Spark pools powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure calculates the execution graph by examining the data dependencies between the steps. In this case, there's only a straightforward dependency that `step2_input` necessarily requires `step1_output`.
 
-The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Run within it. Individual steps within the pipeline are run as Child Runs of this main run and can be monitored and reviewed in the Experiments page of Studio.
+The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Job within it. Individual steps within the pipeline are run as Child Jobs of this main job and can be monitored and reviewed in the Experiments page of Studio.
 
 ## Next steps
 

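The dependency-driven execution graph described in this file's diff can be illustrated with a toy topological sort: `step_2` consumes `step_1`'s output, so it must run afterwards. This is a stand-alone sketch, not Azure ML code; only the step names mirror the article's example:

```python
def execution_order(deps):
    """deps maps step -> set of steps it depends on; returns a valid order."""
    order, done = [], set()

    def visit(step):
        # Schedule every dependency before the step itself (assumes no cycles).
        for d in deps.get(step, set()):
            if d not in done:
                visit(d)
        if step not in done:
            done.add(step)
            order.append(step)

    for s in deps:
        visit(s)
    return order

# step_2 requires step_1's output, mirroring the article's pipeline.
print(execution_order({"step_1": set(), "step_2": {"step_1"}}))
# → ['step_1', 'step_2']
```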