For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow, see [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
## Manage models

- Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
+ Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training jobs, register, and deploy models produced from MLflow runs.
If you want to deploy and register your production-ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
- To register and view a model from a run, use the following steps:
+ To register and view a model from a job, use the following steps:

- 1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
+ 1. Once a job is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
```Python
- # the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
+ # the model folder produced from a job is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
```
articles/machine-learning/how-to-use-pipeline-parameter.md
@@ -16,14 +16,14 @@ ms.custom: designer
Use pipeline parameters to build flexible pipelines in the designer. Pipeline parameters let you dynamically set values at runtime to encapsulate pipeline logic and reuse assets.
- Pipeline parameters are especially useful when resubmitting a pipeline run, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
+ Pipeline parameters are especially useful when resubmitting a pipeline job, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
In this article, you learn how to do the following:
> [!div class="checklist"]
> * Create pipeline parameters
> * Delete and manage pipeline parameters
- > * Trigger pipeline runs while adjusting pipeline parameters
+ > * Trigger pipeline jobs while adjusting pipeline parameters
## Prerequisites
@@ -96,7 +96,7 @@ In this section, you will learn how to attach and detach component parameter to
### Attach component parameter to pipeline parameter
- You can attach the same component parameters of duplicated components to the same pipeline parameter if you want to alter the value at one time when triggering the pipeline run.
+ You can attach the same component parameters of duplicated components to the same pipeline parameter if you want to alter the value at one time when triggering the pipeline job.
The following example has a duplicated **Clean Missing Data** component. For each **Clean Missing Data** component, attach **Replacement value** to pipeline parameter **replace-missing-value**:
@@ -159,18 +159,18 @@ Use the following steps to delete a component pipeline parameter:
> [!NOTE]
> Deleting a pipeline parameter detaches all attached component parameters; the detached component parameters keep the current pipeline parameter value.
- ## Trigger a pipeline run with pipeline parameters
+ ## Trigger a pipeline job with pipeline parameters
- In this section, you learn how to submit a pipeline run while setting pipeline parameters.
+ In this section, you learn how to submit a pipeline job while setting pipeline parameters.
- ### Resubmit a pipeline run
+ ### Resubmit a pipeline job
- After submitting a pipeline with pipeline parameters, you can resubmit a pipeline run with different parameters:
+ After submitting a pipeline with pipeline parameters, you can resubmit a pipeline job with different parameters:
- 1. Go to pipeline detail page. In the **Pipeline run overview** window, you can check current pipeline parameters and values.
+ 1. Go to the pipeline detail page. In the **Pipeline job overview** window, you can check current pipeline parameters and values.
1. Select **Resubmit**.
- 1. In the **Setup pipeline run**, specify your new pipeline parameters.
+ 1. In the **Setup pipeline job**, specify your new pipeline parameters.

articles/machine-learning/how-to-use-reinforcement-learning.md
@@ -26,7 +26,7 @@ In this article you learn how to:
> * Set up an experiment
> * Define head and worker nodes
> * Create an RL estimator
- > * Submit an experiment to start a run
+ > * Submit an experiment to start a job
> * View results
This article is based on the [RLlib Pong example](https://aka.ms/azureml-rl-pong) that can be found in the Azure Machine Learning notebook [GitHub repository](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/README.md).
@@ -102,7 +102,7 @@ ws = Workspace.from_config()
### Create a reinforcement learning experiment
- Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning run. In Azure Machine Learning, experiments are logical collections of related trials to organize run logs, history, outputs, and more.
+ Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning job. In Azure Machine Learning, experiments are logical collections of related trials to organize job logs, history, outputs, and more.
```python
experiment_name = 'rllib-pong-multi-node'
exp = Experiment(workspace=ws, name=experiment_name)
```
@@ -212,7 +212,7 @@ else:
Use the [ReinforcementLearningEstimator](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl.reinforcementlearningestimator) to submit a training job to Azure Machine Learning.
- Azure Machine Learning uses estimator classes to encapsulate run configuration information. This lets you specify how to configure a script execution.
+ Azure Machine Learning uses estimator classes to encapsulate job configuration information. This lets you specify how to configure a script execution.
@@ -396,7 +396,7 @@ def on_train_result(info):
value=info["result"]["episodes_total"])
```
- ## Submit a run
+ ## Submit a job
[Run](/python/api/azureml-core/azureml.core.run%28class%29) handles the run history of in-progress or complete jobs.
@@ -408,7 +408,7 @@ run = exp.submit(config=rl_estimator)
## Monitor and view results
- Use the Azure Machine Learning Jupyter widget to see the status of your runs in real time. The widget shows two child runs: one for head and one for workers.
+ Use the Azure Machine Learning Jupyter widget to see the status of your jobs in real time. The widget shows two child jobs: one for head and one for workers.
```python
from azureml.widgets import RunDetails

RunDetails(run).show()
run.wait_for_completion()
```
1. Wait for the widget to load.
- 1. Select the head run in the list of runs.
+ 1. Select the head job in the list of jobs.
- Select **Click here to see the run in Azure Machine Learning studio** for additional run information in the studio. You can access this information while the run is in progress or after it completes.
+ Select **Click here to see the job in Azure Machine Learning studio** for additional job information in the studio. You can access this information while the job is in progress or after it completes.
- 
+ 
The **episode_reward_mean** plot shows the mean number of points scored per training epoch. You can see that the training agent initially performed poorly, losing its matches without scoring a single point (shown by a reward_mean of -21). Within 100 iterations, the training agent learned to beat the computer opponent by an average of 18 points.
- If you browse logs of the child run, you can see the evaluation results recorded in driver_log.txt file. You may need to wait several minutes before these metrics become available on the Run page.
+ If you browse the logs of the child job, you can see the evaluation results recorded in the driver_log.txt file. You may need to wait several minutes before these metrics become available on the Job page.
In summary, you learned how to configure multiple compute resources to train a reinforcement learning agent to play Pong very well against a computer opponent.
- In this article, you learn how to use secrets in training runs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote run context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
+ In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
- Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote runs securely through a set of APIs in the Azure Machine Learning Python SDK.
+ Instead, your Azure Machine Learning workspace has an associated resource called an [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote jobs securely through a set of APIs in the Azure Machine Learning Python SDK.
The standard flow for using secrets is:
1. On the local computer, log in to Azure and connect to your workspace.
2. On the local computer, set a secret in the Workspace Key Vault.
- 3. Submit a remote run.
- 4. Within the remote run, get the secret from Key Vault and use it.
+ 3. Submit a remote job.
+ 4. Within the remote job, get the secret from Key Vault and use it.
## Set secrets
@@ -56,10 +56,10 @@ You can list secret names using the [`list_secrets()`](/python/api/azureml-core/
In your local code, you can use the [`get_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#get-secret-name-) method to get the secret value by name.
- For runs submitted the [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none----kwargs-) , use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
+ For jobs submitted using [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none----kwargs-), use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
articles/machine-learning/how-to-use-sweep-in-pipeline.md
@@ -67,15 +67,15 @@ Below code snippet shows how to enable sweep for `train_model`.
After you submit a pipeline job, the SDK or CLI widget will give you a web URL link to Studio UI. The link will guide you to the pipeline graph view by default.
- To check details of the sweep step, double click the sweep step and navigate to the **child run** tab in the panel on the right.
+ To check details of the sweep step, double-click the sweep step and navigate to the **child job** tab in the panel on the right.
- :::image type="content" source="./media/how-to-use-sweep-in-pipeline/pipeline-view.png" alt-text="Screenshot of the pipeline with child run and the train_model node highlighted." lightbox= "./media/how-to-use-sweep-in-pipeline/pipeline-view.png":::
+ :::image type="content" source="./media/how-to-use-sweep-in-pipeline/pipeline-view.png" alt-text="Screenshot of the pipeline with child job and the train_model node highlighted." lightbox= "./media/how-to-use-sweep-in-pipeline/pipeline-view.png":::
- This will link you to the sweep job page as seen in the below screenshot. Navigate to **child run** tab, here you can see the metrics of all child runs and list of all child runs.
+ This links you to the sweep job page, as seen in the screenshot below. Navigate to the **child job** tab, where you can see the metrics and the list of all child jobs.
- :::image type="content" source="./media/how-to-use-sweep-in-pipeline/sweep-job.png" alt-text="Screenshot of the job page on the child runs tab." lightbox= "./media/how-to-use-sweep-in-pipeline/sweep-job.png":::
+ :::image type="content" source="./media/how-to-use-sweep-in-pipeline/sweep-job.png" alt-text="Screenshot of the job page on the child jobs tab." lightbox= "./media/how-to-use-sweep-in-pipeline/sweep-job.png":::
- If a child runs failed, select the name of that child run to enter detail page of that specific child run (see screenshot below). The useful debug information is under **Outputs + Logs**.
+ If a child job failed, select the name of that child job to enter the detail page of that specific child job (see the screenshot below). The useful debug information is under **Outputs + Logs**.
:::image type="content" source="./media/how-to-use-sweep-in-pipeline/child-run.png" alt-text="Screenshot of the output + logs tab of a child run." lightbox= "./media/how-to-use-sweep-in-pipeline/child-run.png":::
articles/machine-learning/how-to-use-synapsesparkstep.md
@@ -198,9 +198,9 @@ sdf.coalesce(1).write\
.csv(args.output_dir)
```
- This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child run, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
+ This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a Spark dataframe, and do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child job, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
- :::image type="content" source="media/how-to-use-synapsesparkstep/synapsesparkstep-stdout.png" alt-text="Screenshot of Studio showing stdout tab of child run":::
+ :::image type="content" source="media/how-to-use-synapsesparkstep/synapsesparkstep-stdout.png" alt-text="Screenshot of Studio showing stdout tab of child job":::
The above code creates a pipeline consisting of the data preparation step on Apache Spark pools powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure calculates the execution graph by examining the data dependencies between the steps. In this case, there's only a straightforward dependency that `step2_input` necessarily requires `step1_output`.
- The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Run within it. Individual steps within the pipeline are run as Child Runs of this main run and can be monitored and reviewed in the Experiments page of Studio.
+ The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Job within it. Individual steps within the pipeline are run as Child Jobs of this main job and can be monitored and reviewed in the Experiments page of Studio.