Commit 9ea748e

Merge pull request #199420 from ssalgadodev/runsToJobs5: Runs to jobs

2 parents 77e9dca + 1870a1f

6 files changed (+20, -19 lines)

articles/machine-learning/how-to-high-availability-machine-learning.md

Lines changed: 2 additions & 2 deletions
@@ -162,7 +162,7 @@ Jobs in Azure Machine Learning are defined by a job specification. This specific

When your primary workspace becomes unavailable, you can switch over to the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit jobs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend avoiding hardcoded workspace references. Instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize manual user steps when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines, to the new workspace.

-Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Depending on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue run submission. If you have configured your primary and secondary workspaces to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace; for example, if both workspaces share the same Docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
+Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Depending on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue job submission. If you have configured your primary and secondary workspaces to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace; for example, if both workspaces share the same Docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).

![Reference resource configuration](./media/how-to-high-availability-machine-learning/bcdr-resource-configuration.png)
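For reference, the workspace config file pattern recommended above can be as small as the following sketch (SDK v1); the config path shown is an illustrative assumption, not part of this PR:

```python
# A minimal sketch, assuming a standard config.json containing
# subscription_id, resource_group, and workspace_name.
from azureml.core import Workspace

# Point `path` at the secondary workspace's config file to fail over
# without touching the rest of the code.
ws = Workspace.from_config(path=".azureml/config.json")
print(ws.name, ws.location)
```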

@@ -183,7 +183,7 @@ The following artifacts can be exported and imported between workspaces by using

> [!TIP]
> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However, datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) command to register a dataset.
-> * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
+> * __Job outputs__ are stored in the default storage account associated with a workspace. While job outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
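To illustrate the first tip, a dataset that points at shared or replicated storage can be registered again on the failover workspace. The sketch below uses the SDK v1 Python Dataset API as an alternative to the CLI command linked above; the datastore and dataset names are placeholder assumptions:

```python
# A minimal sketch, assuming both workspaces can reach the same underlying storage.
from azureml.core import Dataset, Datastore, Workspace

secondary_ws = Workspace.from_config()  # resolves to the failover workspace

# "shared-datastore" and "training-data" are hypothetical names.
datastore = Datastore.get(secondary_ws, "shared-datastore")
dataset = Dataset.File.from_files(path=(datastore, "training-data/"))

# Register under the same name so existing job submission code keeps working.
dataset.register(workspace=secondary_ws, name="training-data", create_new_version=True)
```
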
## Recovery options

articles/machine-learning/how-to-log-pipelines-application-insights.md

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ ms.date: 10/21/2021
ms.topic: how-to
ms.custom: devx-track-python, sdkv1, event-tier1-build-2022
---
+
# Collect machine learning pipeline log files in Application Insights for alerts and debugging

[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]

articles/machine-learning/how-to-log-view-metrics.md

Lines changed: 13 additions & 13 deletions
@@ -34,7 +34,7 @@ Logs can help you diagnose errors and warnings, or track performance metrics lik

> [!TIP]
-> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).

## Prerequisites

@@ -58,7 +58,7 @@ The following table describes how to log specific value types:

|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`||

-## Log a training run with MLflow
+## Log a training job with MLflow

To set up for logging with MLflow, import `mlflow` and set the tracking URI:

@@ -76,16 +76,16 @@ ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
```

-### Interactive runs
+### Interactive jobs

When training interactively, such as in a Jupyter Notebook, use the following pattern:

1. Create or set the active experiment.
-1. Start the run.
+1. Start the job.
1. Use logging methods to log metrics and other information.
-1. End the run.
+1. End the job.

-For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a run:
+For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a job:

```python
from mlflow.tracking import MlflowClient
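# The hunk truncates the original snippet at this point; what follows is a
# minimal sketch of the four-step interactive pattern described above, not
# the PR's own code. The experiment name, parameter, and metric values are
# placeholder assumptions.
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())

# 1. Create or set the active experiment.
mlflow.set_experiment("interactive-example")

# 2. Start the job (an MLflow run).
mlflow.start_run()

# 3. Use logging methods to log metrics and other information.
mlflow.log_param("learning_rate", 0.01)
mlflow.log_metric("train_loss", 0.42)

# 4. End the job.
mlflow.end_run()
```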
@@ -132,7 +132,7 @@ For remote training runs, the tracking URI and experiment are set automatically.

To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).

-## View run information
+## View job information

You can view the logged information using MLflow through the [MLflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. After a training job completes, you can retrieve it using the [MlflowClient()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient):
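As a quick illustration of that retrieval pattern, the following sketch fetches metrics and parameters for a completed job; the run ID is a placeholder assumption:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Substitute a real MLflow run ID for the placeholder below.
finished_mlflow_run = client.get_run("<run-id>")

metrics = finished_mlflow_run.data.metrics  # dict of logged metrics
params = finished_mlflow_run.data.params    # dict of logged parameters
tags = finished_mlflow_run.data.tags        # dict of run tags
```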
@@ -159,22 +159,22 @@ params = finished_mlflow_run.data.params

<a name="view-the-experiment-in-the-web-portal"></a>

-## View run metrics in the studio UI
+## View job metrics in the studio UI

-You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
+You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).

-Navigate to the **Experiments** tab. To view all your runs in your workspace across experiments, select the **All runs** tab. You can drill down on runs for specific experiments by applying the Experiment filter in the top menu bar.
+Navigate to the **Jobs** tab. To view all your jobs in your workspace across experiments, select the **All jobs** tab. You can drill down on jobs for specific experiments by applying the Experiment filter in the top menu bar.

For the individual experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.

-You can also edit the run list table to select multiple runs and display either the last, minimum, or maximum logged value for your runs. Customize your charts to compare the logged metrics values and aggregates across multiple runs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
+You can also edit the job list table to select multiple jobs and display either the last, minimum, or maximum logged value for your jobs. Customize your charts to compare the logged metrics values and aggregates across multiple jobs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.

-### View and download log files for a run
+### View and download log files for a job

Log files are an essential resource for debugging Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:

-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
1. Select the runID for a specific run.
1. Select **Outputs and logs** at the top of the page.
1. Select **Download all** to download all your logs into a zip folder.

articles/machine-learning/how-to-machine-learning-fairness-aml.md

Lines changed: 2 additions & 2 deletions
@@ -207,8 +207,8 @@ The following example shows how to use the fairness package. We will upload mode

If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (for example, male vs. female).
Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:

-* **Experiments pane (Preview)**
-1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
+* **Jobs pane (Preview)**
+1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
1. Select a particular experiment to view all the runs in that experiment.
1. Select a run, and then select the **Fairness** tab to view the explanation visualization dashboard.
1. Once you land on the **Fairness** tab, click a **fairness id** from the menu on the right.

articles/machine-learning/how-to-manage-environments-in-studio.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ For a high-level overview of how environments work in Azure Machine Learning, se

## Browse curated environments

-Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images, which reduces the run preparation cost, and they support training and inferencing scenarios.
+Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images, which reduces the job preparation cost, and they support training and inferencing scenarios.

Click on an environment to see detailed information about its contents. For more information, see [Azure Machine Learning curated environments](resource-curated-environments.md).

articles/machine-learning/how-to-use-pipeline-ui.md

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ If your pipeline fails or gets stuck on a node, first view the logs.

The **user_logs folder** contains logs generated by user code. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** file is where your code's logs (for example, print statements) show up.

-The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-run).
+The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-job).

:::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox="./media/how-to-use-pipeline-ui/view-user-log.png":::
