articles/machine-learning/how-to-high-availability-machine-learning.md (2 additions, 2 deletions)
@@ -162,7 +162,7 @@ Jobs in Azure Machine Learning are defined by a job specification. This specific
When your primary workspace becomes unavailable, you can switch over to the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit jobs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend avoiding hardcoded workspace references; instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize the manual steps needed when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines, to the new workspace.
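For example, a minimal sketch of loading a failover workspace from a config file with the v1 Python SDK (the config file path is illustrative):

```python
from azureml.core import Workspace

# Load workspace details from a config file instead of hardcoding
# the subscription ID, resource group, and workspace name.
ws = Workspace.from_config(path="failover/config.json")
print(ws.name, ws.location)
```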
- Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Dependent on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue run submission. In case you have configured your primary workspace and secondary workspace resources to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace. For example, if both workspaces share the same docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
+ Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Depending on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs, such as dataset objects, in the failover workspace to continue job submission. If you've configured the primary and secondary workspaces to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace; for example, when both workspaces share the same Docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
@@ -183,7 +183,7 @@ The following artifacts can be exported and imported between workspaces by using
> [!TIP]
> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However, datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) command to register a dataset.
- > * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
+ > * __Job outputs__ are stored in the default storage account associated with a workspace. While job outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
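As a hedged sketch of the dataset tip above, this registers a dataset that points at shared, replicated storage in both workspaces using the v1 Python SDK (the datastore name, config paths, and data path are illustrative):

```python
from azureml.core import Dataset, Datastore, Workspace

# Both workspaces attach the same replicated storage as "shared_datastore".
for config_path in ("primary/config.json", "secondary/config.json"):
    ws = Workspace.from_config(path=config_path)
    datastore = Datastore.get(ws, "shared_datastore")
    dataset = Dataset.File.from_files(path=(datastore, "data/train/"))
    dataset.register(workspace=ws, name="training-data", create_new_version=True)
```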
articles/machine-learning/how-to-log-view-metrics.md (13 additions, 13 deletions)
@@ -34,7 +34,7 @@ Logs can help you diagnose errors and warnings, or track performance metrics lik
> [!TIP]
- > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+ > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
## Prerequisites
@@ -58,7 +58,7 @@ The following table describes how to log specific value types:
|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`||
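A rough, runnable sketch of the two calls in this table (the image and figure contents are illustrative):

```python
import matplotlib.pyplot as plt
import mlflow
import numpy as np

with mlflow.start_run():
    # Log a numpy array as an image artifact.
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    mlflow.log_image(img, "image.png")

    # Log a matplotlib figure as an artifact.
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [2, 3, 5])
    mlflow.log_figure(fig, "figure.png")
```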
- ## Log a training run with MLflow
+ ## Log a training job with MLflow
To set up for logging with MLflow, import `mlflow` and set the tracking URI:
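For instance, a minimal sketch (assuming the `azureml-mlflow` package is installed and a workspace config file is available):

```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()
# Point MLflow at the workspace's tracking server.
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
```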
When training interactively, such as in a Jupyter Notebook, use the following pattern:
1. Create or set the active experiment.
- 1. Start the run.
+ 1. Start the job.
1. Use logging methods to log metrics and other information.
- 1. End the run.
+ 1. End the job.
- For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a run
+ For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a job:
```python
from mlflow.tracking import MlflowClient
```
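The snippet above is cut off at the hunk boundary; a minimal sketch of the pattern it describes (the experiment name and logged values are illustrative) might look like:

```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())  # set the tracking URI
mlflow.set_experiment("interactive-experiment")        # create or set the experiment

with mlflow.start_run() as run:                        # start the job
    mlflow.log_param("learning_rate", 0.01)            # log metrics and other information
    mlflow.log_metric("accuracy", 0.91)
# the job ends automatically when the with block exits
```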
@@ -132,7 +132,7 @@ For remote training runs, the tracking URI and experiment are set automatically.
To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
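A hedged scikit-learn example (the dataset and model are illustrative):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # Save the trained model as a run artifact in MLflow format.
    mlflow.sklearn.log_model(model, artifact_path="model")
```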
- ## View run information
+ ## View job information
You can view the logged information using MLflow through the [MLflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. After a training job completes, you can retrieve it using the [MlflowClient()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient):
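A short sketch (the run ID placeholder is assumed to come from the submission output or the studio UI):

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
finished_run = client.get_run("<run-id>")  # replace with a real run ID

print(finished_run.info.status)            # for example, FINISHED
print(finished_run.data.metrics)           # dictionary of logged metrics
print(finished_run.data.params)            # dictionary of logged parameters
```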
- You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
+ You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
- Navigate to the **Experiments** tab. To view all your runs in your Workspace across Experiments, select the **All runs** tab. You can drill down on runs for specific Experiments by applying the Experiment filter in the top menu bar.
+ Navigate to the **Jobs** tab. To view all your jobs in your workspace across experiments, select the **All jobs** tab. You can drill down on jobs for specific experiments by applying the Experiment filter in the top menu bar.
For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
- You can also edit the run list table to select multiple runs and display either the last, minimum, or maximum logged value for your runs. Customize your charts to compare the logged metrics values and aggregates across multiple runs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
+ You can also edit the job list table to select multiple jobs and display either the last, minimum, or maximum logged value for your jobs. Customize your charts to compare the logged metrics values and aggregates across multiple jobs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
- ### View and download log files for a run
+ ### View and download log files for a job
Log files are an essential resource for debugging Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
- 1. Navigate to the **Experiments** tab.
+ 1. Navigate to the **Jobs** tab.
1. Select the run ID for a specific run.
1. Select **Outputs and logs** at the top of the page.
1. Select **Download all** to download all your logs into a zip folder.
articles/machine-learning/how-to-machine-learning-fairness-aml.md (2 additions, 2 deletions)
@@ -207,8 +207,8 @@ The following example shows how to use the fairness package. We will upload mode
If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (e.g., male vs. female).
Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
- * **Experiments pane (Preview)**
- 1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
+ * **Jobs pane (Preview)**
+ 1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
1. Select a particular experiment to view all the runs in that experiment.
1. Select a run, and then select the **Fairness** tab to view the explanation visualization dashboard.
1. Once on the **Fairness** tab, select a **fairness id** from the menu on the right.
articles/machine-learning/how-to-manage-environments-in-studio.md (1 addition, 1 deletion)
@@ -32,7 +32,7 @@ For a high-level overview of how environments work in Azure Machine Learning, se
## Browse curated environments
- Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images which reduces the run preparation cost and support training and inferencing scenarios.
+ Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images, which reduces job preparation cost, and they support training and inferencing scenarios.
Click on an environment to see detailed information about its contents. For more information, see [Azure Machine Learning curated environments](resource-curated-environments.md).
articles/machine-learning/how-to-use-pipeline-ui.md (1 addition, 1 deletion)
@@ -87,7 +87,7 @@ If your pipeline fails or gets stuck on a node, first view the logs.
The **user_logs folder** contains the logs generated by your user code. This folder is open by default, and the **std_log.txt** log is selected. **std_log.txt** is where your code's logs (for example, print statements) show up.
- The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-run).
+ The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a job](how-to-log-view-metrics.md#view-and-download-log-files-for-a-job).
:::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-user-log.png":::