articles/machine-learning/service/how-to-track-experiments.md
ms.service: machine-learning
ms.subservice: core
ms.workload: data-services
ms.topic: conceptual
ms.date: 12/05/2019
ms.custom: seodec18
---
## View run details

### View active/queued runs from the browser
Compute targets used to train models are a shared resource. As such, they may have multiple runs queued or active at a given time. To see the runs for a specific compute target from your browser, use the following steps:
1. From the [Azure Machine Learning studio](https://ml.azure.com/), select your workspace, and then select __Compute__ from the left side of the page.
1. Select __Training Clusters__ to display a list of compute targets used for training. Then select the cluster.

1. Select __Runs__. The list of runs that use this cluster is displayed. To view details for a specific run, use the link in the __Run__ column. To view details for the experiment, use the link in the __Experiment__ column.

> [!TIP]
> A run can contain child runs, so one training job can result in multiple entries.
Once a run completes, it is no longer displayed on this page. To view information on completed runs, visit the __Experiments__ section of the studio and select the experiment and run. For more information, see the [Query run metrics](#queryrunmetrics) section.
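
The active/queued list shown in the studio can also be approximated from the Python SDK on a per-experiment basis. The sketch below assumes `azureml-core` is installed and a `config.json` for your workspace is in the working directory; the experiment name and the exact set of status strings are assumptions for illustration, not values from this article:

```python
def active_or_queued(runs):
    """Keep only runs that are still queued or executing."""
    return [r for r in runs if getattr(r, "status", None) in ("Queued", "Running")]

def show_active_runs(experiment_name):
    # azureml-core is assumed to be installed; imported here so the
    # pure helper above stays usable without it.
    from azureml.core import Workspace, Experiment

    # Assumes a config.json downloaded from the studio is in the working directory.
    ws = Workspace.from_config()
    for run in active_or_queued(Experiment(ws, experiment_name).get_runs()):
        print(run.id, run.status)
```

Unlike the __Runs__ tab on a cluster, `Experiment.get_runs()` lists runs per experiment rather than per compute target, so this is a complement to the browser view, not an exact replica.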
### Monitor run with Jupyter notebook widget
When you use the **ScriptRunConfig** method to submit runs, you can watch the progress of the run with a [Jupyter widget](https://docs.microsoft.com/python/api/azureml-widgets/azureml.widgets?view=azure-ml-py). Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
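
In a notebook cell, rendering the widget is a one-liner; a minimal sketch, where `run` stands for the object returned by submitting the experiment:

```python
def monitor_in_notebook(run):
    # RunDetails ships in the azureml-widgets package; imported lazily so
    # this sketch can be read without the package installed.
    from azureml.widgets import RunDetails

    # Renders a live, self-refreshing view of the run in the notebook cell.
    RunDetails(run).show()
```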
Model training and monitoring occur in the background so that you can run other tasks while you wait. You can also wait until the model has completed training before running more code. When you use **ScriptRunConfig**, you can use `run.wait_for_completion(show_output=True)` to show when the model training is complete. The `show_output` flag gives you verbose output.
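
Putting this together, here is a hedged sketch of submitting a script run and blocking until it finishes. All names are placeholders, and passing `compute_target` directly to **ScriptRunConfig** assumes a recent `azureml-core` release:

```python
def submit_and_wait(workspace, experiment_name, compute_name):
    # azureml-core is assumed to be installed; imported lazily.
    from azureml.core import Experiment, ScriptRunConfig

    src = ScriptRunConfig(
        source_directory=".",         # folder containing the training script
        script="train.py",            # hypothetical script name
        compute_target=compute_name,  # the training cluster from the steps above
    )
    run = Experiment(workspace, experiment_name).submit(src)
    # Blocks until the run finishes; show_output=True streams the driver log.
    run.wait_for_completion(show_output=True)
    return run
```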