articles/machine-learning/service/how-to-track-experiments.md
5 additions & 5 deletions
@@ -17,7 +17,7 @@ ms.date: 09/24/2018
 In the Azure Machine Learning service, you can track your experiments and monitor metrics to enhance the model creation process. In this article, you'll learn about the different ways to add logging to your training script, how to submit the experiment with **start_logging** and **ScriptRunConfig**, how to check the progress of a running job, and how to view the results of a run.

 >[!NOTE]
-> Code in this article was tested with Azure Machine Learning SDK version 0.168
+> Code in this article was tested with Azure Machine Learning SDK version 0.1.74

 ## List of training metrics
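The paragraph updated in the hunk above refers to logging metrics and submitting an experiment with **start_logging**. For reference, here is a minimal sketch of that interactive-logging pattern. It assumes the classic Azure Machine Learning Python SDK (v1) API surface and a `config.json` written during workspace setup; the experiment name and metric values are placeholders, and exact calls may differ in the 0.1.74 preview SDK that the changed note pins.

```python
# Minimal sketch of interactive experiment logging with start_logging.
# Assumes the classic Azure ML SDK (v1) and a config.json in the working directory.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()                        # load workspace details from config.json
exp = Experiment(workspace=ws, name="track-demo")   # hypothetical experiment name

run = exp.start_logging()                 # start an interactive run
run.log("alpha", 0.03)                    # log a single numeric metric
run.log_list("losses", [1.2, 0.9, 0.7])   # log a list of values
run.complete()                            # mark the run as finished
```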
@@ -63,7 +63,6 @@ Before adding logging and submitting an experiment, you must set up the workspace
@@ -99,7 +98,8 @@ The following example trains a simple sklearn Ridge model locally in a local Jup
 2. Add experiment tracking using the Azure Machine Learning service SDK, and upload a persisted model into the experiment run record. The following code adds tags, logs, and uploads a model file to the experiment run.
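The context line above describes step 2 of the local-training example: tagging the run, logging metrics, and uploading a persisted model file into the run record. The snippet below is a hedged sketch of that step using the classic Azure ML SDK (v1) `Run` API; the experiment name, metric names, and file names are illustrative, not the article's exact values.

```python
# Sketch of step 2: tag the run, log metrics, and upload a persisted model.
# Assumes the classic Azure ML SDK (v1); names and values are placeholders.
import joblib
from sklearn.linear_model import Ridge
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="track-demo")   # hypothetical experiment name

# Train a trivial Ridge model so the sketch is self-contained, then persist it.
model = Ridge(alpha=0.03)
model.fit([[0.0], [1.0], [2.0]], [0.0, 1.1, 1.9])
joblib.dump(value=model, filename="ridge_model.pkl")

run = exp.start_logging()
run.tag("model_type", "ridge")   # attach a searchable tag to the run
run.log("alpha", 0.03)           # log the hyperparameter used
run.upload_file(name="outputs/ridge_model.pkl",
                path_or_stream="ridge_model.pkl")   # store the model file in the run record
run.complete()
```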