> - Evaluation results (including some evaluation predictions)
Some of these elements are automatically tracked by Azure Machine Learning when working with jobs (including code, environment, and input and output data). However, others, such as models, parameters, and metrics, need to be instrumented by the model builder, because they are specific to each scenario.
In this article, you will learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
> [!NOTE]
> If you want to track experiments running on Azure Databricks or Azure Synapse Analytics, see the dedicated articles [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md) or [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).
## Benefits of tracking experiments
We highly encourage machine learning practitioners to instrument their experimentation by tracking it, regardless of whether they're training with jobs in Azure Machine Learning or interactively in notebooks. Benefits include:
- Reproduce or re-run experiments to validate results.
- Improve collaboration by seeing what everyone is doing, sharing experiment results, and accessing experiment data programmatically.
### Why MLflow
Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts in them. Because the tracking API is plain MLflow, you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax, which is one of the main advantages of this approach.
The metrics and artifacts from MLflow logging are tracked in your workspace and can be viewed in Azure Machine Learning studio.
Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you have created your desired view, you can save it for future use and share it with your teammates using a direct link.
You can also __query metrics, parameters, and artifacts programmatically__ using the MLflow SDK. Use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run) as shown below:
> [!TIP]
> For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
```python
import mlflow

run = mlflow.get_run("<RUN_ID>")
metrics = run.data.metrics
tags = run.data.tags
params = run.data.params

print(metrics, tags, params)
```
To download artifacts you have logged, like files and models, you can use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
## Example notebooks
If you're looking for examples of how to use MLflow in Jupyter notebooks, see our examples repository [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow).