articles/machine-learning/concept-mlflow.md (28 additions, 2 deletions)
@@ -8,7 +8,7 @@ ms.author: mopeakande
ms.reviewer: cacrest
ms.service: azure-machine-learning
ms.subservice: mlops
-ms.date: 09/25/2024
+ms.date: 09/30/2024
ms.topic: concept-article
ms.custom: cliv2, sdkv2, FY25Q1-Linter
#Customer intent: As a data scientist, I want to understand what MLflow is and does so that I can use MLflow with my models.
@@ -29,11 +29,37 @@ Azure Machine Learning workspaces are MLflow-compatible, which means that you ca
> [!TIP]
> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the Azure Machine Learning v2 SDK. You can use MLflow logging to ensure that your training routines are cloud-agnostic, portable, and have no dependency on Azure Machine Learning.

+## What is tracking?
+
+When you work with jobs, Azure Machine Learning automatically tracks some information about experiments, such as code, environment, and input and output data. However, models, parameters, and metrics are specific to the scenario, so model builders must configure their tracking.
+
+The saved tracking metadata varies by experiment and can include the following items, as illustrated in the sketch after this list:
+
+- Code
+- Environment details, such as OS version and Python packages
+- Input data
+- Parameter configurations
+- Models
+- Evaluation metrics
+- Evaluation visualizations, such as confusion matrices and importance plots
+- Evaluation results, including some evaluation predictions
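The following is a minimal sketch of how a training script might record a few of these categories with MLflow; the parameter, metric, and file names are hypothetical:

```python
import mlflow

with mlflow.start_run():
    # Parameter configuration
    mlflow.log_param("learning_rate", 0.01)

    # Evaluation metric
    mlflow.log_metric("accuracy", 0.91)

    # Evaluation visualization: log a plot file that the script
    # saved earlier, such as a confusion matrix image.
    # mlflow.log_artifact("confusion_matrix.png")
```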
+## Benefits of tracking experiments
+
+Whether you train models with jobs in Azure Machine Learning or interactively in notebooks, experiment tracking helps you:
+
+- Organize all of your machine learning experiments in a single place. You can then search and filter experiments and drill down to see details about previous experiments.
+- Easily compare experiments, analyze results, and debug model training.
+- Reproduce or rerun experiments to validate results.
+- Improve collaboration, because you can see what other teammates are doing, share experiment results, and access experiment data programmatically, as shown in the sketch after this list.
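For example, programmatic access to experiment data might look like the following sketch, which uses MLflow's search API; the experiment name is hypothetical:

```python
import mlflow

# Retrieve past runs of an experiment as a pandas DataFrame.
# Metric and parameter columns appear as metrics.<name> and params.<name>.
runs = mlflow.search_runs(experiment_names=["my-experiment"])
print(runs.head())
```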
## Tracking with MLflow

+Azure Machine Learning workspaces are MLflow-compatible. This compatibility means you can use MLflow to track runs, metrics, parameters, and artifacts in workspaces without needing to change your training routines or inject any cloud-specific syntax. To learn how to use MLflow for tracking experiments and runs in Azure Machine Learning workspaces, see [Track experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
+
Azure Machine Learning uses MLflow tracking to log metrics and store artifacts for your experiments. When you're connected to Azure Machine Learning, all MLflow tracking materializes in the workspace you're working in.
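To illustrate what a cloud-agnostic training routine can look like, here's a brief sketch that uses MLflow autologging with scikit-learn (an assumption; any framework that MLflow autologging supports works similarly). Run locally, it logs to local files; run as an Azure Machine Learning job, the same code logs to the workspace unchanged:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Autologging captures parameters, metrics, and the trained model
# for supported frameworks such as scikit-learn.
mlflow.autolog()

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    LogisticRegression(max_iter=200).fit(X, y)
```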
-To learn how to set up MLflow tracking for experiments and training routines, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). You can also [query and compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+To learn how to enable logging to monitor real-time run metrics with MLflow, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). You can also [query and compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).

MLflow in Azure Machine Learning provides a way to centralize tracking. You can connect MLflow to Azure Machine Learning workspaces even when you're working locally or in a different cloud. The Azure Machine Learning workspace provides a centralized, secure, and scalable location to store training metrics and models.
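One possible way to point a local MLflow session at a workspace is the following sketch; it assumes the `azure-ai-ml` and `azureml-mlflow` packages are installed, and the placeholder values are yours to fill in:

```python
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Look up the workspace's MLflow tracking URI, then direct all
# subsequent MLflow calls to that workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)
workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)
```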
-*Tracking* is the process of saving relevant information about experiments. In this article, you learn how to use MLflow for tracking experiments and runs in Azure Machine Learning workspaces. The saved tracking metadata varies by experiment, and can include:
+*Tracking* is the process of saving relevant information about experiments. In this article, you learn how to use MLflow for tracking experiments and runs in Azure Machine Learning workspaces. For more information about supported MLflow functionalities in Azure Machine Learning, see [MLflow and Azure Machine Learning](concept-mlflow.md).
-
-- Code
-- Environment details such as OS version and Python packages
-- Input data
-- Parameter configurations
-- Models
-- Evaluation metrics
-- Evaluation visualizations such as confusion matrices and importance plots
-- Evaluation results, including some evaluation predictions
-
-When you work with jobs, Azure Machine Learning automatically tracks some information about experiments, such as code, environment, and input and output data. However, models, parameters, and metrics are specific to the scenario, so model builders must configure their tracking.
-
-Whether you train models with jobs in Azure Machine Learning or interactively in notebooks, experiment tracking helps you:
-
-- Organize all of your machine learning experiments in a single place. You can then search and filter experiments and drill down to see details about previous experiments.
-- Easily compare experiments, analyze results, and debug model training.
-- Reproduce or rerun experiments to validate results.
-- Improve collaboration, because you can see what other teammates are doing, share experiment results, and access experiment data programmatically.
+
+> [!NOTE]
+> Some methods available in the MLflow API might not be available when connected to Azure Machine Learning. For details about supported and unsupported operations, see [Support matrix for querying runs and experiments](how-to-track-experiments-mlflow.md#support-matrix-for-querying-runs-and-experiments).

> [!NOTE]
> - To track experiments running on Azure Databricks, see [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md).
> - To track experiments running on Azure Synapse Analytics, see [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).

-Azure Machine Learning workspaces are MLflow-compatible. This compatibility means you can use MLflow to track runs, metrics, parameters, and artifacts in workspaces without needing to change your training routines or inject any cloud-specific syntax. For more information about supported MLflow and Azure Machine Learning functionalities, see [MLflow and Azure Machine Learning](concept-mlflow.md).
-
->[!NOTE]
->Some methods available in the MLflow API might not be available when connected to Azure Machine Learning. For details about supported and unsupported operations, see [Support matrix for querying runs and experiments](how-to-track-experiments-mlflow.md#support-matrix-for-querying-runs-and-experiments).
-
## Prerequisites

- Have an Azure subscription with the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
@@ -55,7 +35,7 @@ Azure Machine Learning workspaces are MLflow-compatible. This compatibility mean
## Configure the experiment

-MLflow organizes information in experiments and runs, which are called *jobs* in Azure Machine Learning. By default, runs log to an automatically created experiment named **Default**, but you can configure which experiment to track.
+MLflow organizes information in experiments and runs. Runs are called *jobs* in Azure Machine Learning. By default, runs log to an automatically created experiment named **Default**, but you can configure which experiment to track.
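A minimal sketch of selecting the experiment to track from interactive code; the experiment name is hypothetical, and MLflow creates the experiment if it doesn't already exist:

```python
import mlflow

# All subsequent runs log to this experiment instead of "Default".
mlflow.set_experiment("heart-condition-classifier")

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)
```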
# [Notebooks](#tab/interactive)
@@ -266,4 +246,3 @@ For more information about how to retrieve or compare information from experimen