# Query & compare experiments and runs with MLflow
Experiments and runs tracking information in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, creating a more seamless transition between local runs and the cloud by removing cloud-specific dependencies.
> [!NOTE]
> The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just to logging but also to querying logged metrics. Instead, use MLflow to manage experiments and runs, as this article explains.
MLflow allows you to:
* Create, query, delete, and search for experiments in a workspace.
* Query, delete, and search for runs in a workspace.
* Track and retrieve metrics, parameters, artifacts, and models from runs.
In this article, you'll learn how to query and compare experiments and runs in your workspace using Azure Machine Learning and the MLflow SDK in Python. See [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments) for a detailed comparison between MLflow Open-Source and MLflow when connected to Azure Machine Learning.
## Getting all the experiments

You can get all the active experiments in the workspace using MLflow:
## Search experiments
The `search_experiments()` method, available since MLflow 2.0, allows searching for experiments that match criteria using `filter_string`. The following query retrieves three experiments with different IDs.
```python
mlflow.search_experiments(
    filter_string="experiment_id IN ('CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')"
)
```
## Getting a specific experiment
69
67
70
68
Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
71
69
MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. Among other columns, the results include:
- Parameters, with column name `params.<parameter-name>`.
- Metrics (last logged value of each), with column name `metrics.<metric-name>`.

Notice that when runs are returned, all their metrics and parameters are returned with them. However, for metrics containing multiple values (for instance, a loss curve or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `MlflowClient.get_metric_history` method.
### Ordering runs
By default, runs are ordered descending by `start_time`, which is the time the run was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
You can also look for runs with a specific combination of hyperparameters using the parameter `filter_string`. Use `params` to access a run's parameters and `metrics` to access metrics logged in the run. MLflow supports expressions joined by the `AND` keyword (the syntax does not support `OR`):
## Getting metrics, parameters, artifacts and models
The method `search_runs` returns a Pandas `DataFrame` containing a limited amount of information by default. You can get Python objects instead, which may be useful for getting details about the runs. Use the `output_format` parameter to control how output is returned:
```python
last_run = runs[-1]
print("Last run ID:", last_run.info.run_id)
```
### Getting params and metrics from a run
When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
### Getting artifacts from a run

Any artifact logged by a run can be queried with MLflow. Artifacts can't be accessed using the run object itself; the MLflow client should be used instead:
> __MLflow 2.0 advisory:__ In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
### Getting models from a run
Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it is stored. The method `list_artifacts` can be used to find artifacts that represent a model, since MLflow models are always folders. You can download a model by indicating the path where it is stored, using the `download_artifacts` method:
> [!TIP]
> To query and load models registered in the Model Registry, see [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
## Getting child (nested) runs
MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning optimization processes and Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the tag `mlflow.parentRunId`, which contains the run ID of the parent run.
## Support matrix for querying runs and experiments

The MLflow SDK exposes several methods to retrieve runs. The following table compares what is supported by MLflow Open-Source and by MLflow when connected to Azure Machine Learning:

| Feature | Supported by MLflow | Supported by Azure Machine Learning |
|---|---|---|
| Renaming experiments |**✓**||
> [!NOTE]
> - <sup>1</sup> Check the section [Query runs inside an experiment](#query-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.