# Query & compare experiments and runs with MLflow
You can query experiment and run tracking information in Azure Machine Learning using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, which creates a more seamless transition between local runs and the cloud by removing cloud-specific dependencies. In this article, you learn how to query and compare experiments and runs in your workspace using Azure Machine Learning and the MLflow SDK in Python.
MLflow allows you to:
* Create, query, delete, and search for experiments in a workspace.
* Query, delete, and search for runs in a workspace.
* Track and retrieve metrics, parameters, artifacts and models from runs.
See [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments) for a detailed comparison between open-source MLflow and MLflow when connected to Azure Machine Learning.
> [!NOTE]
> The Azure Machine Learning Python SDK v2 doesn't provide native logging or tracking capabilities. This applies not just to logging but also to querying logged metrics. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
You can get all the active experiments in the workspace using MLflow:
## Search experiments
The `search_experiments()` method, available since MLflow 2.0, lets you search for experiments matching a criterion using `filter_string`. The following query retrieves three experiments with different IDs.
```python
mlflow.search_experiments(
filter_string="experiment_id IN ('CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')"
)
```
## Get a specific experiment
Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. Among other columns, the returned runs include:

- Parameters, with column name `params.<parameter-name>`.
- Metrics (last logged value of each), with column name `metrics.<metric-name>`.
Note that when runs are returned, all their metrics and parameters are also returned. However, for metrics containing multiple values (for instance, a loss curve or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `MlflowClient.get_metric_history` method.
### Order runs
By default, runs are ordered descending by `start_time`, which is the time the run was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`. You can also order by metrics to know which run generated the best results.
You can also look for runs with a specific combination of hyperparameters using the parameter `filter_string`. Use `params` to access a run's parameters and `metrics` to access metrics logged in the run. MLflow supports expressions joined by the `AND` keyword (the syntax doesn't support `OR`):
## Get metrics, parameters, artifacts and models
The method `search_runs` returns a Pandas `DataFrame` containing a limited amount of information by default. If needed, you can get Python objects instead, which may be useful for retrieving details about the runs. Use the `output_format` parameter to control how output is returned:
```python
# Assumes `exp` was retrieved earlier, for example with mlflow.get_experiment_by_name
runs = mlflow.search_runs([exp.experiment_id], output_format="list")
last_run = runs[-1]
print("Last run ID:", last_run.info.run_id)
```
### Get params and metrics from a run
When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
Any artifact logged by a run can be queried with MLflow. Artifacts can't be accessed using the run object itself; use the MLflow client instead:
> __MLflow 2.0 advisory:__ In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
### Get models from a run
Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it is stored. The method `list_artifacts` can be used to find artifacts representing a model, since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifacts` method:
```python
# `artifact_path` is the model folder found via list_artifacts
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
```
> [!TIP]
> To query and load models registered in the Model Registry, see [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
## Get child (nested) runs
MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyperparameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the tag `mlflow.parentRunId`, which contains the run ID of the parent run.