articles/machine-learning/how-to-log-view-metrics.md (2 additions, 2 deletions)
@@ -236,7 +236,7 @@ tags = run.data.tags
>[!NOTE]
> The metrics dictionary returned by `mlflow.get_run` or `mlflow.search_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values *1*, then *2*, then *3*, then *4*, only *4* is returned when calling `run.data.metrics['iteration']`.
>
-> To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
-For more information, please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#getting-metrics-parameters-artifacts-and-models).
+For more information, please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#get-metrics-parameters-artifacts-and-models).
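The note above points to `MlflowClient.get_metric_history()`. A minimal sketch of that call, where the run ID and the metric name `iteration` are placeholders:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Retrieve every logged value of a metric, not just the most recent one.
# "<run-id>" and "iteration" are placeholders for your own run and metric name.
history = client.get_metric_history("<run-id>", "iteration")
for measurement in history:
    print(measurement.step, measurement.value)
```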
articles/machine-learning/how-to-track-experiments-mlflow.md (17 additions, 19 deletions)
@@ -28,8 +28,6 @@ For a detailed comparison between open-source MLflow and MLflow when connected t
> [!NOTE]
> The Azure Machine Learning Python SDK v2 doesn't provide native logging or tracking capabilities. This applies not just to logging but also to querying logged metrics. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
-### REST API
-
You can also query and search experiments and runs by using the MLflow REST API. See [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb) for an example of how to use it.
## Prerequisites
@@ -69,7 +67,7 @@ Use MLflow to search for experiments inside of your workspace. See the following
mlflow.get_experiment('1234-5678-90AB-CDEFG')
```
-### Searching experiments
+### Search experiments
The `search_experiments()` method, available since MLflow 2.0, lets you search for experiments that match criteria using `filter_string`.
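A minimal sketch of `search_experiments()` with a `filter_string`, assuming a hypothetical experiment name prefix:

```python
import mlflow

# Find experiments whose name starts with a given prefix.
# The prefix "my-experiment" is a placeholder.
experiments = mlflow.search_experiments(filter_string="name LIKE 'my-experiment%'")
for exp in experiments:
    print(exp.experiment_id, exp.name)
```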
@@ -98,7 +96,7 @@ The `search_experiments()` method, available since Mlflow 2.0, lets you search f
## Query and search runs
-MLflow allows searching for runs inside of any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the argument `experiment_ids` and `experiment_name` to indicate which experiments you want to search. You can also indicate `search_all_experiments=True` if you want to search across all the experiments in the workspace:
+MLflow lets you search for runs inside any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the argument `experiment_ids` and `experiment_name` to indicate which experiments you want to search. You can also indicate `search_all_experiments=True` if you want to search across all the experiments in the workspace:
* By experiment name:
@@ -118,22 +116,22 @@ MLflow allows searching for runs inside of any experiment, including multiple ex
-Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments, if necessary. This might be useful in case you want to compare runs of the same model when it's being logged in different experiments (for example, by different people, different project iterations).
+Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments, if necessary. This might be useful in case you want to compare runs of the same model when it's being logged in different experiments (for example, by different people or different project iterations).
> [!IMPORTANT]
-> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't indicated, then MLflow searches by default in the current active experiment. You can set the active experiment using `mlflow.set_experiment()`.
+> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't specified, then MLflow searches by default in the current active experiment. You can set the active experiment using `mlflow.set_experiment()`.
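A minimal sketch of both options, with placeholder experiment names:

```python
import mlflow

# Search runs in specific experiments by name; the names are placeholders.
runs = mlflow.search_runs(experiment_names=["experiment-1", "experiment-2"])

# Or search runs across every experiment in the workspace.
all_runs = mlflow.search_runs(search_all_experiments=True)
```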
By default, MLflow returns the data in Pandas `Dataframe` format, which makes it handy for further processing or analysis of the runs. Returned data includes columns with:
- Basic information about the run.
- Parameters, with column name `params.<parameter-name>`.
- Metrics (last logged value of each), with column name `metrics.<metric-name>`.
-All metrics and parameters are also returned when querying runs. However, for metrics that contain multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. See [Getting params and metrics from a run](#getting-params-and-metrics-from-a-run) for an example.
+All metrics and parameters are also returned when querying runs. However, for metrics that contain multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. See [Getting params and metrics from a run](#get-params-and-metrics-from-a-run) for an example.
-### Ordering runs
+### Order runs
-By default, experiments are ordered descending by `start_time`, which is the time the experiment was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
+By default, experiments are in descending order by `start_time`, which is the time the experiment was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
* Order runs by attributes, like `start_time`:
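  A minimal sketch of ordering by an attribute, assuming a placeholder experiment name:

```python
import mlflow

# Return the runs of a hypothetical experiment, newest first.
runs = mlflow.search_runs(
    experiment_names=["experiment-1"],
    order_by=["attributes.start_time DESC"],
)
```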
@@ -166,9 +164,9 @@ By default, experiments are ordered descending by `start_time`, which is the tim
```
> [!WARNING]
-> Using `order_by` with expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` isn't currently supported. Please use the `sort_values` method from Pandas as shown in the example.
+> Using `order_by` with expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` isn't currently supported. Instead, use the `sort_values` method from Pandas as shown in the example.
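A minimal sketch of that Pandas workaround, assuming a hypothetical metric named `accuracy`:

```python
import mlflow

runs = mlflow.search_runs(experiment_names=["experiment-1"])

# Sort the returned DataFrame by a metric column instead of using order_by.
# "metrics.accuracy" is a placeholder for whichever metric you logged.
best_first = runs.sort_values(by="metrics.accuracy", ascending=False)
print(best_first[["run_id", "metrics.accuracy"]].head())
```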
-### Filtering runs
+### Filter runs
You can also look for a run with a specific combination of hyperparameters by using the parameter `filter_string`. Use `params` to access a run's parameters, `metrics` to access metrics logged in the run, and `attributes` to access run information details. MLflow supports expressions joined by the AND keyword (the syntax doesn't support OR):
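A minimal sketch of such a filter, with hypothetical parameter and metric names:

```python
import mlflow

# Parameter values are compared as strings, and conditions can only be combined with AND.
# "num_boost_round" and "accuracy" are placeholders for your own parameter and metric.
runs = mlflow.search_runs(
    experiment_names=["experiment-1"],
    filter_string="params.num_boost_round = '100' and metrics.accuracy > 0.8",
)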
-## Getting metrics, parameters, artifacts, and models
+## Get metrics, parameters, artifacts, and models
The method `search_runs` returns a Pandas `Dataframe` that contains a limited amount of information by default. You can get Python objects instead if needed, which might be useful for getting more details about the runs. Use the `output_format` parameter to control how output is returned:
@@ -281,7 +279,7 @@ last_run = runs[-1]
print("Last run ID:", last_run.info.run_id)
```
-### Getting params and metrics from a run
+### Get params and metrics from a run
When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
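A minimal sketch, assuming a placeholder experiment name:

```python
import mlflow

# Ask for Run objects instead of a Pandas DataFrame.
runs = mlflow.search_runs(experiment_names=["experiment-1"], output_format="list")

last_run = runs[-1]
print(last_run.data.params)    # dict of logged parameters
print(last_run.data.metrics)   # dict with the last logged value per metric
```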
> In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
-### Getting models from a run
+### Get models from a run
-Models can also be logged in the run and then retrieved directly from it. To retrieve it, you need to know the artifact's path where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifact` method:
+Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model since MLflow models are always folders. You can download a model by specifying the path where the model is stored, using the `download_artifact` method:
```python
artifact_path="classifier"
@@ -348,9 +346,9 @@ model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}
> [!TIP]
> To query and load models registered in the model registry, see [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
-## Getting child (nested) runs
+## Get child (nested) runs
-MLflow supports the concept of child (nested) runs. They're useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+MLflow supports the concept of child (nested) runs. These runs are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
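A minimal sketch of that tag-based query, with a placeholder parent run ID; the article's own snippet follows:

```python
import mlflow

# Find all child runs whose parent is a given run; "<parent-run-id>" is a placeholder.
child_runs = mlflow.search_runs(
    filter_string="tags.mlflow.parentRunId = '<parent-run-id>'"
)
```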
```python
hyperopt_run = mlflow.last_active_run()
@@ -398,7 +396,7 @@ The MLflow SDK exposes several methods to retrieve runs, including options to co
| Renaming experiments |**✓**||
> [!NOTE]
-> - <sup>1</sup> Check the section [Ordering runs](#ordering-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
+> - <sup>1</sup> Check the section [Ordering runs](#order-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
articles/machine-learning/how-to-use-mlflow-cli-runs.md (1 addition, 1 deletion)
@@ -264,7 +264,7 @@ print(metrics, params, tags)
```
> [!TIP]
-> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
To __download__ artifacts you've logged, such as files and models, use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
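A minimal sketch of that call, where the run ID, artifact path, and destination folder are placeholders:

```python
import mlflow

# Download a logged artifact folder from a run to a local directory.
local_path = mlflow.artifacts.download_artifacts(
    run_id="<run-id>",
    artifact_path="model",      # placeholder artifact folder name
    dst_path="./artifacts",
)
print(local_path)
```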