Commit 0faa721

committed
Incorporate feedback
1 parent 6c55d53 commit 0faa721

File tree

3 files changed: 20 additions and 22 deletions

articles/machine-learning/how-to-log-view-metrics.md

Lines changed: 2 additions & 2 deletions
@@ -236,7 +236,7 @@ tags = run.data.tags
 >[!NOTE]
 > The metrics dictionary returned by `mlflow.get_run` or `mlflow.search_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values *1*, then *2*, then *3*, then *4*, only *4* is returned when calling `run.data.metrics['iteration']`.
 >
-> To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> To get all metrics logged for a particular metric name, you can use `MlflowClient.get_metric_history()` as explained in the example [Get params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
 
 <a name="view-the-experiment-in-the-web-portal"></a>
@@ -256,7 +256,7 @@ This method lists all the artifacts logged in the run, but they remain stored in
 file_path = client.download_artifacts("<RUN_ID>", path="feature_importance_weight.png")
 ```
 
-For more information, please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#getting-metrics-parameters-artifacts-and-models).
+For more information, see [Get metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#get-metrics-parameters-artifacts-and-models).
 
 ## View jobs/runs information in the studio

articles/machine-learning/how-to-track-experiments-mlflow.md

Lines changed: 17 additions & 19 deletions
@@ -28,8 +28,6 @@ For a detailed comparison between open-source MLflow and MLflow when connected t
 > [!NOTE]
 > The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
 
-### REST API
-
 You can also query and search experiments and runs by using the MLflow REST API. See [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb) for an example of how to consume it.
 
 ## Prerequisites
@@ -69,7 +67,7 @@ Use MLflow to search for experiments inside of your workspace. See the following
 mlflow.get_experiment('1234-5678-90AB-CDEFG')
 ```
 
-### Searching experiments
+### Search experiments
 
 The `search_experiments()` method, available since MLflow 2.0, lets you search for experiments that match criteria using `filter_string`.

@@ -98,7 +96,7 @@ The `search_experiments()` method, available since Mlflow 2.0, lets you search f
 
 ## Query and search runs
 
-MLflow allows searching for runs inside of any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the argument `experiment_ids` and `experiment_name` to indicate which experiments you want to search. You can also indicate `search_all_experiments=True` if you want to search across all the experiments in the workspace:
+MLflow lets you search for runs inside any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the arguments `experiment_ids` and `experiment_names` to indicate which experiments you want to search. You can also pass `search_all_experiments=True` if you want to search across all the experiments in the workspace:
 
 * By experiment name:

@@ -118,22 +116,22 @@ MLflow allows searching for runs inside of any experiment, including multiple ex
 mlflow.search_runs(filter_string="params.num_boost_round='100'", search_all_experiments=True)
 ```
 
-Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments, if necessary. This might be useful in case you want to compare runs of the same model when it's being logged in different experiments (for example, by different people, different project iterations).
+Notice that `experiment_ids` accepts an array of experiments, so you can search runs across multiple experiments if necessary. This might be useful when you want to compare runs of the same model logged in different experiments (for example, by different people or in different project iterations).
 
 > [!IMPORTANT]
-> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't indicated, then MLflow searches by default in the current active experiment. You can set the active experiment using `mlflow.set_experiment()`.
+> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't specified, MLflow searches the current active experiment by default. You can set the active experiment using `mlflow.set_experiment()`.
 
 By default, MLflow returns the data in Pandas `DataFrame` format, which makes it handy for further processing or analysis of the runs. Returned data includes columns with:
 
 - Basic information about the run.
 - Parameters, with column name `params.<parameter-name>`.
 - Metrics (last logged value of each), with column name `metrics.<metric-name>`.
 
-All metrics and parameters are also returned when querying runs. However, for metrics that contain multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, uses `mlflow.get_metric_history` method. See [Getting params and metrics from a run](#getting-params-and-metrics-from-a-run) for an example.
+All metrics and parameters are also returned when querying runs. However, for metrics that contain multiple values (for instance, a loss curve or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. See [Get params and metrics from a run](#get-params-and-metrics-from-a-run) for an example.
 
-### Ordering runs
+### Order runs
 
-By default, experiments are ordered descending by `start_time`, which is the time the experiment was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
+By default, runs are returned in descending order by `start_time`, which is the time the run was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
 
 * Order runs by attributes, like `start_time`:

@@ -166,9 +164,9 @@ By default, experiments are ordered descending by `start_time`, which is the tim
 ```
 
 > [!WARNING]
-> Using `order_by` with expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` isn't currently supported. Please use the `order_values` method from Pandas as shown in the example.
+> Expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` aren't currently supported. Instead, use the `sort_values` method from Pandas as shown in the example.
 
-### Filtering runs
+### Filter runs
 
 You can also look for a run with a specific combination of hyperparameters by using the parameter `filter_string`. Use `params` to access a run's parameters, `metrics` to access metrics logged in the run, and `attributes` to access run information details. MLflow supports expressions joined by the AND keyword (the syntax doesn't support OR):

@@ -262,7 +260,7 @@ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
 filter_string="attributes.status = 'Failed'")
 ```
 
-## Getting metrics, parameters, artifacts, and models
+## Get metrics, parameters, artifacts, and models
 
 The method `search_runs` returns a Pandas `DataFrame` that contains a limited amount of information by default. You can get Python objects if needed, which might be useful for examining their details. Use the `output_format` parameter to control how output is returned:

@@ -281,7 +279,7 @@ last_run = runs[-1]
 print("Last run ID:", last_run.info.run_id)
 ```
 
-### Getting params and metrics from a run
+### Get params and metrics from a run
 
 When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:

@@ -302,7 +300,7 @@ client = mlflow.tracking.MlflowClient()
 client.get_metric_history("1234-5678-90AB-CDEFG", "log_loss")
 ```
 
-### Getting artifacts from a run
+### Get artifacts from a run
 
 MLflow can query any artifact logged by a run. Artifacts can't be accessed using the run object itself; use the MLflow client instead:

@@ -322,9 +320,9 @@ file_path = mlflow.artifacts.download_artifacts(
 > [!NOTE]
 > In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
 
-### Getting models from a run
+### Get models from a run
 
-Models can also be logged in the run and then retrieved directly from it. To retrieve it, you need to know the artifact's path where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifact` method:
+Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model, since MLflow models are always folders. You can download a model by specifying the path where the model is stored, using the `download_artifacts` method:
 
 ```python
 artifact_path="classifier"
@@ -348,9 +346,9 @@ model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}
 > [!TIP]
 > To query and load models registered in the model registry, see [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
 
-## Getting child (nested) runs
+## Get child (nested) runs
 
-MLflow supports the concept of child (nested) runs. They're useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+MLflow supports the concept of child (nested) runs. These runs are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyperparameter tuning processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the tag `mlflow.parentRunId`, which contains the run ID of the parent run.
 
 ```python
 hyperopt_run = mlflow.last_active_run()
@@ -398,7 +396,7 @@ The MLflow SDK exposes several methods to retrieve runs, including options to co
 | Renaming experiments | **&check;** | |
 
 > [!NOTE]
-> - <sup>1</sup> Check the section [Ordering runs](#ordering-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
+> - <sup>1</sup> See the section [Order runs](#order-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
 > - <sup>2</sup> `!=` for tags not supported.
 
 ## Related content
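The hunks above rename "Ordering runs" to "Order runs" and warn that `order_by` can't sort on `metrics.*`, `params.*`, or `tags.*`, recommending a client-side sort with Pandas instead. A sketch of that sort using `DataFrame.sort_values`, on a toy frame shaped like `mlflow.search_runs` output (the column values here are made up):

```python
# Sketch: order runs client-side with pandas, since order_by with
# metrics.* isn't supported. Toy data shaped like search_runs output.
import pandas as pd

runs = pd.DataFrame({
    "run_id": ["a", "b", "c"],
    "metrics.accuracy": [0.91, 0.87, 0.95],
})

# Best run first: sort descending on the metric column.
best_first = runs.sort_values(by="metrics.accuracy", ascending=False)
print(best_first["run_id"].tolist())  # → ['c', 'a', 'b']
```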

articles/machine-learning/how-to-use-mlflow-cli-runs.md

Lines changed: 1 addition & 1 deletion
@@ -264,7 +264,7 @@ print(metrics, params, tags)
 ```
 
 > [!TIP]
-> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> For metrics, the previous example code returns only the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Get params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
 
 To __download__ artifacts you've logged, such as files and models, use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
