Commit 72f37b1

Update how-to-track-experiments-mlflow.md
1 parent 71c5def commit 72f37b1

File tree

1 file changed: +6 -6 lines changed

articles/machine-learning/how-to-track-experiments-mlflow.md

Lines changed: 6 additions & 6 deletions
@@ -144,7 +144,7 @@ You can also filter experiment by status. It becomes useful to find runs that ar
> [!WARNING]
> Expressions containing `attributes.status` in the parameter `filter_string` are not supported at the moment. Please use Pandas filtering expressions as shown in the next example.

-The following example shows all the runs that have been completed:
+The following example shows all the completed runs:

```python
runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
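
A minimal sketch of the complete pattern the warning describes, assuming the same hypothetical experiment ID: fetch the runs first, then filter the resulting `DataFrame` with Pandas rather than `filter_string`:

```python
import mlflow

# Fetch the runs, then filter client-side with Pandas, since
# `attributes.status` isn't supported in `filter_string` (per the warning above).
runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
finished_runs = runs[runs.status == "FINISHED"]
```
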
@@ -153,7 +153,7 @@ runs[runs.status == "FINISHED"]

## Getting metrics, parameters, artifacts and models

-By default, MLflow returns runs as a Pandas `Dataframe` containing a limited amount of information. You can get Python objects if needed, which may be useful to get details about them. Use the `output_format` parameter to control how output is returned:
+The `search_runs` method returns a Pandas `DataFrame` containing a limited amount of information by default. If you need more details about the runs, you can get them as Python objects instead. Use the `output_format` parameter to control how the output is returned:

```python
runs = mlflow.search_runs(
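
As a sketch of what the paragraph above describes (reusing the hypothetical experiment ID from the earlier snippets), passing `output_format="list"` returns `Run` objects instead of a `DataFrame`:

```python
import mlflow

# Return mlflow.entities.Run objects instead of a Pandas DataFrame.
runs = mlflow.search_runs(
    experiment_ids=[ "1234-5678-90AB-CDEFG" ],
    output_format="list",
)

for run in runs:
    print(run.info.run_id, run.info.status)    # run metadata
    print(run.data.metrics, run.data.params)   # logged metrics and parameters
```
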
@@ -229,9 +229,9 @@ model = mlflow.xgboost.load_model(model_local_path)
```

> [!NOTE]
-> In the example above, we are assuming the model was created using `xgboost`. Change it to the flavor applies to your case.
+> The previous example assumes the model was created using `xgboost`. Change it to the flavor that applies to your case.

-MLflow also allows you to both operations at once and download and load the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. This can be done using the `load_model` method which uses an URI format to indicate from where the model has to be retrieved. In the case of loading a model from a run, the URI structure is as follows:
+MLflow also allows you to perform both operations at once, downloading and loading the model in a single instruction. MLflow downloads the model to a temporary folder and loads it from there. The `load_model` method uses a URI format to indicate where the model has to be retrieved from. When loading a model from a run, the URI structure is as follows:

```python
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
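
If the model was logged with a different flavor, one flavor-agnostic option (a sketch, not something the article prescribes) is the generic `pyfunc` flavor, which can load any model logged with a Python flavor from the same `runs:/` URI:

```python
import mlflow.pyfunc

# `last_run` and `artifact_path` are the same names used in the snippet above.
model = mlflow.pyfunc.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")

# `scoring_data` is a placeholder for whatever input your model expects.
predictions = model.predict(scoring_data)
```
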
@@ -242,7 +242,7 @@ model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}

## Getting child (nested) runs

-MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines requiring being tracked independently from the main training process. This is the typical case of hyper-parameter tuning for instance. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that have to be tracked independently from the main training process. Hyper-parameter tuning processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.

```python
hyperopt_run = mlflow.last_active_run()
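
A minimal sketch of the query that paragraph describes, filtering on the `mlflow.parentRunId` tag and reusing the hypothetical experiment ID from the earlier snippets:

```python
import mlflow

# Use the most recent run (for example, a hyperparameter sweep) as the parent.
hyperopt_run = mlflow.last_active_run()

# Find every run whose mlflow.parentRunId tag points at that parent run.
child_runs = mlflow.search_runs(
    experiment_ids=[ "1234-5678-90AB-CDEFG" ],
    filter_string=f"tags.mlflow.parentRunId = '{hyperopt_run.info.run_id}'",
)
```
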
@@ -251,7 +251,7 @@ child_runs = mlflow.search_runs(
)
```

-## Compare jobs and models in AzureML Studio (preview)
+## Compare jobs and models in AzureML studio (preview)

To compare and evaluate the quality of your jobs and models in AzureML studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you selected.