articles/machine-learning/how-to-track-experiments-mlflow.md
You can also filter experiments by status. This becomes useful to find runs that ar
> [!WARNING]
> Expressions containing `attributes.status` in the parameter `filter_string` are not supported at the moment. Use Pandas filtering expressions instead, as shown in the next example.
The following example shows all the completed runs:
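As an illustrative sketch of the Pandas filtering approach, the following uses a hypothetical stand-in `DataFrame` in place of the one `mlflow.search_runs` would return:

```python
import pandas as pd

# Hypothetical stand-in for the DataFrame returned by mlflow.search_runs();
# real results contain many more columns (metrics, params, tags).
runs = pd.DataFrame({
    "run_id": ["run1", "run2", "run3"],
    "status": ["FINISHED", "FAILED", "FINISHED"],
})

# Filter on the client side with Pandas, since `attributes.status`
# isn't supported in `filter_string`.
completed = runs[runs["status"] == "FINISHED"]
print(completed["run_id"].tolist())
```

The same boolean-indexing expression works unchanged on the real `DataFrame` returned by `mlflow.search_runs`.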
## Getting metrics, parameters, artifacts and models
The method `search_runs` returns a Pandas `DataFrame` containing a limited amount of information by default. If needed, you can get Python objects instead, which may be useful for getting details about the runs. Use the `output_format` parameter to control how the output is returned:
```python
runs = mlflow.search_runs(
    # "exp" is assumed to be the experiment retrieved earlier,
    # for example via mlflow.get_experiment_by_name.
    exp.experiment_id, output_format="list"
)
```

```python
model = mlflow.xgboost.load_model(model_local_path)
```
> [!NOTE]
> The previous example assumes the model was created using `xgboost`. Change it to the flavor that applies to your case.
MLflow also allows you to perform both operations at once, downloading and loading the model in a single instruction. MLflow downloads the model to a temporary folder and loads it from there. The method `load_model` uses a URI format to indicate where the model has to be retrieved from. In the case of loading a model from a run, the URI structure is as follows:
```python
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
```
## Getting child (nested) runs
MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that have to be tracked independently from the main training process. Hyper-parameter tuning optimization processes and Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
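As a sketch, the filter expression for querying child runs can be built like this (the parent run ID below is a hypothetical placeholder):

```python
# Hypothetical parent run ID; replace it with the run you want to inspect.
parent_run_id = "abcdef1234567890"

# Filter expression matching all child runs of that parent.
filter_child_runs = f"tags.mlflow.parentRunId = '{parent_run_id}'"

# The expression is then passed to MLflow's search, for example:
# child_runs = mlflow.search_runs(filter_string=filter_child_runs)
print(filter_child_runs)
```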
## Compare jobs and models in AzureML studio (preview)
To compare and evaluate the quality of your jobs and models in AzureML studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you select.