> - Evaluation results (including some evaluation predictions)
Some of these elements are automatically tracked by Azure Machine Learning when working with jobs (including code, environment, and input and output data). However, others, such as models, parameters, and metrics, need to be instrumented by the model builder, since they're specific to the particular scenario.
In this article, you'll learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
> [!NOTE]
> If you want to track experiments running on Azure Databricks or Azure Synapse Analytics, see the dedicated articles [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md) or [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).
## Benefits of tracking experiments
We highly encourage machine learning practitioners to instrument their experimentation by tracking it, regardless of whether they're training with jobs in Azure Machine Learning or interactively in notebooks. Benefits include:
- All of your ML experiments are organized in a single place, allowing you to search and filter experiments to find the information you need and drill down to see exactly what you tried before.
- Compare experiments, analyze results, and debug model training with little extra work.
Azure Machine Learning tracks any training job in what MLflow calls a run.
# [Working interactively](#tab/interactive)
When working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run: for instance, when you log a metric or a parameter, or when you start a training cycle with MLflow's autologging functionality enabled. However, it's usually helpful to start the run explicitly, especially if you want to capture the total time of your experiment in the __Duration__ field. To start the run explicitly, use `mlflow.start_run()`.
Regardless of whether you started the run manually, you'll eventually need to stop the run to inform MLflow that your experiment run has finished, which marks its status as __Completed__. To do that, call `mlflow.end_run()`. We strongly recommend starting runs manually so you don't forget to end them when working in notebooks.
```python
mlflow.start_run()

# Your training and logging code goes here

mlflow.end_run()
```
To help you avoid forgetting to end the run, it's usually helpful to use the context manager paradigm:
```python
with mlflow.start_run() as run:
    # Your training and logging code goes here; the run is
    # ended automatically when the block exits
    ...
```
When working with jobs, you typically place all your training logic inside of a training script. The previous code example doesn't use `mlflow.start_run()`; if it's used, MLflow reuses the current active run, so there's no need to remove those lines when migrating to Azure Machine Learning.
### Adding tracking to your routine
Use the MLflow SDK to track any metric, parameter, artifact, or model. For detailed examples about how to log each, see [Log metrics, parameters and files with MLflow](how-to-log-view-metrics.md).
### Ensure your job's environment has MLflow installed
All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you're using a curated environment. If you want to use a custom environment:
1. Create a `conda.yml` file with the dependencies you need:
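    A minimal sketch of such a file (the Python version and channel are illustrative; the key point is that `mlflow` and the Azure Machine Learning MLflow plugin package, `azureml-mlflow`, are listed):

    ```yaml
    name: mlflow-env
    channels:
      - conda-forge
    dependencies:
      - python=3.10
      - pip
      - pip:
          - mlflow
          - azureml-mlflow
    ```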
1. Reference the environment in the job you're using.
### Configuring the job's name
Use the parameter `display_name` of Azure Machine Learning jobs to configure the name of the run.
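For example, in a CLI (v2) command job specification, `display_name` is a top-level property. A hedged sketch, where the command, code path, environment, and compute names are illustrative:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py
code: src
environment: azureml:my-custom-environment@latest
compute: azureml:cpu-cluster
display_name: my-experiment-run
```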
2. Ensure you're not using `mlflow.start_run(run_name="")` inside of your training routine.
### Submitting the job
1. First, let's connect to the Azure Machine Learning workspace where we're going to work.
# [Azure CLI](#tab/cli)
The metrics and artifacts from MLflow logging are tracked in your workspace.
:::image type="content" source="media/how-to-log-view-metrics/metrics.png" alt-text="Screenshot of the metrics view.":::
Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you've created your desired view, you can save it for future use and share it with your teammates using a direct link.
You can also access or __query metrics, parameters, and artifacts programmatically__ using the MLflow SDK. Use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run) as explained below:
> [!TIP]
> For metrics, the previous example only returns the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method as explained at [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
To download artifacts you've logged, like files and models, use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
For more details about how to __retrieve or compare__ information from experiments and runs, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
## Example notebooks
If you're looking for examples about how to use MLflow in Jupyter notebooks, see our examples repository, [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow).