
Commit 450c276

Edit TOC
1 parent ed83886 commit 450c276

File tree: 2 files changed (+19, −18 lines)

- articles/machine-learning/how-to-log-view-metrics.md
- articles/machine-learning/toc.yml

articles/machine-learning/how-to-log-view-metrics.md

Lines changed: 18 additions & 17 deletions
@@ -13,14 +13,14 @@ ms.topic: how-to
ms.custom: sdkv2, event-tier1-build-2022
---

-# Log metrics, parameters and files with MLflow
+# Log metrics, parameters, and files with MLflow

[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]

-Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow as it supports local mode to cloud portability.
+Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). MLflow supports local mode to cloud portability, so you can log models, metrics, parameters, and artifacts with MLflow.

> [!IMPORTANT]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the Azure Machine Learning SDK for Python (v2). If you used Azure Machine Learning SDK v1 before, we recommend you to start leveraging MLflow for tracking experiments. See [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md) for specific guidance.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the Azure Machine Learning SDK for Python (v2). If you used Azure Machine Learning SDK v1 before, we recommend that you use MLflow for tracking experiments. See [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md) for specific guidance.

Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. This article explains how to enable logging in the following scenarios:

@@ -34,13 +34,14 @@ Logs can help you diagnose errors and warnings, or track performance metrics like

## Prerequisites

-* You must have an Azure Machine Learning workspace. [Create one if you don't have any](quickstart-create-resources.md).
+* You must have an Azure Machine Learning workspace. If you don't have one, see [Create workspace resources](quickstart-create-resources.md).
* You must have the `mlflow` and `azureml-mlflow` packages installed. If you don't, use the following command to install them in your development environment:

  ```bash
  pip install mlflow azureml-mlflow
  ```

-* If you're doing remote tracking (tracking experiments that run outside Azure Machine Learning), configure MLflow to track experiments using Azure Machine Learning. For more information, see [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
+* If you're doing remote tracking (tracking experiments that run outside Azure Machine Learning), configure MLflow to track experiments. For more information, see [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).

* To log metrics, parameters, artifacts, and models in your experiments in Azure Machine Learning using MLflow, just import MLflow into your script:
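That import is a single line; a minimal sketch of getting started (the experiment name is illustrative, and `set_tracking_uri` is only needed for remote tracking, with a workspace-specific value):

```python
import mlflow

# For remote tracking only: point MLflow at the workspace. The URI is
# workspace-specific; retrieve it as described in the linked article.
# mlflow.set_tracking_uri("<your-workspace-tracking-uri>")

mlflow.set_experiment("my-experiment")  # created on first use if it doesn't exist
```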
@@ -121,7 +122,7 @@ mlflow.log_metric('anothermetric',1)

---

-## Logging parameters
+## Log parameters

MLflow supports logging the parameters used by your experiments. Parameters can be of any type and can be logged using the following syntax:
@@ -141,20 +142,20 @@ params = {

```python
mlflow.log_params(params)
```
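For a single value, `mlflow.log_param` logs one key-value pair. A minimal sketch of both forms (the parameter names are illustrative):

```python
import mlflow

with mlflow.start_run():
    # One parameter at a time...
    mlflow.log_param("num_epochs", 20)

    # ...or several at once from a dictionary.
    mlflow.log_params({"learning_rate": 0.01, "optimizer": "adam"})
```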
-## Logging metrics
+## Log metrics

Metrics, as opposed to parameters, are always numeric. The following table describes how to log specific numeric types:

-|Logged Value|Example code| Notes|
+|Logged value|Example code|Notes|
|----|----|----|
|Log a numeric value (int or float) | `mlflow.log_metric("my_metric", 1)`| |
|Log a numeric value (int or float) over time | `mlflow.log_metric("my_metric", 1, step=1)`| Use the `step` parameter to indicate the step at which you log the metric value. It can be any integer. It defaults to zero. |
|Log a boolean value | `mlflow.log_metric("my_metric", 0)`| 0 = False, 1 = True|

> [!IMPORTANT]
-> **Performance considerations:** If you need to log multiple metrics (or multiple values for the same metric) avoid making calls to `mlflow.log_metric` in loops. Better performance can be achieved by logging a batch of metrics. Use the method `mlflow.log_metrics` which accepts a dictionary with all the metrics you want to log at once or use `MLflowClient.log_batch` which accepts multiple type of elements for logging. See [Logging curves or list of values](#logging-curves-or-list-of-values) for an example.
+> **Performance considerations:** If you need to log multiple metrics (or multiple values for the same metric), avoid calling `mlflow.log_metric` in loops. You get better performance by logging metrics as a batch: use `mlflow.log_metrics`, which accepts a dictionary with all the metrics you want to log at once, or `MLflowClient.log_batch`, which accepts multiple types of elements for logging. See [Logging curves or list of values](#logging-curves-or-list-of-values) for an example.
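A sketch of the batched form the note recommends; one `mlflow.log_metrics` call replaces several `mlflow.log_metric` calls (the metric names and values are illustrative):

```python
import mlflow

with mlflow.start_run():
    # One call logs all three metrics instead of three separate calls.
    mlflow.log_metrics({"accuracy": 0.91, "precision": 0.88, "recall": 0.84})
```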
-### Logging curves or list of values
+### Log curves or list of values

Curves (or a list of numeric values) can be logged with MLflow by logging the same metric multiple times. The following example shows how to do it:
@@ -169,20 +170,20 @@ client.log_batch(mlflow.active_run().info.run_id,

```python
client.log_batch(mlflow.active_run().info.run_id,
                 metrics=[Metric(key="sample_list", value=val, timestamp=int(time.time() * 1000), step=0) for val in list_to_log])
```
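The hunk shows only the tail of that snippet. A self-contained version of the same pattern, with the imports and setup the visible lines assume (the sample values are illustrative):

```python
import time

import mlflow
from mlflow.entities import Metric
from mlflow.tracking import MlflowClient

client = MlflowClient()
list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]  # the curve's values

with mlflow.start_run():
    # Log every value of the list under one metric name in a single batch call.
    client.log_batch(
        mlflow.active_run().info.run_id,
        metrics=[
            Metric(key="sample_list", value=val,
                   timestamp=int(time.time() * 1000), step=0)
            for val in list_to_log
        ],
    )
```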
-## Logging images
+## Log images

MLflow supports two ways of logging images. Both ways persist the given image as an artifact inside of the run.

-|Logged Value|Example code| Notes|
+|Logged value|Example code|Notes|
|----|----|----|
|Log numpy metrics or PIL image objects|`mlflow.log_image(img, "figure.png")`| `img` should be an instance of `numpy.ndarray` or `PIL.Image.Image`. `figure.png` is the name of the artifact generated inside of the run. It doesn't have to be an existing file.|
|Log a matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`| `figure.png` is the name of the artifact generated inside of the run. It doesn't have to be an existing file. |
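A sketch of the second row with a matplotlib figure (the plot itself is a throwaway example):

```python
import matplotlib.pyplot as plt
import mlflow

with mlflow.start_run():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [2, 3, 5])
    # Persists the figure as the artifact figure.png inside the run.
    mlflow.log_figure(fig, "figure.png")
```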
-## Logging files
+## Log files

In general, files in MLflow are called artifacts. You can log artifacts in multiple ways in MLflow:

-|Logged Value|Example code| Notes|
+|Logged value|Example code|Notes|
|----|----|----|
|Log text in a text file | `mlflow.log_text("text string", "notes.txt")`| Text is persisted inside of the run in a text file with the name *notes.txt*. |
|Log dictionaries as JSON and YAML files | `mlflow.log_dict(dictionary, "file.yaml")` | `dictionary` is a dictionary object containing all the structure that you want to persist as a JSON or YAML file. |
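A short sketch combining both rows (file names and contents are illustrative):

```python
import mlflow

with mlflow.start_run():
    # Stored as the artifact notes.txt inside the run.
    mlflow.log_text("Baseline model, default preprocessing.", "notes.txt")

    # Serialized to YAML because of the .yaml extension; .json would produce JSON.
    mlflow.log_dict({"learning_rate": 0.01, "epochs": 10}, "config.yaml")
```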
@@ -192,7 +193,7 @@ In general, files in MLflow are called artifacts. You can log artifacts in multiple ways in MLflow:

> [!TIP]
> When you log large files with `log_artifact` or `log_model`, you might encounter timeout errors before the file upload completes. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. Its default value is *300* (seconds).

-## Logging models
+## Log models

MLflow introduces the concept of *models* as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be *registered* and then *deployed*. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio. For more information, see [From artifacts to models in MLflow](concept-mlflow-models.md).
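As an illustration, a minimal sketch of logging a model with the scikit-learn flavor; other frameworks have analogous `mlflow.<flavor>.log_model` functions:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # Writes the model folder (MLmodel file, pickled model, environment files)
    # to the run as a single entity that can be registered and deployed.
    mlflow.sklearn.log_model(model, "model")
```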
@@ -233,7 +234,7 @@ tags = run.data.tags

```python
tags = run.data.tags
```

> [!NOTE]
-> The metrics dictionary returned by `mlflow.get_run` or `mlflow.seach_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values, `1`, then `2`, then `3`, then `4`, only `4` is returned when calling `run.data.metrics['iteration']`.
+> The metrics dictionary returned by `mlflow.get_run` or `mlflow.search_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with the values *1*, then *2*, then *3*, then *4*, only *4* is returned when you call `run.data.metrics['iteration']`.
>
> To get all the values logged for a particular metric name, use `MlflowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
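A sketch of the difference the note describes (the run ID is a placeholder):

```python
import mlflow
from mlflow.tracking import MlflowClient

run = mlflow.get_run("<run-id>")      # placeholder run ID
print(run.data.metrics["iteration"])  # only the last logged value, e.g. 4.0

client = MlflowClient()
history = client.get_metric_history(run.info.run_id, "iteration")
print([m.value for m in history])     # every logged value, e.g. [1.0, 2.0, 3.0, 4.0]
```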
@@ -261,7 +262,7 @@ For more information, please refer to [Getting metrics, parameters, artifacts an

You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).

-Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the **Experiment** filter in the top menu bar. Select the job of interest to enter the details view, and then select the **Metrics** tab.
+Navigate to the **Jobs** tab. To view all the jobs in your workspace across experiments, select the **All jobs** tab. You can drill down on jobs for specific experiments by applying the **Experiment** filter in the top menu bar. Select the job of interest to enter the details view, and then select the **Metrics** tab.

Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. After you create your desired view, you can save it for future use and share it with your teammates using a direct link.

articles/machine-learning/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -213,7 +213,7 @@
 - name: Train with MLflow Projects
   displayName: log, monitor, metrics, model registry, register
   href: how-to-train-mlflow-projects.md
-- name: Log metrics, parameters and files
+- name: Log metrics, parameters, and files
   displayName: troubleshoot, log, files, tracing, metrics
   href: how-to-log-view-metrics.md
 - name: Log MLflow models
