articles/machine-learning/how-to-log-view-metrics.md (92 additions & 8 deletions)
@@ -3,12 +3,12 @@ title: Log metrics, parameters, and files with MLflow
 titleSuffix: Azure Machine Learning
 description: Enable logging on your ML training runs to monitor real-time run metrics with MLflow, and to help diagnose errors and warnings.
 services: machine-learning
-ms.author: amipatel
-author: amibp
-ms.reviewer: sgilley
+ms.author: fasantia
+author: santiagxf
+ms.reviewer: mopeakande
 ms.service: machine-learning
-ms.subservice: core
-ms.date: 01/30/2024
+ms.subservice: mlops
+ms.date: 04/11/2024
 ms.topic: how-to
 ms.custom: sdkv2
 ---
@@ -27,6 +27,7 @@ Logs can help you diagnose errors and warnings, or track performance metrics lik
 > [!div class="checklist"]
 > * Log metrics, parameters, and models when submitting jobs.
 > * Track runs when training interactively.
+> * Log metrics asynchronously.
 > * View diagnostic information about training.
 
 > [!TIP]
@@ -41,6 +42,9 @@ Logs can help you diagnose errors and warnings, or track performance metrics lik
   pip install mlflow azureml-mlflow
   ```
 
+  > [!NOTE]
+  > For asynchronous logging, you need to have `MLflow` version 2.8.0+ and `azureml-mlflow` version 1.55+.
+
 * If you're doing remote tracking (tracking experiments that run outside Azure Machine Learning), configure MLflow to track experiments. For more information, see [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
 
 * To log metrics, parameters, artifacts, and models in your experiments in Azure Machine Learning using MLflow, just import MLflow into your script:
@@ -142,9 +146,9 @@ params = {
 mlflow.log_params(params)
 ```
 
-## Log metrics
+## Log metrics synchronously
 
-Metrics, as opposite to parameters, are always numeric. The following table describes how to log specific numeric types:
+Metrics, as opposed to parameters, are always numeric. MLflow supports logging metrics synchronously, meaning that metric values are persisted and immediately available for consumption as soon as the call returns. The following table describes how to log specific numeric types:
 metrics=[Metric(key="sample_list", value=val, timestamp=int(time.time() * 1000), step=0) for val in list_to_log])
 ```
 
+## Log metrics asynchronously
+
+MLflow also supports logging metrics asynchronously. Asynchronous metric logging is particularly useful in high-throughput scenarios, where large training jobs with hundreds of compute nodes might be running and trying to log metrics concurrently.
+
+Asynchronous metric logging lets you log metrics without blocking, and wait for them to be ingested only when you need to read them back. This approach scales to large training routines that log hundreds of thousands of metric values.
+
+To log metrics asynchronously, use the MLflow logging API as you typically would, but add the extra parameter `synchronous=False`.
+When you use `log_metric(synchronous=False)`, control returns to the caller as soon as the operation is accepted; however, there's no guarantee at that moment that the metric value has been persisted.
+
+> [!IMPORTANT]
+> Even with `synchronous=False`, Azure Machine Learning guarantees the ordering of metrics.
+
+If you need to wait for a particular value to be persisted in the backend, use the metric operation returned by the call to wait on it, as shown in the following example:
+You can asynchronously log one metric at a time or log a batch of metrics:
+
+```python
+import mlflow
+import time
+from mlflow.entities import Metric
+
+with mlflow.start_run() as current_run:
+    mlflow_client = mlflow.tracking.MlflowClient()
+
+    metrics = {"metric-0": 3.14, "metric-1": 6.28}
+    timestamp = int(time.time() * 1000)
+    metrics_arr = [Metric(key, value, timestamp, 0) for key, value in metrics.items()]
+
+    run_operation = mlflow_client.log_batch(
+        run_id=current_run.info.run_id,
+        metrics=metrics_arr,
+        synchronous=False,
+    )
+```
+
+The `wait()` operation is also available when logging a batch of metrics:
+
+```python
+run_operation.wait()
+```
+
+You don't have to call `wait()` on your routines if you don't need immediate access to the metric values. Azure Machine Learning waits automatically when the job is about to finish if there are any pending metrics to be persisted. By the time a job completes in Azure Machine Learning, all metrics are guaranteed to be persisted.
+
+### Changing the default logging behavior
+
+If you're performing [automatic logging](#automatic-logging) with `autolog`, or you're using a logger like PyTorch Lightning, you don't have direct access to `log_metric(synchronous=False)`. In such cases, you can change the default logging behavior by using the following configuration:
+
+```python
+import mlflow
+
+mlflow.config.enable_async_logging()
+```
+
+The same behavior can be enabled by setting an environment variable:
+
+```bash
+export MLFLOW_ENABLE_ASYNC_LOGGING=True
+```
+
 ## Log images
 
 MLflow supports two ways of logging images. Both ways persist the given image as an artifact inside of the run.
@@ -294,7 +378,7 @@ For jobs training on multi-compute clusters, logs are present for each IP node.
 
 Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs aren't documented. If you encounter problems and contact Microsoft support, they might be able to use these logs during troubleshooting.
 
-## Next steps
+## Related content
 
 * [Train ML models with MLflow and Azure Machine Learning](how-to-train-mlflow-projects.md)
 * [Migrate from SDK v1 logging to MLflow tracking](reference-migrate-sdk-v1-mlflow-tracking.md)