
Commit 0edac7e

Author: Sharon Yu
Commit message: add ml-diagnostic guide
1 parent 5c77d54 · commit 0edac7e

File tree: 3 files changed (+46 −1 lines)


docs/guides/monitoring_and_debugging.md

Lines changed: 1 addition & 0 deletions
@@ -26,4 +26,5 @@ monitoring_and_debugging/monitor_goodput.md
 monitoring_and_debugging/understand_logs_and_metrics.md
 monitoring_and_debugging/use_vertex_ai_tensorboard.md
 monitoring_and_debugging/xprof_user_guide.md
+monitoring_and_debugging/ml_workload_diagnostics.md
 ```
docs/guides/monitoring_and_debugging/ml_workload_diagnostics.md

Lines changed: 44 additions & 0 deletions (new file)
<!--
Copyright 2025 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Running a Workload with Google Cloud ML Diagnostics Enabled

This guide explains how to enable ML Diagnostics for your MaxText workload.

## Overview

Google Cloud ML Diagnostics is an end-to-end managed platform that helps ML engineers optimize and diagnose their AI/ML workloads on Google Cloud. It lets you collect and visualize all of your workload metrics, configs, and profiles in a single platform and UI. The offering targets workloads running on XLA-based frameworks (JAX, PyTorch XLA, TensorFlow/Keras) on Google Cloud TPUs and GPUs; support is currently limited to JAX on Google Cloud TPUs.

## Enabling ML Diagnostics on a MaxText Workload

MaxText integrates the ML Diagnostics SDK. You can enable ML Diagnostics with the `managed_mldiagnostics` flag. When it is enabled, MaxText will:

- create a managed Machine Learning run with all the MaxText configs
- upload profiling traces, if profiling is enabled with `profiler=xplane`
- upload training metrics at the configured `log_period` interval

### Examples

1. Enable ML Diagnostics to capture only MaxText metrics and configs:

   python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 managed_mldiagnostics=True

2. Enable ML Diagnostics to capture MaxText metrics, configs, and single-host profiles (on the first TPU device):

   python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 profiler=xplane managed_mldiagnostics=True

3. Enable ML Diagnostics to capture MaxText metrics, configs, and multi-host profiles (on all TPU devices):

   python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 profiler=xplane upload_all_profiler_results=True managed_mldiagnostics=True

You can deploy the workload in any supported environment, including the standard XPK workload types (**xpk workload create** or **xpk workload create-pathways**) or directly on a standalone TPU VM.
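
As an illustration, a launch through XPK might look like the following sketch (this is not part of the committed guide). The cluster name, workload name, TPU type, slice count, and Docker image are placeholders, and the exact `xpk workload create` flags can vary with your XPK version, so verify them against your installation before running.

```bash
# Hedged sketch: launch example 1 via XPK with ML Diagnostics enabled.
# CLUSTER_NAME, the workload name, TPU type, slice count, and Docker image
# are placeholders for your own environment.
xpk workload create \
  --cluster ${CLUSTER_NAME} \
  --workload ${USER}-mldiag-job \
  --tpu-type v5litepod-16 \
  --num-slices 1 \
  --docker-image gcr.io/your-project/your-maxtext-image \
  --command "python3 -m MaxText.train src/MaxText/configs/base.yml \
    run_name=${USER}-tpu-job \
    base_output_directory=gs://your-output-bucket/ \
    dataset_path=gs://your-dataset-bucket/ \
    steps=100 log_period=10 managed_mldiagnostics=True"
```

Once the run finishes, the workload can be removed with `xpk workload delete --cluster ${CLUSTER_NAME} --workload <your-workload-name>`, as shown in the XPK guide below.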

docs/run_maxtext/run_maxtext_via_xpk.md

Lines changed: 1 addition & 1 deletion
@@ -225,4 +225,4 @@ For instance, to run a job across **four TPU slices**, you would change `--num-s
 
 ```
 xpk workload delete --cluster ${CLUSTER_NAME} --workload <your-workload-name>
-```
+```
