Commit 0f722e5

Author: Sharon Yu (committed)

add ml-diagnostic guide
1 parent 5c77d54 commit 0f722e5

File tree: 3 files changed (+110, −1 lines)


docs/guides/monitoring_and_debugging.md

Lines changed: 1 addition & 0 deletions

@@ -26,4 +26,5 @@ monitoring_and_debugging/monitor_goodput.md
 monitoring_and_debugging/understand_logs_and_metrics.md
 monitoring_and_debugging/use_vertex_ai_tensorboard.md
 monitoring_and_debugging/xprof_user_guide.md
+monitoring_and_debugging/ml_workload_diagnostics.md
 ```

docs/guides/monitoring_and_debugging/ml_workload_diagnostics.md

Lines changed: 108 additions & 0 deletions (new file)
<!--
 Copyright 2024 Google LLC

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      https://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

# Running a Workload with Google Cloud ML Diagnostics Enabled

This guide provides an overview of how to enable ML Diagnostics for your MaxText workload.

## Overview

Google Cloud ML Diagnostics is an end-to-end managed platform for ML engineers to optimize and diagnose their AI/ML workloads on Google Cloud. It lets ML engineers collect and visualize all of their workload metrics, configs, and profiles in a single platform and UI. The product focuses on workloads running on XLA-based frameworks (JAX, PyTorch/XLA, TensorFlow/Keras) on Google Cloud TPUs and GPUs; current support is for JAX on Google Cloud TPUs only.

This feature can be enabled via a simple XPK command by setting the **managed_mldiagnostics** flag in MaxText, and it is runnable across all supported environments, including Pathways and standalone TPU VMs.

## Enabling ML Diagnostics on a MaxText Workload

MaxText has integrated the ML Diagnostics SDK in its code. You can enable ML Diagnostics with the **managed_mldiagnostics** flag. When this flag is enabled, MaxText will:

- create a managed machine learning run with all of the MaxText configs
- upload profiling traces, if profiling is enabled with `profiler="xplane"`
- upload training metrics at the defined `log_period` interval

1. **Set your configuration**

   ```
   export PROJECT_ID="your-gcp-project-id"
   export ZONE="your-gcp-zone" # e.g., us-central1-a
   export CLUSTER_NAME="your-existing-cluster-name"
   export BASE_OUTPUT_DIR="gs://your-output-bucket/"
   export DATASET_PATH="gs://your-dataset-bucket/"
   ```

41+
42+
2. **Configure gcloud CLI**
43+
44+
```
45+
gcloud config set project $PROJECT_ID
46+
gcloud config set compute/zone $ZONE
47+
```
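
Optionally, you can confirm that the project and zone were picked up before launching anything. This uses the standard `gcloud config list` command and is not specific to ML Diagnostics:

```
gcloud config list
```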

### Run MaxText via XPK

1. Enable ML Diagnostics to capture only MaxText metrics and configs:

   ```
   xpk workload create \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 log_period=10 managed_mldiagnostics=True"
   ```

2. Enable ML Diagnostics to capture MaxText metrics, configs, and single-host profiles (on the first TPU device):

   ```
   xpk workload create \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 log_period=10 profiler=xplane managed_mldiagnostics=True"
   ```

3. Enable ML Diagnostics to capture MaxText metrics, configs, and multi-host profiles (on all TPU devices):

   ```
   xpk workload create \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 log_period=10 profiler=xplane upload_all_profiler_results=True managed_mldiagnostics=True"
   ```

### Run MaxText via Pathways

1. Enable ML Diagnostics to capture only MaxText metrics and configs:

   ```
   xpk workload create-pathways \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 enable_single_controller=True log_period=10 managed_mldiagnostics=True"
   ```

2. Enable ML Diagnostics to capture MaxText metrics, configs, and single-host profiles (on the first TPU device):

   ```
   xpk workload create-pathways \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 enable_single_controller=True log_period=10 profiler=xplane managed_mldiagnostics=True"
   ```

3. Enable ML Diagnostics to capture MaxText metrics, configs, and multi-host profiles (on all TPU devices):

   ```
   xpk workload create-pathways \
     --cluster ${CLUSTER_NAME} \
     --workload ${USER}-tpu-job \
     --base-docker-image maxtext_base_image \
     --tpu-type v5litepod-256 \
     --num-slices 1 \
     --command "python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory=${BASE_OUTPUT_DIR} dataset_path=${DATASET_PATH} steps=100 enable_single_controller=True log_period=10 profiler=xplane upload_all_profiler_results=True managed_mldiagnostics=True"
   ```

docs/run_maxtext/run_maxtext_via_xpk.md

Lines changed: 1 addition & 1 deletion

@@ -225,4 +225,4 @@ For instance, to run a job across **four TPU slices**, you would change `--num-s
 
 ```
 xpk workload delete --cluster ${CLUSTER_NAME} --workload <your-workload-name>
-```
+```

(The only change in this file is to the closing code fence on line 228; it appears to be a whitespace or end-of-file newline change.)
