1 change: 1 addition & 0 deletions docs/guides/monitoring_and_debugging.md
@@ -26,4 +26,5 @@ monitoring_and_debugging/monitor_goodput.md
monitoring_and_debugging/understand_logs_and_metrics.md
monitoring_and_debugging/use_vertex_ai_tensorboard.md
monitoring_and_debugging/xprof_user_guide.md
monitoring_and_debugging/ml_workload_diagnostics.md
```
44 changes: 44 additions & 0 deletions docs/guides/monitoring_and_debugging/ml_workload_diagnostics.md
@@ -0,0 +1,44 @@
<!--
Copyright 2024 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Running a workload with Google Cloud ML Diagnostics Enabled
This guide provides an overview of how to enable ML Diagnostics for your MaxText workload.

## Overview
Google Cloud ML Diagnostics is an end-to-end managed platform that helps ML engineers optimize and diagnose their AI/ML workloads on Google Cloud. It lets you collect and visualize all of your workload metrics, configs, and profiles in a single platform, within the same UI. The product focuses on workloads running XLA-based frameworks (JAX, PyTorch/XLA, TensorFlow/Keras) on Google Cloud TPUs and GPUs; currently, only JAX on Google Cloud TPUs is supported.

## Enabling ML Diagnostics on a MaxText Workload
Collaborator: Thanks for including all the details. Can we make this section simpler and focus on the "MaxText Workload" flags for the three scenarios only? We can briefly mention that the workload can be triggered by XPK, with or without Pathways, or run locally in a pure TPU VM.

Author: Done
MaxText has integrated the ML Diagnostics SDK into its code. You can enable ML Diagnostics with the **managed_mldiagnostics** flag. When it is enabled, MaxText will

- create a managed Machine Learning run with all the MaxText configs
- upload profiling traces, if profiling is enabled with profiler="xplane"
- upload training metrics at the configured log_period interval

### Examples

1. Enable ML Diagnostics to capture only MaxText metrics and configs

python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 managed_mldiagnostics=True

2. Enable ML Diagnostics to capture MaxText metrics, configs, and single-host profiles (on the first TPU device)

python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 profiler=xplane managed_mldiagnostics=True

3. Enable ML Diagnostics to capture MaxText metrics, configs, and multi-host profiles (on all TPU devices)

python3 -m MaxText.train src/MaxText/configs/base.yml run_name=${USER}-tpu-job base_output_directory="gs://your-output-bucket/" dataset_path="gs://your-dataset-bucket/" steps=100 log_period=10 profiler=xplane upload_all_profiler_results=True managed_mldiagnostics=True

You can deploy the workload in any supported environment, including the standard XPK workload types (**xpk workload create** or **xpk workload create-pathways**) or by running it directly on a standalone TPU VM; a sketch of an XPK launch is shown below.
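As a rough, hypothetical sketch (not part of the original guide), launching the multi-host example above through XPK might look like the following. The cluster name, workload name, TPU type, and slice count are placeholder values you would replace with your own:

```
# Hypothetical XPK launch with placeholder values: creates a workload on your
# existing cluster that runs the MaxText training command with ML Diagnostics
# and multi-host profiling enabled.
xpk workload create \
  --cluster ${CLUSTER_NAME} \
  --workload ${USER}-mldiag-job \
  --tpu-type=v5litepod-16 \
  --num-slices=1 \
  --command "python3 -m MaxText.train src/MaxText/configs/base.yml \
    run_name=${USER}-tpu-job \
    base_output_directory=gs://your-output-bucket/ \
    dataset_path=gs://your-dataset-bucket/ \
    steps=100 log_period=10 \
    profiler=xplane upload_all_profiler_results=True managed_mldiagnostics=True"
```

The MaxText flags inside `--command` are the same ones used in the examples above; only the XPK wrapper and its resource settings change between environments.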
2 changes: 1 addition & 1 deletion docs/run_maxtext/run_maxtext_via_xpk.md
@@ -225,4 +225,4 @@ For instance, to run a job across **four TPU slices**, you would change `--num-s

```
xpk workload delete --cluster ${CLUSTER_NAME} --workload <your-workload-name>
```
```