Currently, MaxText uses a few dependencies, such as `mlperf-logging` and `google-jetstream`, that are installed directly from GitHub source. These are defined in `base_requirements/requirements.txt`, and the `seed-env` tool will carry them over to the generated requirements files.
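As an illustration, dependencies installed directly from GitHub source typically use pip's `git+` URL syntax in the requirements file. The repository URLs and refs below are assumptions for illustration only; check `base_requirements/requirements.txt` for the actual pins:

```
# Hypothetical entries -- the real URLs and pinned revisions live in
# base_requirements/requirements.txt
mlperf-logging @ git+https://github.com/mlcommons/logging.git@master
google-jetstream @ git+https://github.com/AI-Hypercomputer/JetStream.git@main
```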
## Step 5: Verify the New Dependencies
Finally, test that the new dependencies install correctly and that MaxText runs as expected.
3. **Run tests:** Run MaxText tests to ensure there are no regressions.
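A quick shell smoke test can confirm the new dependencies are importable before running the full test suite. The module name below is a placeholder; substitute the packages you actually added:

```shell
# Verify that a newly added package imports cleanly.
# "numpy" is a placeholder; replace it with your new dependencies.
python3 -c "import importlib; importlib.import_module('numpy'); print('imports OK')"
```

If an import fails here, re-check the generated requirements files before debugging MaxText itself.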
## Appendix: Install XPK for MaxText Multi-host Workloads
> **_NOTE:_** XPK is only required for multi-host TPU configurations (e.g., v5p-128, v6e-256). For single-host training, XPK is not needed and you can run MaxText directly on your TPU VM.
XPK (Accelerated Processing Kit) is a tool designed to simplify the orchestration and management of workloads on Google Kubernetes Engine (GKE) clusters with TPU or GPU accelerators. In MaxText, we use XPK to submit both pre-training and post-training jobs on multi-host TPU configurations.
For your convenience, we provide a minimal installation path below:
```bash
# Directly install xpk using pip
pip install xpk
# Install kubectl
sudo apt-get update
sudo apt install snapd
sudo snap install kubectl --classic
# Install gke-gcloud-auth-plugin
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
```
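Once XPK is installed, multi-host jobs are typically submitted with `xpk workload create`. The cluster name, workload name, TPU topology, and training command below are placeholders, not values from this guide; verify the flags against the XPK version you installed:

```bash
# Hypothetical submission; substitute your own cluster, topology, and command.
xpk workload create \
  --cluster=my-gke-cluster \
  --workload=maxtext-pretrain \
  --tpu-type=v5p-128 \
  --num-slices=1 \
  --command="python3 -m MaxText.train MaxText/configs/base.yml run_name=my-run"
```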
---

*(From `docs/tutorials/posttraining/rl_on_multi_host.md`.)*
You can install the required dependencies using either of the following two options.
### Option 1: Installing stable releases of tunix and vllm-tpu
Run the following bash script to create a Docker image with all of the dependencies for MaxText, Tunix, vLLM, and tpu-inference installed.
In addition to the MaxText dependencies, it primarily installs `vllm-tpu`, which combines [vllm](https://github.com/vllm-project/vllm) and [tpu-inference](https://github.com/vllm-project/tpu-inference) to provide TPU inference for vLLM with unified JAX and PyTorch support. The build process takes approximately 10 to 15 minutes.
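As a sketch, such a dependency-image build is usually kicked off with a single script invocation from the repository root. The script name and `MODE` value below are assumptions and may differ in your MaxText checkout; use the script referenced in this tutorial:

```bash
# Hypothetical invocation; confirm the actual script name and MODE value
# in your MaxText checkout before running.
bash docker_build_dependency_image.sh MODE=post-training
```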