
add guide for SGLang-Jax on TPUs #103

Open
JamesBrianD wants to merge 1 commit into AI-Hypercomputer:main from JamesBrianD:sglang-jax

Conversation

@JamesBrianD

This is a tutorial for running the sglang-jax project on TPUs.

@google-cla

google-cla bot commented Oct 28, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

@JamesBrianD
Author

@bvandermoon Could you please review this PR?

Collaborator

@bvandermoon bvandermoon left a comment


Hey @JamesBrianD, thanks for reaching out and for your contribution. Could you please provide a little more context around this PR?

@JamesBrianD
Author

JamesBrianD commented Oct 31, 2025

Hello @bvandermoon, thanks for reviewing! Happy to provide context.
SGLang-Jax is a new open-source inference engine that the LMSYS team just announced. It's built entirely on JAX/XLA and designed specifically for TPU inference. It comes with all the production features you'd expect: continuous batching, prefix caching, different parallelism strategies, speculative decoding, and custom TPU kernels.
I know there's already MaxText and vLLM-TPU in the recipes, just thought this could be another option for TPU inference that folks might find useful.
Project: https://github.com/sgl-project/sglang-jax
Blog: https://lmsys.org/blog/2025-10-29-sglang-jax/
Let me know if you'd like me to adjust anything in the PR!

@JamesBrianD
Author

Hi @bvandermoon, friendly ping on this PR.
Whenever you have time, I’d appreciate a quick review. If you think any changes or clarifications are needed, I’m happy to update the PR accordingly.

@JamesBrianD
Author

@anthonsu @karan please review it.

@karan
Collaborator

karan commented Jan 13, 2026

Thanks for the ping @JamesBrianD. I'm not an SGLang expert; I'll try to find someone more knowledgeable to review this.


@depksingh depksingh left a comment


two minor comments, otherwise LGTM

- **TPU chips per node**: 4 (v6e)
- **Total TPU chips**: 64
- **Tensor Parallelism (TP)**: 32 (for non-MoE layers)
- **Expert Tensor Parallelism (ETP)**: 64 (for MoE experts)
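As a quick sanity check on the quoted numbers (a sketch; the interpretation of how the degrees relate is mine, not from the PR):

```python
# Sanity-check the topology quoted above: 64 chips total, 4 chips per v6e node.
chips_per_node = 4    # v6e hosts expose 4 chips each
total_chips = 64
tp = 32               # tensor parallelism degree (non-MoE layers)
etp = 64              # expert tensor parallelism degree (MoE experts)

num_nodes = total_chips // chips_per_node
assert total_chips % tp == 0, "TP degree must divide the chip count"
assert etp == total_chips, "ETP spans every chip in this configuration"
print(f"nodes: {num_nodes}")  # nodes: 16
```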


nit: can we please add the TPU provisioning and SSH commands like the Qwen3 readme, so that users looking only at this readme are aware of the steps.
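For reference, a minimal provisioning-and-SSH sketch of the kind the reviewer is asking for (the TPU name, zone, and runtime version below are placeholders/assumptions, not values from this PR; these commands require an active Google Cloud project and cannot be run standalone):

```shell
# Provision a 64-chip v6e slice (accelerator type and runtime version are
# assumptions; check the Cloud TPU docs for the current v6e runtime image).
gcloud compute tpus tpu-vm create sglang-jax-tpu \
  --zone=us-east5-b \
  --accelerator-type=v6e-64 \
  --version=v2-alpha-tpuv6e

# SSH into worker 0; use --worker=all to fan a command out to every node.
gcloud compute tpus tpu-vm ssh sglang-jax-tpu --zone=us-east5-b --worker=0
```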


### Launch Command

Run the following command **on each node**, replacing:


Running the same command on each node by SSHing into them one at a time seems like a manual, time-consuming process. Can you please check whether the version below works to run the same command on all workers at once, which would simplify the process?

gcloud compute tpus tpu-vm ssh tpu-name --zone=zone --worker=all --command='pip install "jax[tpu]==0.4.20" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html'

https://docs.cloud.google.com/tpu/docs/managing-tpus-tpu-vm

Since node-rank is the only parameter that changes per node, is there another way to pass it so the command itself doesn't depend on it? That way we could run the same command on all nodes with the single command above. If not, I think the existing way should be fine.
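One possible approach (an assumption on my part, not something verified against this PR): on multi-host Cloud TPU VMs, each worker can query the instance metadata server for its own index, so the launch script could derive node-rank itself and the same command would work on every worker:

```shell
# Derive this worker's index from the TPU VM metadata server.
# The 'agent-worker-number' attribute is an assumption; verify it against the
# Cloud TPU docs for your runtime. This only works on a TPU VM, not locally.
NODE_RANK=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/agent-worker-number")
echo "node rank: ${NODE_RANK}"
```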

