@fadara01 fadara01 commented Dec 16, 2025

@meta-cla meta-cla bot added the cla signed label Dec 16, 2025
@fadara01 fadara01 requested a deployment to pytorch-x-vllm January 8, 2026 10:59 — with GitHub Actions In progress
@fadara01 fadara01 force-pushed the reenable-arm-vllm-perf branch from ab6d64f to 0f7978f Compare January 8, 2026 12:37
- Re-enables the benchmarks disabled in #114
- Related to vllm-project/vllm#26494 (not sure which needs to go in first)
- Uses the default block_size in serving benchmarks (i.e. 128, instead of pinning it to 16)
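As a sketch of the block_size change described above (the exact invocation is illustrative; the benchmark scripts in this repo may pass these flags differently), the serving benchmarks would stop pinning `--block-size` and fall back to vLLM's default:

```shell
# Before: the serving benchmark pinned a small KV-cache block size
# (model name is a placeholder, not necessarily what the benchmark uses)
vllm serve meta-llama/Llama-3.1-8B-Instruct --block-size 16

# After: omit --block-size so vLLM picks its default
# (128 per the PR description above)
vllm serve meta-llama/Llama-3.1-8B-Instruct
```

Dropping the explicit flag keeps the benchmark aligned with vLLM's defaults going forward, so future default changes are picked up automatically.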

Signed-off-by: Fadi Arafeh <[email protected]>
@fadara01 fadara01 force-pushed the reenable-arm-vllm-perf branch from 0f7978f to 72306f2 Compare January 8, 2026 12:42

fadara01 commented Jan 8, 2026

Hi @huydhn - this is now working. See https://hud.pytorch.org/benchmark/llms?repoName=vllm-project%2Fvllm with device=arm-cpu.

Could you please review?
Thank you!

@fadara01 fadara01 requested a review from huydhn January 8, 2026 13:03