Description
Your current environment
Collecting environment information...
System Info
==============================
OS : Ubuntu 24.04.3 LTS (aarch64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.31.2
Libc version : glibc-2.39
==============================
PyTorch Info
PyTorch version : 2.8.0+cpu
Is debug build : False
CUDA used to build PyTorch : Could not collect
ROCM used to build PyTorch : N/A
==============================
Python Environment
Python version : 3.13.5 | packaged by Anaconda, Inc. | (main, Jun 12 2025, 16:17:47) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-6.11.0-1016-nvidia-aarch64-with-glibc2.39
==============================
CUDA / GPU Info
Is CUDA available : False
CUDA runtime version : 13.0.88
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : GPU 0: NVIDIA GB10
Nvidia driver version : 580.95.05
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: ARM
Model name: Cortex-X925
Model: 1
Thread(s) per core: 1
Core(s) per socket: 10
Socket(s): 1
Stepping: r0p1
CPU(s) scaling MHz: 110%
CPU max MHz: 4004.0000
CPU min MHz: 1378.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti ecv afp wfxt
Model name: Cortex-A725
Model: 1
Thread(s) per core: 1
Core(s) per socket: 10
Socket(s): 1
Stepping: r0p1
CPU(s) scaling MHz: 130%
CPU max MHz: 2860.0000
CPU min MHz: 338.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti ecv afp wfxt
L1d cache: 1.3 MiB (20 instances)
L1i cache: 1.3 MiB (20 instances)
L2 cache: 25 MiB (20 instances)
L3 cache: 24 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
[pip3] numpy==2.2.6
[pip3] nvidia-cublas==13.0.0.19
[pip3] nvidia-cuda-cupti==13.0.48
[pip3] nvidia-cuda-nvrtc==13.0.48
[pip3] nvidia-cuda-runtime==13.0.48
[pip3] nvidia-cudnn-cu13==9.13.0.50
[pip3] nvidia-cufft==12.0.0.15
[pip3] nvidia-cufile==1.15.0.42
[pip3] nvidia-curand==10.4.0.35
[pip3] nvidia-cusolver==12.0.3.29
[pip3] nvidia-cusparse==12.6.2.49
[pip3] nvidia-cusparselt-cu13==0.8.0
[pip3] nvidia-nccl-cu13==2.27.7
[pip3] nvidia-nvjitlink==13.0.39
[pip3] nvidia-nvshmem-cu13==3.3.24
[pip3] nvidia-nvtx==13.0.39
[pip3] pyzmq==27.0.2
[pip3] segmentation_models_pytorch==0.5.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.55.2
[pip3] triton==3.5.0
[conda] numpy 2.2.6 pypi_0 pypi
[conda] nvidia-cublas 13.0.0.19 pypi_0 pypi
[conda] nvidia-cuda-cupti 13.0.48 pypi_0 pypi
[conda] nvidia-cuda-nvrtc 13.0.48 pypi_0 pypi
[conda] nvidia-cuda-runtime 13.0.48 pypi_0 pypi
[conda] nvidia-cudnn-cu13 9.13.0.50 pypi_0 pypi
[conda] nvidia-cufft 12.0.0.15 pypi_0 pypi
[conda] nvidia-cufile 1.15.0.42 pypi_0 pypi
[conda] nvidia-curand 10.4.0.35 pypi_0 pypi
[conda] nvidia-cusolver 12.0.3.29 pypi_0 pypi
[conda] nvidia-cusparse 12.6.2.49 pypi_0 pypi
[conda] nvidia-cusparselt-cu13 0.8.0 pypi_0 pypi
[conda] nvidia-nccl-cu13 2.27.7 pypi_0 pypi
[conda] nvidia-nvjitlink 13.0.39 pypi_0 pypi
[conda] nvidia-nvshmem-cu13 3.3.24 pypi_0 pypi
[conda] nvidia-nvtx 13.0.39 pypi_0 pypi
[conda] pyzmq 27.0.2 pypi_0 pypi
[conda] segmentation-models-pytorch 0.5.0 pypi_0 pypi
[conda] torch 2.8.0 pypi_0 pypi
[conda] torchaudio 2.8.0 pypi_0 pypi
[conda] torchvision 0.23.0 pypi_0 pypi
[conda] transformers 4.55.2 pypi_0 pypi
[conda] triton 3.5.0 pypi_0 pypi
==============================
vLLM Info
ROCM Version : Could not collect
vLLM Version : 0.11.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
GPU0 NIC0 NIC1 NIC2 NIC3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NODE NODE NODE NODE 0-19 0 N/A
NIC0 NODE X PIX NODE NODE
NIC1 NODE PIX X NODE NODE
NIC2 NODE NODE NODE X PIX
NIC3 NODE NODE NODE PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: rocep1s0f0
NIC1: rocep1s0f1
NIC2: roceP2p1s0f0
NIC3: roceP2p1s0f1
==============================
Environment Variables
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
How you are installing vllm
vLLM CUDA Library Compatibility Issue - ARM64 NVIDIA GB10
Problem Summary
vLLM 0.11.0 fails to start on an ARM64 system with an NVIDIA GB10 GPU because required CUDA libraries are missing: the system ships CUDA 13.x, but the vLLM wheel is compiled against CUDA 12.x, causing import errors.
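A quick way to confirm what the prebuilt extension was linked against is to inspect its recorded dependencies (a sketch; the compiled module's exact filename, e.g. _C.abi3.so, varies between wheels, so locate it first):
$ find vllm_env -name '_C*.so'
$ ldd vllm_env/lib/python3.12/site-packages/vllm/_C.abi3.so | grep cudart
# expected on this system: libcudart.so.12 => not found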
Environment Details
System Information
- Architecture: ARM64 (aarch64), Cortex-X925/Cortex-A725 cores
- GPU: NVIDIA GB10
- CUDA Version: 13.x (libcudart.so.13 available, not libcudart.so.12)
- OS: Linux
Python & Package Versions
Python: 3.12.3
vLLM: 0.11.0
PyTorch: 2.8.0
torchvision: 0.23.0
torchaudio: 2.8.0
transformers: 4.57.1
Compute Capability: 12.1
Installed via
/usr/bin/python3.12 -m venv vllm_env
source vllm_env/bin/activate
pip install vllm
Error Messages
Error 1: libcudart.so.12 not found
ImportError: libcudart.so.12: cannot open shared object file: No such file or directory
Traceback:
File "/home/swatson/work/MachineLearning/vllm_env/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 18, in <module>
import vllm._C # noqa
^^^^^^^^^^^^^^
ImportError: libcudart.so.12: cannot open shared object file: No such file or directory
Error 2: libtorch_cuda.so not found
ImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory
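The second error points at the torch wheel rather than vLLM: CPU-only torch builds ship no libtorch_cuda.so at all, and the collect_env output above reports torch 2.8.0+cpu. A quick check of which build is installed:
$ python -c "import torch; print(torch.__version__, torch.version.cuda)"
# '2.8.0+cpu None' indicates a CPU-only wheel, which contains no libtorch_cuda.so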
Reproduction Steps
- Create clean environment with Python 3.12:
/usr/bin/python3.12 -m venv vllm_env
source vllm_env/bin/activate
pip install --upgrade pip
- Install vLLM:
pip install vllm==0.11.0
- Test the import:
python -c "import vllm; print('OK')"
- Try to run the server:
export LD_LIBRARY_PATH=/usr/local/cuda/targets/sbsa-linux/lib:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
python -m vllm.entrypoints.openai.api_server --model moonshotai/Kimi-Linear-48B-A3B-Instruct --port 8000
Result: ImportError: libcudart.so.12: cannot open shared object file
Root Cause Analysis
- vLLM compiled for CUDA 12.x: The vllm._C extension module is compiled against CUDA Toolkit 12.x
- System has CUDA 13.x: The deployment environment has CUDA 13.x installed with libcudart.so.13
- CUDA ABI incompatibility: CUDA major versions are not ABI-compatible, and the 13.x toolkit does not ship libcudart.so.12, so a binary linked against the 12.x runtime cannot load (demonstrated below)
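The ABI gap can be demonstrated directly with the dynamic loader, independent of vLLM (a minimal sketch using ctypes):
$ python -c "import ctypes; ctypes.CDLL('libcudart.so.13'); print('13: OK')"
13: OK
$ python -c "import ctypes; ctypes.CDLL('libcudart.so.12')"
# OSError: libcudart.so.12: cannot open shared object file: No such file or directory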
Available System Libraries
$ ldconfig -p | grep libcudart
libcudart.so.13 (libc6,AArch64) => /usr/local/cuda/targets/sbsa-linux/lib/libcudart.so.13
libcudart.so (libc6,AArch64) => /usr/local/cuda/targets/sbsa-linux/lib/libcudart.so
Attempted Workarounds
1. LD_LIBRARY_PATH Modification
export LD_LIBRARY_PATH=/usr/local/cuda/targets/sbsa-linux/lib:$LD_LIBRARY_PATH
Result: ❌ Does not work (CUDA version mismatch, not a search-path issue)
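LD_LIBRARY_PATH only widens the loader's search path; it cannot make a missing versioned library appear. Confirming that the CUDA 12 runtime is absent altogether:
$ find /usr/local/cuda -name 'libcudart.so.12*'
# no output: only the .so.13 runtime exists on this system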
2. Creating Library Symlinks
Symlinking libcudart.so.13 → libcudart.so.12 is not viable: the major versions are not ABI-compatible, so even if the loader accepted the file, behavior would be undefined.
3. Trying Different vLLM Versions
- vLLM 0.6.3: requires torch==2.4.0, which is unavailable for this platform (only torch 2.6.0+ wheels are published)
- vLLM 0.11.0: Requires CUDA 12.x (system has 13.x)
- vLLM 0.12.0+: Not available on PyPI (unclear release status)
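4. Building From Source (untested)
Building vLLM against the locally installed CUDA 13 toolkit may sidestep the wheel problem entirely. A minimal sketch, assuming the standard editable install works on this platform (the vLLM build documentation is authoritative, and the pinned torch dependency may itself need a CUDA 13 build):
$ git clone https://github.com/vllm-project/vllm.git
$ cd vllm
# compiles vllm._C against whichever CUDA toolkit nvcc on PATH provides (13.x here)
$ pip install -e .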
Expected Behavior
vLLM should either:
- Support CUDA 13.x in wheel distributions
- Build wheels for multiple CUDA versions (cu118, cu121, cu124, and ideally cu130)
- Detect system CUDA version and download compatible wheels
- Provide clear error messages indicating required CUDA version
Possible Solutions
Option A: Distribute CUDA 13.x wheels
Build and distribute vllm-0.11.0 wheels compiled for CUDA 13.x
Option B: CUDA version detection
Auto-detect the system CUDA version at install time and fetch a matching wheel (PyTorch approximates this with per-CUDA index URLs such as /whl/cu121 rather than true auto-detection); a hypothetical sketch follows.
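Everything below is hypothetical: vLLM does not currently publish per-CUDA index URLs, and the URL shown is invented purely for illustration.
$ cuda_major=$(ldconfig -p | grep -oE 'libcudart\.so\.[0-9]+' | head -n1 | grep -oE '[0-9]+$')
$ echo "detected CUDA major version: ${cuda_major}"
# hypothetical per-CUDA wheel index, analogous to PyTorch's /whl/cu121:
$ pip install vllm --extra-index-url "https://wheels.vllm.ai/cu${cuda_major}0"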
Option C: Documentation
Clearly document CUDA version requirements in README and installation guide
Option D: Conda packages
Provide conda packages with proper CUDA dependency resolution
Impact
- vLLM cannot run on ARM64 systems with CUDA 13.x
- Affects NVIDIA GB10 systems and other ARM64 GPU deployments
- Blocks users trying to run Kimi-Linear-48B or other large models on newer CUDA systems
References
- System CUDA: /usr/local/cuda/ (version 13.x)
- PyTorch CUDA Support: Works fine with torch 2.8.0
- GPU Query:
$ nvidia-smi
NVIDIA GB10, compute capability 12.1
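The compute capability can also be queried directly (newer drivers support a compute_cap query field; output shown as reported on this system):
$ nvidia-smi --query-gpu=name,compute_cap --format=csv
name, compute_cap
NVIDIA GB10, 12.1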
Related Issues
This may be related to:
- Platform compatibility across CUDA versions
- ARM64 architecture support
- Pre-built wheel distribution strategy
Requested Action
Please:
- ✅ Acknowledge ARM64 + CUDA 13.x as a supported configuration
- ✅ Provide CUDA 13.x compatible wheels
- ✅ Update documentation with CUDA version requirements
- ✅ Consider automatic CUDA version detection (like PyTorch)
Additional Notes:
- This is blocking deployment of vLLM on modern ARM64 GPU instances
- PyTorch 2.8.0 works fine on the same system, showing CUDA 13.x support is possible
- Other projects handle this gracefully: PyTorch publishes per-CUDA-version wheels, and projects like llama.cpp build against whatever toolkit is installed locally