
[Bug]: VLLM_ROCM_USE_AITER=1 hit device_gemm with the specified compilation parameters does not support this GEMM problem for Qwen3-235B-A22B #50

@vllmellm

Your current environment

The output of `python collect_env.py`:
Collecting environment information...

CMake version                : version 3.31.6
Libc version                 : glibc-2.35

==============================
       PyTorch Info
==============================
PyTorch version              : 2.7.0+gitf717b2a
Is debug build               : False
CUDA used to build PyTorch   : N/A
ROCM used to build PyTorch   : 6.4.43483-a187df25c

==============================
      Python Environment
==============================
Python version               : 3.12.11 (main, Jun  4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform              : Linux-5.15.0-116-generic-x86_64-with-glibc2.35

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : Could not collect
CUDA_MODULE_LOADING set to   : LAZY
GPU models and configuration : AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version        : Could not collect
cuDNN version                : Could not collect
HIP runtime version          : 6.4.43483
MIOpen runtime version       : 3.4.0
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               192
On-line CPU(s) list:                  0-191
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 9654 96-Core Processor
CPU family:                           25
Model:                                17
Thread(s) per core:                   1
Core(s) per socket:                   96
Socket(s):                            2
Stepping:                             1
Frequency boost:                      enabled
CPU max MHz:                          3707.8120
CPU min MHz:                          1500.0000
BogoMIPS:                             4793.01
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                       AMD-V
L1d cache:                            6 MiB (192 instances)
L1i cache:                            6 MiB (192 instances)
L2 cache:                             192 MiB (192 instances)
L3 cache:                             768 MiB (24 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-95
NUMA node1 CPU(s):                    96-191
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Mitigation; safe RET
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] pyzmq==27.0.0
[pip3] torch==2.7.0+gitf717b2a
[pip3] torchvision==0.21.0+7af6987
[pip3] transformers==4.53.0
[pip3] triton==3.2.0+gite5be006a
[conda] Could not collect

==============================
         vLLM Info
==============================
ROCM Version                 : 6.4.43483-a187df25c
Neuron SDK Version           : N/A
vLLM Version                 : N/A (dev)
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
  ============================ ROCm System Management Interface ============================
================================ Weight between two GPUs =================================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            15           15           15           15           15           15           15           
GPU1   15           0            15           15           15           15           15           15           
GPU2   15           15           0            15           15           15           15           15           
GPU3   15           15           15           0            15           15           15           15           
GPU4   15           15           15           15           0            15           15           15           
GPU5   15           15           15           15           15           0            15           15           
GPU6   15           15           15           15           15           15           0            15           
GPU7   15           15           15           15           15           15           15           0            

================================= Hops between two GPUs ==================================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            1            1            1            1            1            1            1            
GPU1   1            0            1            1            1            1            1            1            
GPU2   1            1            0            1            1            1            1            1            
GPU3   1            1            1            0            1            1            1            1            
GPU4   1            1            1            1            0            1            1            1            
GPU5   1            1            1            1            1            0            1            1            
GPU6   1            1            1            1            1            1            0            1            
GPU7   1            1            1            1            1            1            1            0            

=============================== Link Type between two GPUs ===============================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         
GPU1   XGMI         0            XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         
GPU2   XGMI         XGMI         0            XGMI         XGMI         XGMI         XGMI         XGMI         
GPU3   XGMI         XGMI         XGMI         0            XGMI         XGMI         XGMI         XGMI         
GPU4   XGMI         XGMI         XGMI         XGMI         0            XGMI         XGMI         XGMI         
GPU5   XGMI         XGMI         XGMI         XGMI         XGMI         0            XGMI         XGMI         
GPU6   XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         0            XGMI         
GPU7   XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         0            

======================================= Numa Nodes =======================================
GPU[0]          : (Topology) Numa Node: 0
GPU[0]          : (Topology) Numa Affinity: 0
GPU[1]          : (Topology) Numa Node: 0
GPU[1]          : (Topology) Numa Affinity: 0
GPU[2]          : (Topology) Numa Node: 0
GPU[2]          : (Topology) Numa Affinity: 0
GPU[3]          : (Topology) Numa Node: 0
GPU[3]          : (Topology) Numa Affinity: 0
GPU[4]          : (Topology) Numa Node: 1
GPU[4]          : (Topology) Numa Affinity: 1
GPU[5]          : (Topology) Numa Node: 1
GPU[5]          : (Topology) Numa Affinity: 1
GPU[6]          : (Topology) Numa Node: 1
GPU[6]          : (Topology) Numa Affinity: 1
GPU[7]          : (Topology) Numa Node: 1
GPU[7]          : (Topology) Numa Affinity: 1
================================== End of ROCm SMI Log ===================================

==============================
     Environment Variables
==============================
PYTORCH_TUNABLEOP_TUNING=0
PYTORCH_TUNABLEOP_ENABLED=1
PYTORCH_ROCM_ARCH=gfx90a;gfx942;gfx1100;gfx1101;gfx1200;gfx1201
LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:
PYTORCH_TUNABLEOP_FILENAME=/app/afo_tune_device_%d_full.csv
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

When serving Qwen3-235B-A22B with TP=8 and AITER enabled, the vLLM V0 engine throws:
RuntimeError: wrong! device_gemm with the specified compilation parameters does not support this GEMM problem

The serving command:

VLLM_ROCM_USE_AITER=1 VLLM_USE_V1=0 vllm serve /models/Qwen3-235B-A22B/ \
    --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.9 \
    --disable-log-requests \
    --trust-remote-code \
    --max-model-len 32768
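
For context on why this model and TP degree in particular may hit an unsupported GEMM shape: under TP=8 the MoE down-projection gets an unusually narrow per-rank dimension. A quick back-of-the-envelope sketch (hidden_size and moe_intermediate_size are from the Qwen3-235B-A22B HF config; the shard math and the link to the failing CK instance are my assumptions, not confirmed):

# Hypothetical shape math for the grouped GEMMs in aiter's fused_moe_2stages.
# hidden_size / moe_intermediate_size come from the Qwen3-235B-A22B config;
# the per-rank split assumes vLLM shards the expert FFN width across TP ranks.
hidden_size = 4096
moe_intermediate_size = 1536
tp = 8
inter_per_rank = moe_intermediate_size // tp  # = 192 per GPU

# stage 1 (gate+up): [tokens, 4096] @ w1[E, 2*192, 4096] per expert
# stage 2 (down)   : [tokens, 192]  @ w2[E, 4096, 192]  per expert
# A K dimension of only 192 in stage 2 may fall outside the shapes the
# prebuilt CK device_gemm instances were compiled for, which would match
# the error message below.
print(inter_per_rank)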

The full backtrace:

ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen3_moe.py", line 136, in forward
ERROR 08-05 08:53:39 [engine.py:458]     final_hidden_states = self.experts(hidden_states=hidden_states,
ERROR 08-05 08:53:39 [engine.py:458]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 08-05 08:53:39 [engine.py:458]     return self._call_impl(*args, **kwargs)
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 08-05 08:53:39 [engine.py:458]     return forward_call(*args, **kwargs)
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1548, in forward
ERROR 08-05 08:53:39 [engine.py:458]     return torch.ops.vllm.moe_forward(hidden_states, router_logits,
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 1158, in __call__
ERROR 08-05 08:53:39 [engine.py:458]     return self._op(*args, **(kwargs or {}))
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1733, in moe_forward
ERROR 08-05 08:53:39 [engine.py:458]     return self.forward_impl(hidden_states, router_logits)
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 1642, in forward_impl
ERROR 08-05 08:53:39 [engine.py:458]     final_hidden_states = self.quant_method.apply(
ERROR 08-05 08:53:39 [engine.py:458]                           ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 588, in apply
ERROR 08-05 08:53:39 [engine.py:458]     return self.forward(
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/custom_op.py", line 44, in forward
ERROR 08-05 08:53:39 [engine.py:458]     return self._forward_method(*args, **kwargs)
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/custom_op.py", line 59, in forward_hip
ERROR 08-05 08:53:39 [engine.py:458]     return self.forward_cuda(*args, **kwargs)
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 639, in forward_cuda
ERROR 08-05 08:53:39 [engine.py:458]     return self.rocm_aiter_fused_experts(
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/rocm_aiter_fused_moe.py", line 376, in rocm_aiter_fused_experts
ERROR 08-05 08:53:39 [engine.py:458]     return torch.ops.vllm.rocm_aiter_fused_moe(
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 1158, in __call__
ERROR 08-05 08:53:39 [engine.py:458]     return self._op(*args, **(kwargs or {}))
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/rocm_aiter_fused_moe.py", line 197, in rocm_aiter_fused_moe_impl
ERROR 08-05 08:53:39 [engine.py:458]     return fused_moe(hidden_states, w1, w2, topk_weight, topk_ids, expert_mask,
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/aiter/fused_moe.py", line 153, in fused_moe
ERROR 08-05 08:53:39 [engine.py:458]     return fused_moe_2stages(
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/aiter/fused_moe.py", line 496, in fused_moe_2stages
ERROR 08-05 08:53:39 [engine.py:458]     stage2(
ERROR 08-05 08:53:39 [engine.py:458]   File "/usr/local/lib/python3.12/dist-packages/aiter/jit/core.py", line 607, in wrapper
ERROR 08-05 08:53:39 [engine.py:458]     return op(*args, **kwargs) 
ERROR 08-05 08:53:39 [engine.py:458]            ^^^^^^^^^^^^^^^^^^^ 
ERROR 08-05 08:53:39 [engine.py:458] RuntimeError: wrong! device_gemm with the specified compilation parameters does not support this GEMM problem
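
A minimal sketch for reproducing this outside of vLLM, calling aiter's fused_moe directly with the per-rank shapes above. The positional arguments mirror vLLM's call in rocm_aiter_fused_moe.py (visible in the traceback); passing None for expert_mask, the dtypes, and leaving the remaining arguments at their defaults are assumptions, so treat this as untested:

import torch
from aiter.fused_moe import fused_moe  # module/function names taken from the traceback

E, top_k, hidden, inter = 128, 8, 4096, 192  # assumed Qwen3-235B-A22B per-rank shapes under TP=8
tokens = 32

hidden_states = torch.randn(tokens, hidden, dtype=torch.bfloat16, device="cuda")
w1 = torch.randn(E, 2 * inter, hidden, dtype=torch.bfloat16, device="cuda")  # fused gate+up
w2 = torch.randn(E, hidden, inter, dtype=torch.bfloat16, device="cuda")      # down projection
topk_weight = torch.rand(tokens, top_k, dtype=torch.float32, device="cuda")
topk_ids = torch.randint(0, E, (tokens, top_k), dtype=torch.int32, device="cuda")

# expert_mask=None is assumed to mean "no expert-parallel masking", as in vLLM without EP.
out = fused_moe(hidden_states, w1, w2, topk_weight, topk_ids, None)
print(out.shape)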

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
