[Bug]: Qwen3.5-397B w8a8 hybrid deployment reports errors related to gdn_attn during high-concurrency precision testing #7848

@Levi-JQ

Description

Your current environment

The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.9.0+cpu
Is debug build: False

OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.3) 11.4.0
Clang version: 15.0.7
CMake version: version 4.3.0
Libc version: glibc-2.35

Python version: 3.11.14 (main, Feb 26 2026, 03:57:04) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-182.0.0.95.r1941_123.hce2.aarch64-aarch64-with-glibc2.35

CPU:
Architecture:                         aarch64
CPU op-mode(s):                       64-bit
Byte Order:                           Little Endian
CPU(s):                               320
On-line CPU(s) list:                  0-319
Vendor ID:                            HiSilicon
BIOS Vendor ID:                       HiSilicon
BIOS Model name:                      Kunpeng 920 7285Z
Model:                                0
Thread(s) per core:                   1
Core(s) per socket:                   80
Socket(s):                            4
Stepping:                             0x0
Frequency boost:                      disabled
CPU max MHz:                          3000.0000
CPU min MHz:                          400.0000
BogoMIPS:                             200.00
Flags:                                fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint svei8mm svef32mm svef64mm svebf16 i8mm bf16 dgh rng ecv
L1d cache:                            20 MiB (320 instances)
L1i cache:                            20 MiB (320 instances)
L2 cache:                             400 MiB (320 instances)
L3 cache:                             560 MiB (8 instances)
NUMA node(s):                         8
NUMA node0 CPU(s):                    0-39
NUMA node1 CPU(s):                    40-79
NUMA node2 CPU(s):                    80-119
NUMA node3 CPU(s):                    120-159
NUMA node4 CPU(s):                    160-199
NUMA node5 CPU(s):                    200-239
NUMA node6 CPU(s):                    240-279
NUMA node7 CPU(s):                    280-319
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Not affected
Vulnerability Spectre v1:             Mitigation; __user pointer sanitization
Vulnerability Spectre v2:             Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==27.1.0
[pip3] torch==2.9.0+cpu
[pip3] torch_npu==2.9.0
[pip3] torchaudio==2.9.0
[pip3] torchvision==0.24.0
[pip3] transformers==4.57.6
[pip3] triton-ascend==3.2.0
[conda] Could not collect
vLLM Version: 0.18.0
vLLM Ascend Version: 0.17.0rc2.dev143+gf83cb0e6d (git sha: f83cb0e6d)

ENV Variables:
ASCEND_TOOLKIT_LATEST_HOME=/usr/local/Ascend/ascend-toolkit/latest
ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0
ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0
ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5
ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1
ASCEND_TOOLKIT_HOME=/usr/local/Ascend/cann-8.5.1
ATB_COMPARE_TILING_EVERY_KERNEL=0
ASCEND_OPP_PATH=/usr/local/Ascend/cann-8.5.1/opp
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling:/usr/local/Ascend/cann-8.5.1/tools/aml/lib64:/usr/local/Ascend/cann-8.5.1/tools/aml/lib64/plugin:/usr/local/Ascend/cann-8.5.1/lib64:/usr/local/Ascend/cann-8.5.1/lib64/plugin/opskernel:/usr/local/Ascend/cann-8.5.1/lib64/plugin/nnengine:/usr/local/Ascend/cann-8.5.1/opp/built-in/op_impl/ai_core/tbe/op_tiling:/usr/local/Ascend/driver/lib64:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:/usr/local/python3.11.14/lib::/usr/local/lib
ASCEND_AICPU_PATH=/usr/local/Ascend/cann-8.5.1
ATB_STREAM_SYNC_EVERY_OPERATION_ENABLE=0
ASCEND_HOME_PATH=/usr/local/Ascend/cann-8.5.1
ATB_MATMUL_SHUFFLE_K_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_ALG_TYPE=1
ATB_SHARE_MEMORY_NAME_SUFFIX=
TORCH_DEVICE_BACKEND_AUTOLOAD=1
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1


NPU:
+------------------------------------------------------------------------------------------------+
| npu-smi 25.2.1                   Version: 25.2.1                                               |
+---------------------------+---------------+----------------------------------------------------+
| NPU   Name                | Health        | Power(W)    Temp(C)           Hugepages-Usage(page)|
| Chip  Phy-ID              | Bus-Id        | AICore(%)   Memory-Usage(MB)  HBM-Usage(MB)        |
+===========================+===============+====================================================+
| 0     Ascend910           | OK            | 159.7       35                0    / 0             |
| 0     0                   | 0000:9D:00.0  | 0           0    / 0          3159 / 65536         |
+------------------------------------------------------------------------------------------------+
| 0     Ascend910           | OK            | -           35                0    / 0             |
| 1     1                   | 0000:9F:00.0  | 0           0    / 0          2881 / 65536         |
+===========================+===============+====================================================+
| 1     Ascend910           | OK            | 165.5       37                0    / 0             |
| 0     2                   | 0000:99:00.0  | 0           0    / 0          3141 / 65536         |
+------------------------------------------------------------------------------------------------+
| 1     Ascend910           | OK            | -           36                0    / 0             |
| 1     3                   | 0000:9B:00.0  | 0           0    / 0          2883 / 65536         |
+===========================+===============+====================================================+
| 2     Ascend910           | OK            | 165.1       36                0    / 0             |
| 0     4                   | 0000:95:00.0  | 0           0    / 0          3145 / 65536         |
+------------------------------------------------------------------------------------------------+
| 2     Ascend910           | OK            | -           35                0    / 0             |
| 1     5                   | 0000:97:00.0  | 0           0    / 0          2884 / 65536         |
+===========================+===============+====================================================+
| 3     Ascend910           | OK            | 163.6       36                0    / 0             |
| 0     6                   | 0000:91:00.0  | 0           0    / 0          3142 / 65536         |
+------------------------------------------------------------------------------------------------+
| 3     Ascend910           | OK            | -           35                0    / 0             |
| 1     7                   | 0000:93:00.0  | 0           0    / 0          2884 / 65536         |
+===========================+===============+====================================================+
| 4     Ascend910           | OK            | 170.0       37                0    / 0             |
| 0     8                   | 0000:8D:00.0  | 0           0    / 0          3139 / 65536         |
+------------------------------------------------------------------------------------------------+
| 4     Ascend910           | OK            | -           37                0    / 0             |
| 1     9                   | 0000:8F:00.0  | 0           0    / 0          2888 / 65536         |
+===========================+===============+====================================================+
| 5     Ascend910           | OK            | 166.6       36                0    / 0             |
| 0     10                  | 0000:89:00.0  | 0           0    / 0          3135 / 65536         |
+------------------------------------------------------------------------------------------------+
| 5     Ascend910           | OK            | -           36                0    / 0             |
| 1     11                  | 0000:8B:00.0  | 0           0    / 0          2889 / 65536         |
+===========================+===============+====================================================+
| 6     Ascend910           | OK            | 164.8       36                0    / 0             |
| 0     12                  | 0000:85:00.0  | 0           0    / 0          3142 / 65536         |
+------------------------------------------------------------------------------------------------+
| 6     Ascend910           | OK            | -           37                0    / 0             |
| 1     13                  | 0000:87:00.0  | 0           0    / 0          2882 / 65536         |
+===========================+===============+====================================================+
| 7     Ascend910           | OK            | 169.0       36                0    / 0             |
| 0     14                  | 0000:81:00.0  | 0           0    / 0          3139 / 65536         |
+------------------------------------------------------------------------------------------------+
| 7     Ascend910           | OK            | -           37                0    / 0             |
| 1     15                  | 0000:83:00.0  | 0           0    / 0          2885 / 65536         |
+===========================+===============+====================================================+
+---------------------------+---------------+----------------------------------------------------+
| NPU     Chip              | Process id    | Process name             | Process memory(MB)      |
+===========================+===============+====================================================+
| No running processes found in NPU 0                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 1                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 2                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 3                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 4                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 5                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 6                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 7                                                            |
+===========================+===============+====================================================+

CANN:
package_name=Ascend-cann-toolkit
version=8.5.1
innerversion=V100R001C25SPC002B220
compatible_version=[V100R001C15],[V100R001C18],[V100R001C19],[V100R001C20],[V100R001C21],[V100R001C23]
arch=aarch64
os=linux
path=/usr/local/Ascend/cann-8.5.1

🐛 Describe the bug

Using the official community image, with the following commits:
vllm-ascend: f83cb0e
vllm: bcf2be96120005e9aea171927f85055a6a5c0cf6
Launch script:

export VLLM_USE_V1=1
export VLLM_VERSION=0.18.0
export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=380

export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
export PROMETHEUS_MULTIPROC_DIR=/tmp/

export COMMON_ARGS="
    --trust-remote-code
    --served-model-name auto
    --distributed-executor-backend mp
    --model-loader-extra-config {\"enable_multithread_load\":true,\"num_threads\":8}
    --enable-log-requests
    --enable-prompt-tokens-details
    --otlp-traces-endpoint https://antcollector.alipay.com/namespace/aicloud/task/otlptrace/otlp/api/v1/traces
"
export HCCL_BUFFSIZE=1024
export VLLM_ASCEND_ENABLE_NZ=2
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export VLLM_ASCEND_ENABLE_FUSED_MC2=1

# Speed up graph compilation
export HCCL_OP_EXPANSION_MODE="AIV"
# Single-threaded OpenMP
export OMP_NUM_THREADS=1
# Preload the jemalloc memory allocator
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
# Enable asynchronous operator execution
export TASK_QUEUE_ENABLE=1

# CPU performance tuning
# Lock CPU frequency to the performance governor
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Disable swapping
sysctl -w vm.swappiness=0
# Disable NUMA balancing
sysctl -w kernel.numa_balancing=0
# Reduce scheduler migrations; one migration is costed at 50000 ns
sysctl -w kernel.sched_migration_cost_ns=50000


nohup vllm serve /sfs_turbo/models/Qwen3.5-397B-A17B-w8a8-mtp \
    $COMMON_ARGS \
    --port 8006 \
    -tp 4 \
    -dp 4 \
    --max-model-len 262144 \
    --max-num-batched-tokens 16384 \
    --max-num-seqs 128 \
    --gpu-memory-utilization 0.90 \
    --compilation-config '{"cudagraph_capture_sizes":[1,4,8,16,24,32,48,64,72,128], "cudagraph_mode":"FULL_DECODE_ONLY"}' \
    --speculative-config '{"method": "qwen3_5_mtp", "num_speculative_tokens": 3, "enforce_eager": true}' \
    --async-scheduling \
    --quantization ascend \
    --mm-processor-cache-gb 0 \
    --enable-expert-parallel \
    --additional-config '{"enable_cpu_binding":true, "multistream_overlap_shared_expert": true}' > only-revert-vllm-pr.log 2>&1 &

Reproduction: run the accuracy test script on a multimodal dataset (OCRBench) at high concurrency (1024); the error is reported immediately.

Error message:

(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] WorkerProc hit an exception.
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] Traceback (most recent call last):
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/sfs_turbo/hw/yujinqi/issue/flashcomm2-official/vllm/vllm/v1/executor/multiproc_executor.py", line 927, in worker_busy_loop
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     output = func(*args, **kwargs)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]              ^^^^^^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/sfs_turbo/hw/yujinqi/issue/flashcomm2-official/vllm/vllm/v1/worker/worker_base.py", line 332, in execute_model
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     return self.worker.execute_model(scheduler_output)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker.py", line 395, in execute_model
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     output = self.model_runner.execute_model(scheduler_output, intermediate_tensors)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/usr/local/python3.11.14/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     return func(*args, **kwargs)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]            ^^^^^^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/model_runner_v1.py", line 1300, in execute_model
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     (attn_metadata, spec_decode_common_attn_metadata) = self._build_attention_metadata(
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/model_runner_v1.py", line 2219, in _build_attention_metadata
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     _build_attn_group_metadata(kv_cache_gid, attn_gid, cm)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/model_runner_v1.py", line 2163, in _build_attn_group_metadata
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     attn_metadata_i = builder.build(
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]                       ^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/vllm-workspace/vllm-ascend/vllm_ascend/patch/worker/patch_gdn_attn.py", line 292, in _patched_build
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     attn_metadata = _ORIGINAL_BUILD(
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]                     ^^^^^^^^^^^^^^^^
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/sfs_turbo/hw/yujinqi/issue/flashcomm2-official/vllm/vllm/v1/attention/backends/gdn_attn.py", line 310, in build
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     compute_causal_conv1d_metadata(
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]   File "/sfs_turbo/hw/yujinqi/issue/flashcomm2-official/vllm/vllm/v1/attention/backends/utils.py", line 777, in compute_causal_conv1d_metadata
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]     batch_ptr[0:mlist_len].copy_(mlist, non_blocking=True)
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] RuntimeError: The Inner error is reported as above. The process exits for this inner error, and the current copy params are srclen=504, dstlen=504, kind=1.
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] Since the operator is called asynchronously, the stacktrace may be inaccurate. If you want to get the accurate stacktrace, please set the environment variable ASCEND_LAUNCH_BLOCKING=1.
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] Note: ASCEND_LAUNCH_BLOCKING=1 will force ops to run in synchronous mode, resulting in performance degradation. Please unset ASCEND_LAUNCH_BLOCKING in time after debugging.
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] [ERROR] 2026-03-31-02:50:19 (PID:512, Device:1, RankID:-1) ERR00100 PTA call acl api failed.
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] EZ9999: Inner Error!
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] EZ9999[PID: 512] 2026-03-31-02:50:19.636.480 (EZ9999):  The error from device(chipId:0, dieId:1), serial number is 43, there is an exception of aivec error, core id is 34, error code = 0x800000, dump info: pc start: 0x12c0c1242778, current: 0x12c0c1245414, vec error info: 0x5819bc273d, mte error info: 0xc030000e4, ifu error info: 0x7f1e2da872740, ccu error info: 0x5eee09457a0000d8, cube error info: 0, biu error info: 0, aic error mask: 0x6500020bd00028c, para base: 0x12c144e1b000.[FUNC:PrintCoreInfo][FILE:device_error_core_proc.cc][LINE:347]
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]         TraceBack (most recent call last):
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]        The extend info: errcode:(0x800000, 0, 0) errorStr: The DDR address of the MTE instruction is out of range. fixp_error0 info: 0x30000e4, fixp_error1 info: 0xc, fsmId:1, tslot:6, thread:0, ctxid:0, blk:0, sublk:0, subErrType:4.[FUNC:PrintCoreInfo][FILE:device_error_core_proc.cc][LINE:360]
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]        rtMemcpyAsync execution failed, reason=the model stream execute failed[FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:61]
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932]        [Call][Rts]call rts api [rtMemcpyAsync] failed, retCode is 507011[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:148]
(Worker_DP0_TP1_EP1 pid=512) ERROR 03-31 02:50:19 [multiproc_executor.py:932] 
(ApiServer_2 pid=407) INFO:     33.213.203.41:51616 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
(Worker_DP2_TP1_EP9 pid=496) ERROR 03-31 02:50:19 [multiproc_executor.py:932] WorkerProc hit an exception.
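Since the failing `rtMemcpyAsync` is launched asynchronously, the Python stacktrace above may not point at the op that actually corrupted the device stream. As the log itself suggests, rerunning with synchronous op launch pins down the real failure site; a minimal sketch (to be prepended to the launch script above before `vllm serve`):

```shell
# Force synchronous op execution so the Python stacktrace points at the
# actual failing Ascend op. This degrades performance, so unset it again
# once debugging is done.
export ASCEND_LAUNCH_BLOCKING=1
echo "ASCEND_LAUNCH_BLOCKING=$ASCEND_LAUNCH_BLOCKING"
```

With this set, the `RuntimeError` should surface at the true call site (likely inside `compute_causal_conv1d_metadata`) rather than at the next host-device sync.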
