[Bug]: GLM5-W4A8 stuck on Loading safetensors checkpoint #6734

@nightraid-yyf

Description

Your current environment

PyTorch version: 2.9.0+cpu Is debug build: False

OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: 15.0.7
CMake version: version 4.2.1
Libc version: glibc-2.35

Python version: 3.11.14 (main, Jan 19 2026, 06:28:35) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.0-98.0.0.101.oe2403sp1.aarch64-aarch64-with-glibc2.35

CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 640
On-line CPU(s) list: 0-639
Vendor ID: HiSilicon
Model: 0
Thread(s) per core: 2
Core(s) per cluster: 80
Socket(s): -
Cluster(s): 4
Stepping: 0x0
Frequency boost: disabled
CPU max MHz: 2900.0000
CPU min MHz: 400.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint svei8mm svef32mm svef64mm svebf16 i8mm bf16 dgh rng ecv
L1d cache: 20 MiB (320 instances)
L1i cache: 20 MiB (320 instances)
L2 cache: 400 MiB (320 instances)
L3 cache: 560 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-79
NUMA node1 CPU(s): 80-159
NUMA node2 CPU(s): 160-239
NUMA node3 CPU(s): 240-319
NUMA node4 CPU(s): 320-399
NUMA node5 CPU(s): 400-479
NUMA node6 CPU(s): 480-559
NUMA node7 CPU(s): 560-639
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy==1.11.1
[pip3] mypy_extensions==1.1.0
[pip3] numpy==1.26.4
[pip3] pyzmq==27.1.0
[pip3] sentence-transformers==5.2.2
[pip3] torch==2.9.0+cpu
[pip3] torch_npu==2.9.0
[pip3] torchvision==0.24.0
[pip3] transformers==5.2.0.dev0
[pip3] triton-ascend==3.2.0
[pip3] zmq==0.0.0
[conda] Could not collect
vLLM Version: 0.16.0rc2.dev3+g978a37c82 (git sha: 978a37c82)
vLLM Ascend Version: 0.14.0rc2.dev150+gff3a50d01 (git sha: ff3a50d)

ENV Variables:
ASCEND_TOOLKIT_LATEST_HOME=/usr/local/Ascend/ascend-toolkit/latest
ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0
ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0
ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5
ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1
ASCEND_TOOLKIT_HOME=/usr/local/Ascend/cann-8.5.0
ATB_COMPARE_TILING_EVERY_KERNEL=0
ASCEND_OPP_PATH=/usr/local/Ascend/cann-8.5.0/opp
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/tests/atbopstest:/usr/local/Ascend/cann-8.5.0/lib64:/usr/local/Ascend/cann-8.5.0/lib64/plugin/opskernel:/usr/local/Ascend/cann-8.5.0/lib64/plugin/nnengine:/usr/local/Ascend/cann-8.5.0/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/cann-8.5.0/tools/aml/lib64:/usr/local/Ascend/cann-8.5.0/tools/aml/lib64/plugin:/usr/local/Ascend/driver/lib64:/usr/local/Ascend/driver/lib64/common:/usr/local/Ascend/driver/lib64/driver:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling:/usr/local/Ascend/driver/lib64:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:/usr/local/python3.11.14/lib:
ASCEND_AICPU_PATH=/usr/local/Ascend/cann-8.5.0
ATB_STREAM_SYNC_EVERY_OPERATION_ENABLE=0
ASCEND_HOME_PATH=/usr/local/Ascend/cann-8.5.0
ATB_MATMUL_SHUFFLE_K_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_ALG_TYPE=1
ATB_SHARE_MEMORY_NAME_SUFFIX=
TORCH_DEVICE_BACKEND_AUTOLOAD=1
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1

NPU:
+------------------------------------------------------------------------------------------------+
| npu-smi 25.5.0 Version: 25.5.0 |
+---------------------------+---------------+----------------------------------------------------+
| NPU Name | Health | Power(W) Temp(C) Hugepages-Usage(page)|
| Chip Phy-ID | Bus-Id | AICore(%) Memory-Usage(MB) HBM-Usage(MB) |
+===========================+===============+====================================================+
| 0 Ascend910 | OK | 162.0 40 0 / 0 |
| 0 0 | 0000:9D:00.0 | 0 0 / 0 38752/ 65536 |
+------------------------------------------------------------------------------------------------+
| 0 Ascend910 | OK | - 38 0 / 0 |
| 1 1 | 0000:9F:00.0 | 0 0 / 0 37671/ 65536 |
+===========================+===============+====================================================+
| 1 Ascend910 | OK | 158.8 39 0 / 0 |
| 0 2 | 0000:99:00.0 | 0 0 / 0 37924/ 65536 |
+------------------------------------------------------------------------------------------------+
| 1 Ascend910 | OK | - 39 0 / 0 |
| 1 3 | 0000:9B:00.0 | 0 0 / 0 37682/ 65536 |
+===========================+===============+====================================================+
| 2 Ascend910 | OK | 165.4 38 0 / 0 |
| 0 4 | 0000:95:00.0 | 0 0 / 0 37923/ 65536 |
+------------------------------------------------------------------------------------------------+
| 2 Ascend910 | OK | - 39 0 / 0 |
| 1 5 | 0000:97:00.0 | 0 0 / 0 37681/ 65536 |
+===========================+===============+====================================================+
| 3 Ascend910 | OK | 161.6 40 0 / 0 |
| 0 6 | 0000:91:00.0 | 0 0 / 0 37922/ 65536 |
+------------------------------------------------------------------------------------------------+
| 3 Ascend910 | OK | - 39 0 / 0 |
| 1 7 | 0000:93:00.0 | 0 0 / 0 37683/ 65536 |
+===========================+===============+====================================================+
| 4 Ascend910 | OK | 175.0 39 0 / 0 |
| 0 8 | 0000:8D:00.0 | 0 0 / 0 37935/ 65536 |
+------------------------------------------------------------------------------------------------+
| 4 Ascend910 | OK | - 41 0 / 0 |
| 1 9 | 0000:8F:00.0 | 0 0 / 0 37670/ 65536 |
+===========================+===============+====================================================+
| 5 Ascend910 | OK | 166.5 39 0 / 0 |
| 0 10 | 0000:89:00.0 | 0 0 / 0 37932/ 65536 |
+------------------------------------------------------------------------------------------------+
| 5 Ascend910 | OK | - 39 0 / 0 |
| 1 11 | 0000:8B:00.0 | 0 0 / 0 37670/ 65536 |
+===========================+===============+====================================================+
| 6 Ascend910 | OK | 162.6 40 0 / 0 |
| 0 12 | 0000:85:00.0 | 0 0 / 0 37934/ 65536 |
+------------------------------------------------------------------------------------------------+
| 6 Ascend910 | OK | - 39 0 / 0 |
| 1 13 | 0000:87:00.0 | 0 0 / 0 37669/ 65536 |
+===========================+===============+====================================================+
| 7 Ascend910 | OK | 159.1 40 0 / 0 |
| 0 14 | 0000:81:00.0 | 0 0 / 0 37934/ 65536 |
+------------------------------------------------------------------------------------------------+
| 7 Ascend910 | OK | - 39 0 / 0 |
| 1 15 | 0000:83:00.0 | 0 0 / 0 37671/ 65536 |
+===========================+===============+====================================================+
+---------------------------+---------------+----------------------------------------------------+
| NPU Chip | Process id | Process name | Process memory(MB) |
+===========================+===============+====================================================+
| 0 0 | 1590 | VLLMWorker_TP | 72 |
| 0 0 | 1242 | VLLMWorker_TP | 72 |
| 0 0 | 1068 | VLLMWorker_TP | 72 |
| 0 0 | 633 | VLLMWorker_TP | 72 |
| 0 0 | 1329 | VLLMWorker_TP | 72 |
| 0 0 | 720 | VLLMWorker_TP | 72 |
| 0 0 | 1765 | VLLMWorker_TP | 72 |
| 0 0 | 981 | VLLMWorker_TP | 72 |
| 0 1 | 549 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 1 0 | 633 | VLLMWorker_TP | 34810 |
| 1 1 | 720 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 2 0 | 807 | VLLMWorker_TP | 34810 |
| 2 1 | 894 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 3 0 | 981 | VLLMWorker_TP | 34810 |
| 3 1 | 1068 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 4 0 | 1155 | VLLMWorker_TP | 34810 |
| 4 1 | 1242 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 5 0 | 1329 | VLLMWorker_TP | 34810 |
| 5 1 | 1416 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 6 0 | 1503 | VLLMWorker_TP | 34810 |
| 6 1 | 1590 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+
| 7 0 | 1677 | VLLMWorker_TP | 34810 |
| 7 1 | 1765 | VLLMWorker_TP | 34810 |
+===========================+===============+====================================================+

CANN:
package_name=Ascend-cann-toolkit
version=8.5.0
innerversion=V100R001C25SPC001B232
compatible_version=[V100R001C15],[V100R001C18],[V100R001C19],[V100R001C20],[V100R001C21],[V100R001C23]
arch=aarch64
os=linux
path=/usr/local/Ascend/cann-8.5.0

๐Ÿ› Describe the bug

I am using vllm-ascend to deploy the W4A8 quantized version of GLM5 with the Docker image quay.io/ascend/vllm-ascend:glm5-a3. During model loading, the process gets stuck at shard 60/99 and never progresses. How can I troubleshoot this issue?

vLLM start command:

export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_OP_EXPANSION_MODE=AIV
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
#export PYTHONPATH=$PYTHONPATH:/xxxxx/vllm
export MOONCAKE_CONFIG_PATH="/nvme/Model/scripts/mooncake.json"
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
export PYTHONHASHSEED=0
export ACL_OP_INIT_MODE=1
export ASCEND_BUFFER_POOL=16:32
export ASCEND_CONNECT_TIMEOUT=10000
export ASCEND_TRANSFER_TIMEOUT=10000
export VLLM_ASCEND_BALANCE_SCHEDULING=1

vllm serve /nvme/Model/weights \
  --host 0.0.0.0 \
  --port 1025 \
  --data-parallel-size 1 \
  --tensor-parallel-size 16 \
  --enable-expert-parallel \
  --seed 1024 \
  --served-model-name GLM-5 \
  --max-model-len 202752 \
  --max-num-batched-tokens 202752 \
  --max-num-seqs 16 \
  --trust-remote-code \
  --gpu-memory-utilization 0.95 \
  --enable-prefix-caching \
  --enable-chunked-prefill \
  --quantization ascend \
  --async-scheduling \
  --enable-auto-tool-choice \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --additional-config '{"multistream_overlap_shared_expert":true}' \
  --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY"}' \
  --speculative-config '{"num_speculative_tokens": 3, "method": "deepseek_mtp"}' \
  --kv-transfer-config \
    '{
        "kv_connector": "AscendStoreConnector",
        "kv_role": "kv_both",
        "kv_connector_extra_config": {
                "lookup_rpc_port":"1",
                "backend": "mooncake"
        }
    }'

vLLM output:

(Worker_TP13_EP13 pid=5766) INFO 02-13 06:25:33 [compilation.py:903] Using OOT custom backend for compilation.
Loading safetensors checkpoint shards:   1% Completed | 1/99 [00:01<01:53,  1.16s/it]
Loading safetensors checkpoint shards:   2% Completed | 2/99 [00:02<01:52,  1.16s/it]
Loading safetensors checkpoint shards:   3% Completed | 3/99 [00:03<02:07,  1.33s/it]
Loading safetensors checkpoint shards:   4% Completed | 4/99 [00:04<01:57,  1.24s/it]
Loading safetensors checkpoint shards:   5% Completed | 5/99 [00:05<01:41,  1.08s/it]
Loading safetensors checkpoint shards:   6% Completed | 6/99 [00:07<01:52,  1.21s/it]
Loading safetensors checkpoint shards:   7% Completed | 7/99 [00:08<01:58,  1.29s/it]
Loading safetensors checkpoint shards:   8% Completed | 8/99 [00:10<01:59,  1.32s/it]
Loading safetensors checkpoint shards:   9% Completed | 9/99 [00:11<02:03,  1.38s/it]
Loading safetensors checkpoint shards:  10% Completed | 10/99 [00:12<01:40,  1.13s/it]
Loading safetensors checkpoint shards:  11% Completed | 11/99 [00:13<01:52,  1.27s/it]
Loading safetensors checkpoint shards:  12% Completed | 12/99 [00:15<02:07,  1.47s/it]
Loading safetensors checkpoint shards:  13% Completed | 13/99 [00:17<02:14,  1.57s/it]
Loading safetensors checkpoint shards:  14% Completed | 14/99 [00:19<02:20,  1.65s/it]
Loading safetensors checkpoint shards:  15% Completed | 15/99 [00:21<02:23,  1.70s/it]
Loading safetensors checkpoint shards:  16% Completed | 16/99 [00:21<01:50,  1.34s/it]
Loading safetensors checkpoint shards:  17% Completed | 17/99 [00:23<02:01,  1.48s/it]
Loading safetensors checkpoint shards:  18% Completed | 18/99 [00:25<02:06,  1.57s/it]
Loading safetensors checkpoint shards:  19% Completed | 19/99 [00:27<02:12,  1.65s/it]
Loading safetensors checkpoint shards:  20% Completed | 20/99 [00:28<02:14,  1.71s/it]
Loading safetensors checkpoint shards:  21% Completed | 21/99 [00:30<02:18,  1.77s/it]
Loading safetensors checkpoint shards:  22% Completed | 22/99 [00:31<01:46,  1.39s/it]
Loading safetensors checkpoint shards:  23% Completed | 23/99 [00:33<01:54,  1.50s/it]
Loading safetensors checkpoint shards:  24% Completed | 24/99 [00:34<02:01,  1.62s/it]
Loading safetensors checkpoint shards:  25% Completed | 25/99 [00:36<02:03,  1.68s/it]
Loading safetensors checkpoint shards:  26% Completed | 26/99 [00:38<02:05,  1.71s/it]
Loading safetensors checkpoint shards:  27% Completed | 27/99 [00:40<02:02,  1.70s/it]
Loading safetensors checkpoint shards:  28% Completed | 28/99 [00:40<01:39,  1.40s/it]
Loading safetensors checkpoint shards:  29% Completed | 29/99 [00:42<01:46,  1.52s/it]
Loading safetensors checkpoint shards:  30% Completed | 30/99 [00:44<01:51,  1.61s/it]
Loading safetensors checkpoint shards:  31% Completed | 31/99 [00:46<01:53,  1.68s/it]
Loading safetensors checkpoint shards:  32% Completed | 32/99 [00:48<01:56,  1.73s/it]
Loading safetensors checkpoint shards:  33% Completed | 33/99 [00:48<01:32,  1.41s/it]
Loading safetensors checkpoint shards:  34% Completed | 34/99 [00:50<01:36,  1.49s/it]
Loading safetensors checkpoint shards:  35% Completed | 35/99 [00:52<01:41,  1.58s/it]
Loading safetensors checkpoint shards:  36% Completed | 36/99 [00:54<01:43,  1.65s/it]
Loading safetensors checkpoint shards:  37% Completed | 37/99 [00:55<01:45,  1.70s/it]
Loading safetensors checkpoint shards:  38% Completed | 38/99 [00:57<01:45,  1.74s/it]
Loading safetensors checkpoint shards:  39% Completed | 39/99 [00:58<01:21,  1.36s/it]
Loading safetensors checkpoint shards:  40% Completed | 40/99 [01:00<01:28,  1.49s/it]
Loading safetensors checkpoint shards:  41% Completed | 41/99 [01:01<01:31,  1.58s/it]
Loading safetensors checkpoint shards:  42% Completed | 42/99 [01:03<01:34,  1.66s/it]
Loading safetensors checkpoint shards:  43% Completed | 43/99 [01:05<01:36,  1.72s/it]
Loading safetensors checkpoint shards:  44% Completed | 44/99 [01:07<01:36,  1.76s/it]
Loading safetensors checkpoint shards:  45% Completed | 45/99 [01:07<01:14,  1.37s/it]
Loading safetensors checkpoint shards:  46% Completed | 46/99 [01:09<01:17,  1.46s/it]
Loading safetensors checkpoint shards:  47% Completed | 47/99 [01:11<01:21,  1.58s/it]
Loading safetensors checkpoint shards:  48% Completed | 48/99 [01:13<01:25,  1.67s/it]
Loading safetensors checkpoint shards:  49% Completed | 49/99 [01:15<01:27,  1.75s/it]
Loading safetensors checkpoint shards:  51% Completed | 50/99 [01:17<01:27,  1.79s/it]
Loading safetensors checkpoint shards:  52% Completed | 51/99 [01:17<01:06,  1.39s/it]
Loading safetensors checkpoint shards:  53% Completed | 52/99 [01:19<01:11,  1.53s/it]
Loading safetensors checkpoint shards:  54% Completed | 53/99 [01:21<01:16,  1.67s/it]
Loading safetensors checkpoint shards:  55% Completed | 54/99 [01:23<01:19,  1.76s/it]
Loading safetensors checkpoint shards:  56% Completed | 55/99 [01:25<01:21,  1.85s/it]
Loading safetensors checkpoint shards:  57% Completed | 56/99 [01:26<01:11,  1.66s/it]
Loading safetensors checkpoint shards:  58% Completed | 57/99 [01:28<01:06,  1.58s/it]
Loading safetensors checkpoint shards:  59% Completed | 58/99 [01:30<01:12,  1.77s/it]
Loading safetensors checkpoint shards:  60% Completed | 59/99 [01:32<01:17,  1.94s/it]
Loading safetensors checkpoint shards:  61% Completed | 60/99 [01:35<01:21,  2.08s/it]
<<<stuck here
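One generic way to see where the hung worker actually is (a sketch, not specific to vLLM or Ascend): register a stdlib faulthandler signal handler early in the process, then send that signal to the stuck pid to get a Python stack dump. If it can be installed in the container, `py-spy dump --pid <pid>` gives the same information without modifying the process.

```python
import faulthandler
import signal
import sys

# Dump the Python stack of every thread to stderr when the process
# receives SIGUSR1. If this is registered early (e.g. at the top of the
# serving entry point), running `kill -USR1 <pid>` against the hung
# VLLMWorker_TP process shows which frame the checkpoint loader is
# blocked in (file I/O, a collective, a lock, ...).
faulthandler.register(signal.SIGUSR1, file=sys.stderr, all_threads=True)
```

This is pure CPython, so it works regardless of the accelerator backend; it only reveals the Python-level frame, not what a native HCCL/CANN call underneath it is doing.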

I have tried using the following Python snippet to check each weight file, and every one of them can be opened individually:

from safetensors import safe_open

# Open one shard and list its tensor names (does not deserialize data)
with safe_open("[filename].safetensors", framework="pt") as f:
    print(f.keys())
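Note that listing keys only parses the JSON header, so a shard truncated in its data section can still pass that check. A stdlib-only sketch (no safetensors import; based on the published safetensors layout of an 8-byte little-endian header length, the JSON header, then the tensor byte buffer) that compares each file's size against the offsets its header declares:

```python
import json
import struct
from pathlib import Path

def check_safetensors_size(path):
    """Return (ok, actual_size, expected_size) for one shard.

    The expected file size is 8 + header_len + the largest
    data_offsets end value from the header, so a truncated or padded
    shard is detected without deserializing any tensor data.
    """
    raw = Path(path).read_bytes()
    (header_len,) = struct.unpack("<Q", raw[:8])  # u64 little-endian
    header = json.loads(raw[8 : 8 + header_len])
    data_end = max(
        (entry["data_offsets"][1]
         for name, entry in header.items() if name != "__metadata__"),
        default=0,
    )
    expected = 8 + header_len + data_end
    return len(raw) == expected, len(raw), expected

# Example: scan every shard in the weights directory
# for shard in sorted(Path("/nvme/Model/weights").glob("*.safetensors")):
#     ok, actual, expected = check_safetensors_size(shard)
#     if not ok:
#         print(f"{shard.name}: {actual} bytes, expected {expected}")
```

If all shards pass both checks, the hang is more likely in the distributed setup (e.g. the kv-transfer/mooncake connector or a collective during weight sharding) than in the files themselves.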

Labels: bug (Something isn't working), glm5
