Your current environment
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.7.1+cpu
Is debug build: False
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: Could not collect
CMake version: version 4.1.2
Libc version: glibc-2.35
Python version: 3.11.13 (main, Nov 2 2025, 10:27:27) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-216.0.0.115.oe2203sp4.aarch64-aarch64-with-glibc2.35
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 640
On-line CPU(s) list: 0-639
Vendor ID: HiSilicon
BIOS Vendor ID: HiSilicon
BIOS Model name: Kunpeng 920 7280Z
Model: 0
Thread(s) per core: 2
Core(s) per socket: 80
Socket(s): 4
Stepping: 0x0
Frequency boost: disabled
CPU max MHz: 2900.0000
CPU min MHz: 400.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint svei8mm svef32mm svef64mm svebf16 i8mm bf16 dgh rng ecv
L1d cache: 20 MiB (320 instances)
L1i cache: 20 MiB (320 instances)
L2 cache: 400 MiB (320 instances)
L3 cache: 560 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-79
NUMA node1 CPU(s): 80-159
NUMA node2 CPU(s): 160-239
NUMA node3 CPU(s): 240-319
NUMA node4 CPU(s): 320-399
NUMA node5 CPU(s): 400-479
NUMA node6 CPU(s): 480-559
NUMA node7 CPU(s): 560-639
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==27.1.0
[pip3] torch==2.7.1+cpu
[pip3] torch_npu==2.7.1
[pip3] torchvision==0.22.1
[pip3] transformers==4.57.1
[conda] Could not collect
vLLM Version: 0.11.0
vLLM Ascend Version: 0.11.0rc2.dev2+g650ce8ad1 (git sha: 650ce8ad1)
ENV Variables:
ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0
ATB_OPSRUNNER_SETUP_CACHE_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_GLOBAL=1
ATB_DEVICE_TILING_BUFFER_BLOCK_NUM=32
ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0
ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5
ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1
ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
ATB_COMPARE_TILING_EVERY_KERNEL=0
ASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:
ASCEND_AICPU_PATH=/usr/local/Ascend/ascend-toolkit/latest
ATB_STREAM_SYNC_EVERY_OPERATION_ENABLE=0
ASCEND_HOME_PATH=/usr/local/Ascend/ascend-toolkit/latest
ATB_MATMUL_SHUFFLE_K_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_ALG_TYPE=1
ATB_HOST_TILING_BUFFER_BLOCK_NUM=128
ATB_SHARE_MEMORY_NAME_SUFFIX=
TORCH_DEVICE_BACKEND_AUTOLOAD=1
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
NPU:
+------------------------------------------------------------------------------------------------+
| npu-smi 25.2.3 Version: 25.2.3 |
+---------------------------+---------------+----------------------------------------------------+
| NPU Name | Health | Power(W) Temp(C) Hugepages-Usage(page)|
| Chip Phy-ID | Bus-Id | AICore(%) Memory-Usage(MB) HBM-Usage(MB) |
+===========================+===============+====================================================+
| 0 Ascend910 | OK | 167.9 48 0 / 0 |
| 0 0 | 0000:9D:00.0 | 0 0 / 0 3105 / 65536 |
+------------------------------------------------------------------------------------------------+
| 0 Ascend910 | OK | - 47 0 / 0 |
| 1 1 | 0000:9F:00.0 | 0 0 / 0 2877 / 65536 |
+===========================+===============+====================================================+
| 1 Ascend910 | OK | 170.2 47 0 / 0 |
| 0 2 | 0000:99:00.0 | 0 0 / 0 3099 / 65536 |
+------------------------------------------------------------------------------------------------+
| 1 Ascend910 | OK | - 48 0 / 0 |
| 1 3 | 0000:9B:00.0 | 0 0 / 0 2883 / 65536 |
+===========================+===============+====================================================+
| 2 Ascend910 | OK | 171.6 47 0 / 0 |
| 0 4 | 0000:95:00.0 | 0 0 / 0 3096 / 65536 |
+------------------------------------------------------------------------------------------------+
| 2 Ascend910 | OK | - 47 0 / 0 |
| 1 5 | 0000:97:00.0 | 0 0 / 0 2886 / 65536 |
+===========================+===============+====================================================+
| 3 Ascend910 | OK | 169.2 46 0 / 0 |
| 0 6 | 0000:91:00.0 | 0 0 / 0 3096 / 65536 |
+------------------------------------------------------------------------------------------------+
| 3 Ascend910 | OK | - 45 0 / 0 |
| 1 7 | 0000:93:00.0 | 0 0 / 0 2886 / 65536 |
+===========================+===============+====================================================+
| 4 Ascend910 | OK | 166.6 47 0 / 0 |
| 0 8 | 0000:8D:00.0 | 0 0 / 0 3100 / 65536 |
+------------------------------------------------------------------------------------------------+
| 4 Ascend910 | OK | - 46 0 / 0 |
| 1 9 | 0000:8F:00.0 | 0 0 / 0 2883 / 65536 |
+===========================+===============+====================================================+
| 5 Ascend910 | OK | 174.0 47 0 / 0 |
| 0 10 | 0000:89:00.0 | 0 0 / 0 3098 / 65536 |
+------------------------------------------------------------------------------------------------+
| 5 Ascend910 | OK | - 47 0 / 0 |
| 1 11 | 0000:8B:00.0 | 0 0 / 0 2883 / 65536 |
+===========================+===============+====================================================+
| 6 Ascend910 | OK | 169.7 47 0 / 0 |
| 0 12 | 0000:85:00.0 | 0 0 / 0 3105 / 65536 |
+------------------------------------------------------------------------------------------------+
| 6 Ascend910 | OK | - 44 0 / 0 |
| 1 13 | 0000:87:00.0 | 0 0 / 0 2877 / 65536 |
+===========================+===============+====================================================+
| 7 Ascend910 | OK | 172.8 47 0 / 0 |
| 0 14 | 0000:81:00.0 | 0 0 / 0 3099 / 65536 |
+------------------------------------------------------------------------------------------------+
| 7 Ascend910 | OK | - 47 0 / 0 |
| 1 15 | 0000:83:00.0 | 0 0 / 0 2883 / 65536 |
+===========================+===============+====================================================+
+---------------------------+---------------+----------------------------------------------------+
| NPU Chip | Process id | Process name | Process memory(MB) |
+===========================+===============+====================================================+
| No running processes found in NPU 0 |
+===========================+===============+====================================================+
| No running processes found in NPU 1 |
+===========================+===============+====================================================+
| No running processes found in NPU 2 |
+===========================+===============+====================================================+
| No running processes found in NPU 3 |
+===========================+===============+====================================================+
| No running processes found in NPU 4 |
+===========================+===============+====================================================+
| No running processes found in NPU 5 |
+===========================+===============+====================================================+
| No running processes found in NPU 6 |
+===========================+===============+====================================================+
| No running processes found in NPU 7 |
+===========================+===============+====================================================+
CANN:
package_name=Ascend-cann-toolkit
version=8.3.RC1
innerversion=V100R001C23SPC001B235
compatible_version=[V100R001C15],[V100R001C18],[V100R001C19],[V100R001C20],[V100R001C21],[V100R001C23]
arch=aarch64
os=linux
path=/usr/local/Ascend/ascend-toolkit/8.3.RC1/aarch64-linux
🐛 Describe the bug
I tried to run DeepSeek-V3-w8a8 with dp=2 and tp=8 on a single A3 node, using offline_data_parallel.py from the examples directory. The command is:
cd examples/
python offline_data_parallel.py \
--model="/home/weight/vllm-ascend/DeepSeek-V3-w8a8" \
--dp-size=2 \
--tp-size=8
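For context, dp=2 with tp=8 needs 2 * 8 = 16 NPUs, which matches the 16 Ascend 910 chips shown in the npu-smi output below. Here is a minimal sketch of the launch pattern I understand the example to follow: one process per DP rank, with tensor parallelism handled inside each engine. The environment variable names and LLM arguments are my assumptions for illustration, not copied from offline_data_parallel.py:

```python
# Minimal sketch of the dp=2 / tp=8 launch pattern (assumptions, not the
# actual offline_data_parallel.py code): one process per DP rank, with
# tensor parallelism handled inside each engine.
import os
from multiprocessing import Process

from vllm import LLM, SamplingParams

MODEL = "/home/weight/vllm-ascend/DeepSeek-V3-w8a8"
DP_SIZE, TP_SIZE = 2, 8  # 2 * 8 = 16 NPUs in total


def dp_worker(dp_rank: int) -> None:
    # Hypothetical env setup for the DP rendezvous; names assumed from vLLM's
    # data-parallel conventions.
    os.environ["VLLM_DP_RANK"] = str(dp_rank)
    os.environ["VLLM_DP_SIZE"] = str(DP_SIZE)
    os.environ["VLLM_DP_MASTER_IP"] = "127.0.0.1"
    os.environ["VLLM_DP_MASTER_PORT"] = "12345"  # shared DP rendezvous port

    llm = LLM(model=MODEL, tensor_parallel_size=TP_SIZE, enforce_eager=True)
    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=8))
    print(f"DP rank {dp_rank}: {outputs[0].outputs[0].text!r}")


if __name__ == "__main__":
    procs = [Process(target=dp_worker, args=(rank,)) for rank in range(DP_SIZE)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```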
The corresponding error is:
ERROR 11-20 15:54:04 [multiproc_executor.py:597] WorkerProc failed to start.
ERROR 11-20 15:54:04 [multiproc_executor.py:597] Traceback (most recent call last):
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm/vllm/v1/executor/multiproc_executor.py", line 571, in worker_main
ERROR 11-20 15:54:04 [multiproc_executor.py:597] worker = WorkerProc(*args, **kwargs)
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm/vllm/v1/executor/multiproc_executor.py", line 430, in __init__
ERROR 11-20 15:54:04 [multiproc_executor.py:597] self.worker.init_device()
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm/vllm/worker/worker_base.py", line 259, in init_device
ERROR 11-20 15:54:04 [multiproc_executor.py:597] self.worker.init_device() # type: ignore
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 215, in init_device
ERROR 11-20 15:54:04 [multiproc_executor.py:597] device = self._init_device()
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 209, in _init_device
ERROR 11-20 15:54:04 [multiproc_executor.py:597] self._init_worker_distributed_environment()
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm-ascend/vllm_ascend/worker/worker_v1.py", line 387, in _init_worker_distributed_environment
ERROR 11-20 15:54:04 [multiproc_executor.py:597] init_distributed_environment(self.parallel_config.world_size,
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/vllm-workspace/vllm/vllm/distributed/parallel_state.py", line 1064, in init_distributed_environment
ERROR 11-20 15:54:04 [multiproc_executor.py:597] torch.distributed.init_process_group(
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/usr/local/python3.11.13/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
ERROR 11-20 15:54:04 [multiproc_executor.py:597] return func(*args, **kwargs)
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/usr/local/python3.11.13/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
ERROR 11-20 15:54:04 [multiproc_executor.py:597] func_return = func(*args, **kwargs)
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/usr/local/python3.11.13/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1710, in init_process_group
ERROR 11-20 15:54:04 [multiproc_executor.py:597] store, rank, world_size = next(rendezvous_iterator)
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/usr/local/python3.11.13/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 230, in _tcp_rendezvous_handler
ERROR 11-20 15:54:04 [multiproc_executor.py:597] store = _create_c10d_store(
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^^^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] File "/usr/local/python3.11.13/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 198, in _create_c10d_store
ERROR 11-20 15:54:04 [multiproc_executor.py:597] return TCPStore(
ERROR 11-20 15:54:04 [multiproc_executor.py:597] ^^^^^^^^^
ERROR 11-20 15:54:04 [multiproc_executor.py:597] torch.distributed.DistNetworkError: The server socket has failed to listen on any local network address. port: 37111, useIpv6: false, code: -98, name: EADDRINUSE, message: address already in use
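For reference, code -98 (EADDRINUSE) means some process was already listening on port 37111 when this worker tried to create its TCPStore. A quick self-contained way to check whether the port is still held (the port number is taken from the traceback above; the rest is only illustrative):

```python
# Check whether TCP port 37111 (from the traceback above) can still be bound.
import socket


def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if binding the port fails, i.e. another process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind((host, port))
        except OSError:
            return True
    return False


if __name__ == "__main__":
    print("port 37111 in use:", port_in_use(37111))
```

Equivalently, `ss -ltnp | grep 37111` on the host shows which process currently owns the port.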