
[BugFix] Fix load_weights error when loading HunyuanImage3.0 #1598

Merged
hsliuustc0106 merged 1 commit into vllm-project:main from Semmer2:load_weight_error on Mar 2, 2026

Conversation

Semmer2 (Contributor) commented Mar 2, 2026

Moves some submodule weight-loading code from HunyuanImage3Pipeline into AutoWeightsLoader:load_weights, fixing a "weights not initialized" error.


Purpose

DiffusersPipelineLoader:load_weights added a strict verification that no weights are left unloaded, which reports an error when loading the HunyuanImage3.0 model. This PR moves some submodule weight-loading code into AutoWeightsLoader:load_weights so those weights are tracked by the check, fixing the bug.
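The pattern behind the fix can be illustrated with a toy sketch (hypothetical names; the real change uses vLLM's AutoWeightsLoader and torch parameters). The point is that a single loader assigns every checkpoint tensor and returns the set of names it loaded, so a strict "load gap" check can confirm nothing was left uninitialized; weights loaded outside that path look like gaps and trip the error.

```python
# Toy sketch of routing all weight loading through one tracked loader
# (illustrative only; not the actual vLLM AutoWeightsLoader code).

class AutoWeightsLoaderSketch:
    def __init__(self, module_params):
        # module_params: parameter name -> current value (None = uninitialized)
        self.params = module_params

    def load_weights(self, weights):
        # Assign each (name, tensor) pair and record what was loaded.
        loaded = set()
        for name, tensor in weights:
            if name in self.params:
                self.params[name] = tensor
                loaded.add(name)
        return loaded

# Pipeline parameters, including submodules (e.g. vae / text_encoder) that
# previously were loaded outside the tracked path and so tripped the check.
params = {"transformer.w": None, "vae.w": None, "text_encoder.w": None}
ckpt = [("transformer.w", 1.0), ("vae.w", 2.0), ("text_encoder.w", 3.0)]

loaded = AutoWeightsLoaderSketch(params).load_weights(ckpt)
missing = set(params) - loaded  # the strict "weights not initialized" check
assert not missing
```

Once every submodule's weights flow through the single load_weights path, the set difference is empty and the strict verification passes.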

Test Plan

python examples/offline_inference/text_to_image/text_to_image.py --mode /data/HunyuanImage-3.0/ --prompt "A brown and white dog is running on the grass" --output output_image.png --num-inference-steps 50 --guidance-scale 5.0 --tensor-parallel-size 8 --seed 1234

Test Result

INFO 03-01 22:53:10 [omni.py:181] Initializing stages for model: /data/HunyuanImage-3.0/
INFO 03-01 22:53:10 [omni.py:313] No omni_master_address provided, defaulting to localhost (127.0.0.1)
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
INFO 03-01 22:53:10 [config.py:379] Replacing legacy 'type' key with 'rope_type'
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
INFO 03-01 22:53:10 [config.py:379] Replacing legacy 'type' key with 'rope_type'
INFO 03-01 22:53:10 [initialization.py:35] No OmniTransferConfig provided
INFO 03-01 22:53:10 [omni.py:347] [Orchestrator] Loaded 1 stages
INFO 03-01 22:53:10 [omni.py:458] [Orchestrator] Waiting for 1 stages to initialize (timeout: 300s)
[Stage-0] INFO 03-01 22:53:32 [omni_stage.py:679] Starting stage worker with model: /data/HunyuanImage-3.0/
[Stage-0] INFO 03-01 22:53:32 [omni_stage.py:694] [Stage] Set VLLM_WORKER_MULTIPROC_METHOD=spawn
[Stage-0] INFO 03-01 22:53:32 [omni_stage.py:725] [Stage-0] ZMQ transport detected; disabling SHM IPC (shm_threshold_bytes set to maxsize)
[Stage-0] INFO 03-01 22:53:32 [omni_stage.py:85] Using sequential init locks (nvml_available=True, pid_host=False)
[Stage-0] INFO 03-01 22:53:34 [multiproc_executor.py:74] Starting server...
[Stage-0] INFO 03-01 22:53:57 [diffusion_worker.py:349] Worker 0 created result MessageQueue
[Stage-0] INFO 03-01 22:53:59 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:53:59 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:53:59 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:53:59 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:53:59 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:53:59 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:54:00 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:54:00 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:54:03 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:54:03 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:54:03 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:54:03 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:54:03 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:54:03 [scheduler.py:224] Chunked prefill is enabled with max_num_batched_tokens=2048.
[Stage-0] INFO 03-01 22:54:03 [vllm.py:689] Asynchronous scheduling is enabled.
[Stage-0] INFO 03-01 22:54:03 [vllm.py:689] Asynchronous scheduling is enabled.
[Gloo] Rank 7 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 6 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 0 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 1 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 2 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 3 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 4 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 5 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 7: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 6: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 0: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 1: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 2: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 4: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 5: Initialized device and distributed environment.
[Stage-0] INFO 03-01 22:54:03 [diffusion_worker.py:114] Worker 3: Initialized device and distributed environment.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 0: sp_group=[0], ulysses_group=[0], ring_group=[0]
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 7: sp_group=[7], ulysses_group=[7], ring_group=[7]
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 5: sp_group=[5], ulysses_group=[5], ring_group=[5]
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 6: sp_group=[6], ulysses_group=[6], ring_group=[6]
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 4: sp_group=[4], ulysses_group=[4], ring_group=[4]
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 3: sp_group=[3], ulysses_group=[3], ring_group=[3]
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:575] Building SP subgroups from explicit sp_group_ranks (sp_size=1, ulysses=1, ring=1, use_ulysses_low=True).
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 1: sp_group=[1], ulysses_group=[1], ring_group=[1]
[Stage-0] INFO 03-01 22:54:03 [parallel_state.py:617] SP group details for rank 2: sp_group=[2], ulysses_group=[2], ring_group=[2]
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 4 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 1 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 0 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 2 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 3 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 5 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 7 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 6 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
The module name  (originally ) is not a valid Python identifier. Please rename the original module to avoid import issues.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [config.py:379] Replacing legacy 'type' key with 'rope_type'
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:04 [platform.py:77] Defaulting to diffusion attention backend FLASH_ATTN
[Stage-0] INFO 03-01 22:54:05 [unquantized.py:131] Using TRITON backend for Unquantized MoE
Loading safetensors checkpoint shards:   0% Completed | 0/32 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:   3% Completed | 1/32 [00:02<01:13,  2.36s/it]
Loading safetensors checkpoint shards:   6% Completed | 2/32 [00:05<01:31,  3.05s/it]
Loading safetensors checkpoint shards:   9% Completed | 3/32 [00:09<01:37,  3.35s/it]
Loading safetensors checkpoint shards:  12% Completed | 4/32 [00:13<01:41,  3.64s/it]
Loading safetensors checkpoint shards:  16% Completed | 5/32 [00:17<01:38,  3.65s/it]
Loading safetensors checkpoint shards:  19% Completed | 6/32 [00:21<01:38,  3.80s/it]
Loading safetensors checkpoint shards:  22% Completed | 7/32 [00:24<01:32,  3.69s/it]
Loading safetensors checkpoint shards:  25% Completed | 8/32 [00:28<01:28,  3.67s/it]
Loading safetensors checkpoint shards:  28% Completed | 9/32 [00:32<01:23,  3.62s/it]
Loading safetensors checkpoint shards:  31% Completed | 10/32 [00:35<01:18,  3.58s/it]
Loading safetensors checkpoint shards:  34% Completed | 11/32 [00:39<01:14,  3.57s/it]
Loading safetensors checkpoint shards:  38% Completed | 12/32 [00:42<01:11,  3.58s/it]
Loading safetensors checkpoint shards:  41% Completed | 13/32 [00:46<01:07,  3.54s/it]
Loading safetensors checkpoint shards:  44% Completed | 14/32 [00:49<01:04,  3.56s/it]
Loading safetensors checkpoint shards:  47% Completed | 15/32 [00:53<01:00,  3.55s/it]
Loading safetensors checkpoint shards:  50% Completed | 16/32 [00:56<00:56,  3.54s/it]
Loading safetensors checkpoint shards:  53% Completed | 17/32 [01:00<00:52,  3.52s/it]
Loading safetensors checkpoint shards:  56% Completed | 18/32 [01:03<00:49,  3.54s/it]
Loading safetensors checkpoint shards:  59% Completed | 19/32 [01:08<00:49,  3.77s/it]
Loading safetensors checkpoint shards:  62% Completed | 20/32 [01:11<00:44,  3.73s/it]
Loading safetensors checkpoint shards:  66% Completed | 21/32 [01:15<00:40,  3.66s/it]
Loading safetensors checkpoint shards:  69% Completed | 22/32 [01:19<00:36,  3.68s/it]
Loading safetensors checkpoint shards:  72% Completed | 23/32 [01:22<00:33,  3.68s/it]
Loading safetensors checkpoint shards:  75% Completed | 24/32 [01:26<00:29,  3.67s/it]
Loading safetensors checkpoint shards:  78% Completed | 25/32 [01:29<00:25,  3.60s/it]
Loading safetensors checkpoint shards:  81% Completed | 26/32 [01:33<00:21,  3.59s/it]
Loading safetensors checkpoint shards:  84% Completed | 27/32 [01:36<00:17,  3.55s/it]
[Stage-0] INFO 03-01 22:55:45 [diffusers_loader.py:301] Loading weights took 97.34 seconds
[Stage-0] INFO 03-01 22:55:46 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 102.764971 seconds
[Stage-0] INFO 03-01 22:55:46 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:46 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:46 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:47 [diffusion_worker.py:142] Worker 5: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:55:47 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:5, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:55:47 [diffusion_worker.py:84] Worker 5: Initialization complete.
[Stage-0] INFO 03-01 22:55:47 [diffusion_worker.py:483] Worker 5: Scheduler loop started.
[Stage-0] INFO 03-01 22:55:47 [diffusion_worker.py:406] Worker 5 ready to receive requests via shared memory
Loading safetensors checkpoint shards:  88% Completed | 28/32 [01:39<00:13,  3.41s/it]
Loading safetensors checkpoint shards:  91% Completed | 29/32 [01:42<00:09,  3.30s/it]
Loading safetensors checkpoint shards:  94% Completed | 30/32 [01:45<00:06,  3.22s/it]
Loading safetensors checkpoint shards:  97% Completed | 31/32 [01:48<00:02,  2.93s/it]
Loading safetensors checkpoint shards: 100% Completed | 32/32 [01:49<00:00,  2.57s/it]
Loading safetensors checkpoint shards: 100% Completed | 32/32 [01:49<00:00,  3.44s/it]

[Stage-0] INFO 03-01 22:55:57 [diffusers_loader.py:301] Loading weights took 110.06 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.57 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.77 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.89 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.90 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.92 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusers_loader.py:301] Loading weights took 110.96 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.113843 seconds
[Stage-0] INFO 03-01 22:55:58 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:58 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:58 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.452985 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.529497 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.653362 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.673151 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.692698 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:142] Worker 0: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:142] Worker 1: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:142] Worker 7: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:118] Model loading took 23.6234 GiB and 115.866355 seconds
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:123] Model runner: Model loaded successfully.
[Stage-0] WARNING 03-01 22:55:59 [diffusion_model_runner.py:141] Model runner: torch.compile failed with error: 'HunyuanImage3Pipeline' object has no attribute 'transformer'. Using eager mode.
[Stage-0] INFO 03-01 22:55:59 [diffusion_model_runner.py:163] Model runner: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:0, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:84] Worker 0: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:483] Worker 0: Scheduler loop started.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:406] Worker 0 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:55:59 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:7, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:84] Worker 7: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:483] Worker 7: Scheduler loop started.
[Stage-0] INFO 03-01 22:55:59 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:1, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:406] Worker 7 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:84] Worker 1: Initialization complete.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:483] Worker 1: Scheduler loop started.
[Stage-0] INFO 03-01 22:55:59 [diffusion_worker.py:406] Worker 1 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:142] Worker 6: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:142] Worker 2: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:142] Worker 4: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:56:00 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:6, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:84] Worker 6: Initialization complete.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:483] Worker 6: Scheduler loop started.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:406] Worker 6 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:56:00 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:2, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:84] Worker 2: Initialization complete.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:483] Worker 2: Scheduler loop started.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:406] Worker 2 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:56:00 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:4, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:84] Worker 4: Initialization complete.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:483] Worker 4: Scheduler loop started.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:406] Worker 4 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:142] Worker 3: Process-scoped GPU memory after model loading: 0.00 GiB.
[Stage-0] INFO 03-01 22:56:00 [manager.py:91] Initializing DiffusionLoRAManager: device=cuda:3, dtype=torch.bfloat16, max_cached_adapters=1, static_lora_path=None
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:84] Worker 3: Initialization complete.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:483] Worker 3: Scheduler loop started.
[Stage-0] INFO 03-01 22:56:00 [diffusion_worker.py:406] Worker 3 ready to receive requests via shared memory
[Stage-0] INFO 03-01 22:56:00 [scheduler.py:41] SyncScheduler initialized result MessageQueue
[Stage-0] INFO 03-01 22:56:00 [diffusion_engine.py:341] dummy run to warm up the model
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
[Stage-0] INFO 03-01 22:56:00 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:00 [kv_transfer_manager.py:517] Request has no ID, cannot receive KV cache
  0%|                                                                                                                                                                                                             | 0/1 [00:00<?, ?it/s][Stage-0] WARNING 03-01 22:56:12 [fused_moe.py:1087] Using default MoE config. Performance might be sub-optimal! Config file not found at /usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/configs/E=64,N=384,device_name=NVIDIA_A100-SXM4-80GB.json
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.45s/it]

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.42s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.68s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.67s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.67s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.67s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.68s/it]
[Stage-0] INFO 03-01 22:56:23 [omni_stage.py:794] Max batch size: 1
INFO 03-01 22:56:23 [omni.py:448] [Orchestrator] Stage-0 reported ready
INFO 03-01 22:56:23 [omni.py:477] [Orchestrator] All stages initialized successfully

============================================================
Generation Configuration:
  Model: /data/HunyuanImage-3.0/
  Inference steps: 50
  Cache backend: None (no acceleration)
  Quantization: None (BF16)
  Parallel configuration: tensor_parallel_size=8, ulysses_degree=1, ring_degree=1, cfg_parallel_size=1, vae_patch_parallel_size=1
  CPU offload: False
  Image size: 1024x1024
============================================================

Adding requests:   0%|                                                                                                                                                                                            | 0/1 [00:00<?, ?it/s[Stage-0] INFO 03-01 22:56:23 [manager.py:566] Deactivating all adapters: 0 layers                                                                             | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 unit/s, output: 0.00 unit/s]
[Stage-0] WARNING 03-01 22:56:23 [kv_transfer_manager.py:421] No connector available for receiving KV cache
[Stage-0] INFO 03-01 22:56:23 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:23 [kv_transfer_manager.py:421] No connector available for receiving KV cache
  0%|                                                                                                                                                                                                            | 0/50 [00:00<?, ?it/s][Stage-0] INFO 03-01 22:56:23 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:23 [kv_transfer_manager.py:421] No connector available for receiving KV cache
[Stage-0] INFO 03-01 22:56:23 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:23 [kv_transfer_manager.py:421] No connector available for receiving KV cache
  0%|                                                                                                                                                                                                            | 0/50 [00:00<?, ?it/s][Stage-0] INFO 03-01 22:56:24 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:24 [kv_transfer_manager.py:421] No connector available for receiving KV cache
[Stage-0] INFO 03-01 22:56:24 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:24 [kv_transfer_manager.py:421] No connector available for receiving KV cache
  0%|                                                                                                                                                                                                            | 0/50 [00:00<?, ?it/s][Stage-0] INFO 03-01 22:56:24 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:24 [kv_transfer_manager.py:421] No connector available for receiving KV cache
  0%|                                                                                                                                                                                                            | 0/50 [00:00<?, ?it/s][Stage-0] INFO 03-01 22:56:24 [manager.py:566] Deactivating all adapters: 0 layers
[Stage-0] WARNING 03-01 22:56:24 [kv_transfer_manager.py:421] No connector available for receiving KV cache
 72%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍                                                      | 36/50 [00:57<00:22,  1.59s/it][Stage-0] INFO 03-01 22:57:23 [shm_broadcast.py:542] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:19<00:00,  1.60s/it]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:20<00:00,  1.61s/it]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:20<00:00,  1.60s/it]


100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:19<00:00,  1.60s/it]


[Stage-0] INFO 03-01 22:57:49 [diffusion_engine.py:80] Generation completed successfully.
[Stage-0] INFO 03-01 22:57:49 [diffusion_engine.py:98] Post-processing completed in 0.0000 seconds
Processed prompts: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [01:26<00:00, 86.04s/img, est. speed stage-0 img/s: 0.00, avg e2e_lat: 0.0ms]
Total generation time: 86.0467 seconds (86046.67 ms)█████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [01:26<00:00, 86.04s/img, est. speed stage-0 img/s: 0.00, avg e2e_lat: 0.0ms]
INFO 03-01 22:57:49 [text_to_image.py:401] Outputs: [OmniRequestOutput(request_id='', finished=True, stage_id=0, final_output_type='image', request_output=[OmniRequestOutput(request_id='0_ae4da804-a11b-4c26-826a-28bd46516550', finished=True, stage_id=None, final_output_type='image', request_output=None, images=[1 PIL Images], prompt={'prompt': 'A brown and white dog is running on the grass', 'negative_prompt': None, 'additional_information': {'global_request_id': ['0_ae4da804-a11b-4c26-826a-28bd46516550']}}, latents=None, metrics={'image_num': 1, 'resolution': 640, 'postprocess_time_ms': 0.023603439331054688}, multimodal_output={})], images=[], prompt=None, latents=None, metrics={}, multimodal_output={})]
Adding requests:   0%|                                                                                                                                                                                            | 0/1 [01:26<?, ?it/s]
Saved generated image to output_image.png
WARNING 03-01 22:57:50 [omni_stage.py:542] Failed to send shutdown to in_q: Socket operation on non-socket
[Stage-0] INFO 03-01 22:57:50 [omni_stage.py:842] Received shutdown signal
/usr/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 8 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/usr/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

@Semmer2 Semmer2 requested a review from hsliuustc0106 as a code owner March 2, 2026 07:18
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 01276c1bea


  custom_pos_emb = self.get_pos_emb(custom_pos_emb, position_ids)

- inputs_embeds = self.model.wte(input_ids)
+ inputs_embeds = self.model.embed_tokens(input_ids)

P1: Avoid calling embed_tokens on non-first PP ranks

forward_call now unconditionally does self.model.embed_tokens(input_ids), but HunyuanImage3Model.__init__ only creates embed_tokens on the first PP rank (or last when tied embeddings); other pipeline-parallel ranks get PPMissingLayer. With pipeline_parallel_size > 1 and default tie_word_embeddings=False, this change makes non-first ranks invoke a missing layer and fail during inference, whereas the previous self.model.wte path existed on every rank.

Useful? React with 👍 / 👎.
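
For reference, the rank guard the review describes could be sketched as below. This is an illustrative standalone mock, not vLLM code: `PPMissingLayer` and the `is_first_rank` flag only mimic vLLM's conventions, and `compute_inputs_embeds` is a hypothetical helper name.

```python
class PPMissingLayer:
    """Stand-in for a layer that lives on a different pipeline-parallel rank."""

    def __call__(self, *args, **kwargs):
        raise RuntimeError("layer is not materialized on this PP rank")


def compute_inputs_embeds(model, input_ids, is_first_rank, prev_stage_embeds=None):
    # Only the first PP rank owns embed_tokens (or the last, when embeddings
    # are tied); later ranks must consume the hidden states handed over by
    # the previous stage instead of invoking a PPMissingLayer.
    if is_first_rank:
        return model.embed_tokens(input_ids)
    return prev_stage_embeds
```

Since the model currently runs only with `pipeline_parallel_size=1`, the unguarded call never reaches a `PPMissingLayer` in practice.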

Semmer2 (Contributor Author) replied:

The current model does not support PP, so PP.is_first_rank is always true; no check is needed for now.

@Semmer2 Semmer2 force-pushed the load_weight_error branch from 01276c1 to 883a04c Compare March 2, 2026 07:26
Move some submodule weight-loading code of HunyuanImage3Pipeline into
AutoWeightsLoader:load_weights to fix a weights-not-initialized error.

Signed-off-by: Semmer2 <semmer@live.cn>
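
The pattern the commit describes might look like the simplified standalone mock below. The class and method names follow vLLM's conventions (`AutoWeightsLoader`, `load_weights`), but the body is an illustrative sketch, not the actual vLLM implementation.

```python
class AutoWeightsLoader:
    """Simplified mock: route each checkpoint tensor to the submodule whose
    attribute name prefixes it, and report every name that was consumed so a
    strict loader can verify that no parameter was left uninitialized."""

    def __init__(self, module):
        self.module = module

    def load_weights(self, weights):
        loaded = set()
        for name, tensor in weights:
            # "vae.encoder.w" -> submodule "vae", remaining name "encoder.w"
            prefix, _, rest = name.partition(".")
            sub = getattr(self.module, prefix, None)
            if sub is not None and hasattr(sub, "load_weights"):
                sub.load_weights([(rest, tensor)])
                loaded.add(name)
        return loaded
```

Returning the full set of loaded names is what satisfies a strict verification pass like the one added in DiffusersPipelineLoader:load_weights; weights loaded ad hoc inside the pipeline would be invisible to that bookkeeping and flagged as uninitialized.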
@Semmer2 Semmer2 force-pushed the load_weight_error branch from 883a04c to 86bbf58 Compare March 2, 2026 07:32
@princepride princepride added the ready label to trigger buildkite CI label Mar 2, 2026
@hsliuustc0106 hsliuustc0106 enabled auto-merge (squash) March 2, 2026 08:30
@hsliuustc0106 hsliuustc0106 merged commit 1ca198e into vllm-project:main Mar 2, 2026
6 of 7 checks passed