
[Bugfix] fix kernel error for qwen3-omni #1602

Merged
hsliuustc0106 merged 1 commit into vllm-project:main from R2-Y:benchmark_bugfix
Mar 2, 2026

Conversation

@R2-Y
Contributor

@R2-Y R2-Y commented Mar 2, 2026


Purpose

When vLLM groups requests into a batch, it builds sampling metadata in which prompt_token_ids is a tensor of shape [num_reqs, max_prompt_len]. Requests whose prompts are shorter than max_prompt_len are padded up to the batch's maximum prompt length, with the model's vocab_size used as the padding value. For multi-stage models, each stage has its own vocab_size, and in Qwen3-Omni the talker incorrectly used the thinker's vocab_size during the sampling phase, causing an out-of-bounds computation error. This PR clamps the padding value of prompt_token_ids to the correct vocab size for each stage.

This should help resolve #1520 and #1532.
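
To make the idea concrete, here is a minimal sketch of the padding-and-clamp logic; it is illustrative only, not the actual vLLM-Omni code path (the helper name pad_and_clamp_prompt_token_ids, the pad_value argument, and the example vocab sizes are assumptions made for the example):

import torch

def pad_and_clamp_prompt_token_ids(prompts: list[list[int]],
                                   pad_value: int,
                                   stage_vocab_size: int) -> torch.Tensor:
    # Build the [num_reqs, max_prompt_len] tensor used in sampling metadata,
    # filling positions past each prompt's length with pad_value
    # (vLLM uses the model's vocab_size as this filler).
    num_reqs = len(prompts)
    max_prompt_len = max(len(p) for p in prompts)
    padded = torch.full((num_reqs, max_prompt_len), pad_value, dtype=torch.long)
    for i, p in enumerate(prompts):
        padded[i, : len(p)] = torch.tensor(p, dtype=torch.long)
    # The fix in spirit: if pad_value came from another stage's larger
    # vocabulary (e.g. the thinker's), clamp it so anything indexed by
    # token id stays in range for this stage (e.g. the talker).
    return padded.clamp_(max=stage_vocab_size)

# Made-up vocab sizes, only to show the clamp taking effect.
talker_ids = pad_and_clamp_prompt_token_ids([[1, 2, 3], [4]],
                                            pad_value=151936,
                                            stage_vocab_size=4096)
assert int(talker_ids.max()) <= 4096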

Test Plan

bash benchmarks/qwen3-omni/vllm_omni/eval_qwen3_moe_omni.sh

Test Result

[benchmark result screenshots]
Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts and test commands, and state the reasons if your code does not require additional test scripts. For test file guidelines, please check the test style doc.
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


@R2-Y R2-Y requested a review from hsliuustc0106 as a code owner March 2, 2026 08:39

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e71b4753fe

@R2-Y
Contributor Author

R2-Y commented Mar 2, 2026

@amy-why-3459 @hsliuustc0106 PTAL

@R2-Y R2-Y force-pushed the benchmark_bugfix branch from e71b475 to 1dff83e Compare March 2, 2026 09:02
@R2-Y R2-Y changed the title fix kernel error for qwen3-omni [Bugfix] fix kernel error for qwen3-omni Mar 2, 2026
Collaborator

@hsliuustc0106 hsliuustc0106 left a comment

Is this the root cause of the problem, or just a workaround?

runtime:
  devices: "1"
-  max_batch_size: 64
+  max_batch_size: 32
Collaborator

Why do we need to change the config?

Contributor Author

We only use two cards now; with batch size = 64, code2wav runs out of memory in the qwen3-omni convolution computation.
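
For a rough sense of why batch size 64 tips code2wav over the edge (the shapes and numbers below are made up for illustration, not taken from the model config): the activation of a single conv1d call grows linearly with the batch dimension, so halving the batch roughly halves the peak activation memory.

import torch

def conv1d_activation_gib(batch: int, channels: int, samples: int,
                          dtype: torch.dtype = torch.bfloat16) -> float:
    # Size of one [batch, channels, samples] activation tensor in GiB.
    return batch * channels * samples * torch.finfo(dtype).bits / 8 / 2**30

# Hypothetical shapes, only to show how the cost scales with batch size.
for bsz in (64, 32):
    print(f"batch={bsz}: ~{conv1d_activation_gib(bsz, 1024, 24000):.2f} GiB per activation")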

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
@R2-Y R2-Y force-pushed the benchmark_bugfix branch from 1dff83e to 9098756 Compare March 2, 2026 09:06
@R2-Y
Contributor Author

R2-Y commented Mar 2, 2026

Is this the root cause of the problem, or just a workaround?

root cause

@hsliuustc0106 hsliuustc0106 added the ready label (to trigger buildkite CI) Mar 2, 2026
@hsliuustc0106 hsliuustc0106 merged commit f082f6e into vllm-project:main Mar 2, 2026
7 checks passed
yJader pushed a commit to omni-nicelab/vllm-omni-batching that referenced this pull request Mar 3, 2026
Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

Signed-off-by: jader <yjader@foxmail.com>
yJader pushed a commit to omni-nicelab/vllm-omni-batching that referenced this pull request Mar 3, 2026
Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

Signed-off-by: jader <yjader@foxmail.com>
yJader pushed a commit to omni-nicelab/vllm-omni-batching that referenced this pull request Mar 3, 2026
Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
@pengchengneo

I still cannot run this benchmark on 2× H100. My test command is

uv run bash vllm-omni/benchmarks/qwen3-omni/vllm_omni/eval_qwen3_moe_omni.sh

env is

CUDA used to build PyTorch   : 12.8
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.12.12 (main, Feb 12 2026, 00:42:14) [Clang 21.1.4 ] (64-bit runtime)
Python platform              : Linux-6.8.0-1015-gcp-x86_64-with-glibc2.35

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : Could not collect
CUDA_MODULE_LOADING set to   :
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3

Nvidia driver version        : 535.216.01
cuDNN version                : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian

Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.6.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.18.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.4.0
[pip3] nvidia-cutlass-dsl-libs-base==4.4.0
[pip3] nvidia-ml-py==13.590.48
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvshmem-cu12==3.4.5
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] onnxruntime==1.24.2
[pip3] pyzmq==27.1.0
[pip3] torch==2.10.0
[pip3] torchaudio==2.10.0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.25.0
[pip3] transformers==4.57.6
[pip3] triton==3.6.0
[conda] numpy                     2.1.2                    pypi_0    pypi

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
vLLM Version                 : 0.16.0
vLLM-Omni Version            : 0.16.1.dev13+ge7f2c8870 (git sha: e7f2c8870)
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    0-51    0               N/A
GPU1    NV18     X      0-51    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

==============================
     Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_gcpuser
LD_LIBRARY_PATH=/home/gcpuser/.venv/lib/python3.12/site-packages/cv2/../../lib64:


error is

Adding requests:   0%| | 0/100 [00:00<?, ?it/s, est. speed input: 0.00 unit/s, output: 0.00 unit/s]
(Worker pid=25171) [Stage-0] WARNING 03-03 08:58:08 [gpu_model_runner.py:332] additional_information on request data is deprecated, use model_intermediate_buffer
(Worker pid=25180) [Stage-1] WARNING 03-03 08:58:47 [gpu_model_runner.py:332] additional_information on request data is deprecated, use model_intermediate_buffer
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] WARNING 03-03 08:58:47 [gpu_model_runner.py:1297] _merge_additional_information_update is deprecated, use _update_intermediate_buffer
[... the same Stage-1 "Multimodal token idx changed!" line repeated dozens of times between 08:58:47 and 08:58:48 ...]
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:31 [mrope.py:345] Multimodal token idx changed!
[... the same Stage-2 "Multimodal token idx changed!" line repeated dozens of times between 09:00:31 and 09:00:35 ...]
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] WorkerProc hit an exception.
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] Traceback (most recent call last):
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 858, in worker_busy_loop
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     output = func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/worker_base.py", line 361, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.worker.execute_model(scheduler_output)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 652, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     output = self.model_runner.execute_model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_generation_model_runner.py", line 280, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     outputs = self._run_generation_model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_generation_model_runner.py", line 442, in _run_generation_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._model_forward(**kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_model_runner.py", line 1263, in _model_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     model_output = super()._model_forward(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                    ^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3047, in _model_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni.py", line 362, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     audio_tensors = self.generate_audio(codes, voice_type, seq_token_counts)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni.py", line 482, in generate_audio
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     audio_tensors = self.code2wav.chunked_decode(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni_code2wav.py", line 194, in chunked_decode
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     wav_chunk = self(codes_chunk)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                 ^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni_code2wav.py", line 154, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     wav = block(wav)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]           ^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3700, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     hidden = block(hidden)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3675, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     hidden_state = self.conv1(hidden_state)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                    ^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3242, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.conv(hidden_state).contiguous()
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 375, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._conv_forward(input, self.weight, self.bias)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 370, in _conv_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return F.conv1d(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] [... the identical traceback and OutOfMemoryError are logged a second time ...]
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [dump_input.py:72] Dumping input data for V1 LLM engine (v0.16.0) with config: model='Qwen/Qwen3-Omni-30B-A3B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen3-Omni-30B-A3B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=65536, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Qwen/Qwen3-Omni-30B-A3B-Instruct, enable_prefix_caching=False, enable_chunked_prefill=False, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.NONE: 0>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['all'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [1000000], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False, 'fuse_act_padding': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': False}, 'local_cache_dir': None, 'fast_moe_cold_start': True, 'static_all_moe_layers': []},
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [dump_input.py:79] Dumping scheduler output for model execution: SchedulerOutput(scheduled_new_reqs=[NewRequestData(req_id=2-a1e3a1ee,prompt_token_ids_len=9088,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=3-85532143,prompt_token_ids_len=8256,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=4-afb79101,prompt_token_ids_len=19648,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=5-895d2783,prompt_token_ids_len=19936,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=6-b71539c6,prompt_token_ids_len=18352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=7-8da2cea8,prompt_token_ids_len=17168,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, 
repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=8-84c8ee08,prompt_token_ids_len=22816,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=9-91242bc0,prompt_token_ids_len=18368,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=10-b89dc74b,prompt_token_ids_len=25792,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=11-affc15a1,prompt_token_ids_len=44080,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=12-a682d12e,prompt_token_ids_len=864,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, 
structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=13-820a3c92,prompt_token_ids_len=816,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=14-97980c98,prompt_token_ids_len=16720,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=15-bae74e96,prompt_token_ids_len=18272,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=16-954a31fc,prompt_token_ids_len=19504,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=17-a57ee92e,prompt_token_ids_len=21808,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=18-a51777f6,prompt_token_ids_len=11040,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, 
top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=19-93878839,prompt_token_ids_len=10352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=20-925720bc,prompt_token_ids_len=13072,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=21-beeb442d,prompt_token_ids_len=12688,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=22-929fa0c6,prompt_token_ids_len=16496,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=23-af081677,prompt_token_ids_len=9824,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, 
extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=24-9f12d844,prompt_token_ids_len=14080,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=25-9dffcef9,prompt_token_ids_len=14544,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=26-93e13601,prompt_token_ids_len=13232,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=27-b304ed7d,prompt_token_ids_len=17360,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=28-aa1f0254,prompt_token_ids_len=19040,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=29-a9ac2184,prompt_token_ids_len=17360,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, 
min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=30-bcc53f50,prompt_token_ids_len=12352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=31-ab9e0056,prompt_token_ids_len=15744,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None)], scheduled_cached_reqs=CachedRequestData(req_ids=[],resumed_req_ids=set(),new_token_ids_lens=[],all_token_ids_lens={},new_block_ids=[],num_computed_tokens=[],num_output_tokens=[]), num_scheduled_tokens={29-a9ac2184: 17360, 24-9f12d844: 14080, 6-b71539c6: 18352, 12-a682d12e: 864, 11-affc15a1: 44080, 22-929fa0c6: 16496, 2-a1e3a1ee: 9088, 20-925720bc: 13072, 15-bae74e96: 18272, 18-a51777f6: 11040, 26-93e13601: 13232, 17-a57ee92e: 21808, 9-91242bc0: 18368, 14-97980c98: 16720, 25-9dffcef9: 14544, 8-84c8ee08: 22816, 10-b89dc74b: 25792, 31-ab9e0056: 15744, 27-b304ed7d: 17360, 13-820a3c92: 816, 5-895d2783: 19936, 23-af081677: 9824, 7-8da2cea8: 17168, 3-85532143: 8256, 28-aa1f0254: 19040, 16-954a31fc: 19504, 19-93878839: 10352, 30-bcc53f50: 12352, 21-beeb442d: 12688, 4-afb79101: 19648}, total_num_scheduled_tokens=478672, scheduled_spec_decode_tokens={}, scheduled_encoder_inputs={}, num_common_prefix_blocks=[], finished_req_ids=['1-b03aa283', '0-ad4bb8ea'], free_encoder_mm_hashes=[], preempted_req_ids=[], has_structured_output_requests=false, pending_structured_output_tokens=false, num_invalid_spec_tokens=null, kv_connector_metadata=null, ec_connector_metadata=null)
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] Traceback (most recent call last):
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 999, in run_engine_core
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     engine_core.run_busy_loop()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1026, in run_busy_loop
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     self._process_engine_step()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1060, in _process_engine_step
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 411, in step
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     model_output = future.result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 81, in result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     return super().result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]            ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     return self.__get_result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]            ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     raise self._exception
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 85, in wait_for_response
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     response = self.aggregate(get_response())
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 358, in get_response
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     raise RuntimeError(
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] RuntimeError: Worker failed with error 'CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)', please check the stack trace above for the root cause
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:37 [multiproc_executor.py:732] Parent process exited, terminating worker
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:37 [multiproc_executor.py:785] WorkerProc shutting down.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Failed on batch ['0_f466866f-9827-43f3-8ffc-4912e278d43c', '1_4ce9a33d-9b61-4be6-ad60-fb62c8aaede6', '2_6998bc1a-3433-47a8-a613-fdfe7af63d7f', '3_48e34033-ac98-4258-8040-357bb769c28a', '4_2c1f9ab1-6853-45d9-9560-2361adac7dfe', '5_6f47dfa0-8d49-4bd3-b415-5e1226ee32d8', '6_d43c7f13-b4dd-4f16-a60f-3c67a1bf5daa', '7_07cf9d04-75f8-405a-9b80-a07fea5d491d', '8_ba82c748-1413-4797-95b7-e824fad502b3', '9_51f62d85-7a65-40b6-94b5-c7c1a5fd5e63', '10_881c6e00-cabf-4998-8516-384ea7a9294c', '11_5c223695-b87e-4e45-94a2-93f87982dc39', '12_ed61520f-dbc3-4e24-9e87-e008dd6dc56b', '13_8bed2151-3adf-4519-84cf-4bb64b10bc30', '14_139d61cf-8156-4649-9db1-a3f3fe1a5463', '15_7f7a1915-b832-4acf-af37-1f807b856bbb', '16_fa7d480d-f4d1-424e-bfb0-d9eb7d52e801', '17_1c01d312-f56a-455c-b72a-d8c1609834bb', '18_a7911dbb-e798-483e-a2d5-c7201e320aab', '19_8b3693ec-1e33-40cc-ac06-8f935db675fc', '20_c2787457-12d2-4e9a-a64c-dc663e347b42', '21_ed1f34b4-1aee-4e2b-bb58-ce6c2dc09cf8', '22_bc5a537b-23e1-4023-8b80-a52e62fac0c1', '23_495aef1f-8ffe-4055-b858-04ba0c79deab', '24_6c5add7b-ea50-461e-9bfd-497eb0a80201', '25_0f4403ad-df29-45c4-9b5a-52b3798ec679', '26_0f8fa55d-d828-40e8-aeb2-3d50cc7f528a', '27_e1e1b215-c4a1-441a-a49b-e173206215a4', '28_a407e4f6-aca4-491c-81d5-09deebbad96d', '29_e3ebe9bd-7193-403f-b58e-6668eca29a65', '30_fe07d131-9364-4e25-9b5f-879a04f548a9', '31_b9349e5a-b4dc-44b0-a40c-5007b216c7f2']: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Traceback (most recent call last):
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_stage.py", line 1002, in _stage_worker
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     results = stage_engine.generate(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 449, in generate
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self._run_completion(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1760, in _run_completion
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     return self._run_engine(use_tqdm=use_tqdm)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_llm.py", line 217, in _run_engine
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     step_outputs = self.llm_engine.step()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]                    ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 294, in step
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self.engine_core.get_output()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 728, in get_output
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise self._format_exception(outputs) from None
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 0_f466866f-9827-43f3-8ffc-4912e278d43c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 1_4ce9a33d-9b61-4be6-ad60-fb62c8aaede6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 2_6998bc1a-3433-47a8-a613-fdfe7af63d7f: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 3_48e34033-ac98-4258-8040-357bb769c28a: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 4_2c1f9ab1-6853-45d9-9560-2361adac7dfe: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 5_6f47dfa0-8d49-4bd3-b415-5e1226ee32d8: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 6_d43c7f13-b4dd-4f16-a60f-3c67a1bf5daa: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 7_07cf9d04-75f8-405a-9b80-a07fea5d491d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 8_ba82c748-1413-4797-95b7-e824fad502b3: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 9_51f62d85-7a65-40b6-94b5-c7c1a5fd5e63: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 10_881c6e00-cabf-4998-8516-384ea7a9294c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 11_5c223695-b87e-4e45-94a2-93f87982dc39: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 12_ed61520f-dbc3-4e24-9e87-e008dd6dc56b: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 13_8bed2151-3adf-4519-84cf-4bb64b10bc30: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 14_139d61cf-8156-4649-9db1-a3f3fe1a5463: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 15_7f7a1915-b832-4acf-af37-1f807b856bbb: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 16_fa7d480d-f4d1-424e-bfb0-d9eb7d52e801: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 17_1c01d312-f56a-455c-b72a-d8c1609834bb: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 18_a7911dbb-e798-483e-a2d5-c7201e320aab: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 19_8b3693ec-1e33-40cc-ac06-8f935db675fc: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 20_c2787457-12d2-4e9a-a64c-dc663e347b42: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 21_ed1f34b4-1aee-4e2b-bb58-ce6c2dc09cf8: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 22_bc5a537b-23e1-4023-8b80-a52e62fac0c1: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 23_495aef1f-8ffe-4055-b858-04ba0c79deab: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 24_6c5add7b-ea50-461e-9bfd-497eb0a80201: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 25_0f4403ad-df29-45c4-9b5a-52b3798ec679: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 26_0f8fa55d-d828-40e8-aeb2-3d50cc7f528a: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 27_e1e1b215-c4a1-441a-a49b-e173206215a4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 28_a407e4f6-aca4-491c-81d5-09deebbad96d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 29_e3ebe9bd-7193-403f-b58e-6668eca29a65: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 30_fe07d131-9364-4e25-9b5f-879a04f548a9: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 31_b9349e5a-b4dc-44b0-a40c-5007b216c7f2: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Failed on batch ['32_81336837-07cc-43a4-bf05-6f0a8b724181', '33_f167b77c-6ee3-4de4-9a12-9aa649056634', '34_bc1272f6-a1b0-4597-a45f-3ee0431424ad', '35_1fbbd29e-478d-4557-90be-12049cc5d8f7', '36_914b29c4-2d4a-4f87-a877-491377920948', '37_135e3ea0-7859-4ebe-b05b-14fd132d1df4', '38_40603229-e298-4e03-b6b8-498ad8155bd4', '39_dd473ff6-d071-4d43-bbd4-f2a4a1ed1912', '40_9bba8916-6306-4dd0-8555-1a02017d6d8c', '41_2e6fd3ed-7f6c-42bc-bcec-4eeb1dc7163c', '42_762aeea3-15d5-448e-84cb-66deae54cafd', '43_6d540138-9eb7-4b93-9b56-6ed91dc457e1', '44_469d9873-7f04-4d98-955d-70c6f78fc397', '45_dca2cb8e-48cf-42c9-aaad-1c39e3dc8048', '46_08232f2a-a86d-4b8e-9743-13804cd6b6f6', '47_bb334b7b-fb68-4d14-b570-43fc8d024bf0', '48_37b81ccc-79b1-42c9-92a4-051ffe8b8130', '49_b0ae288a-1fd8-4c9e-bdbe-cdf50e13dbe5', '50_e82d16cb-8a2f-4fdc-90a5-0c10bf96d75d', '51_30edadc5-4cbb-436c-94b3-e3d355b6d2c9', '52_9c842fd2-3391-4cfe-8061-04417693166b', '53_e9fc620f-a21c-48d2-9f9a-0c52cea901df', '54_2ed3e162-28f3-4ed9-8d60-6c229ac970d6', '55_868e1f85-6f3d-411f-bb3e-bf8bb14b85b2', '56_248b5369-7a6c-40b2-b872-ed4906e45ba4', '57_f781119e-34f7-431c-b822-3a5855443a0e', '58_5fef2852-ff39-46da-ab9d-721f4d0c05a0', '59_a40ab85d-94e7-406d-932d-a8e9b9e336a5', '60_c05803ca-bd0f-414e-bdf1-4af0fe490f00', '61_ae997d0c-bffc-4840-8fbe-e84222dab179', '62_c46246ec-218d-4463-96a6-615239a9a742', '63_4fb7b0a0-e374-489f-8d30-705f94fa4c93']: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Traceback (most recent call last):
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_stage.py", line 1002, in _stage_worker
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     results = stage_engine.generate(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 449, in generate
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self._run_completion(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1749, in _run_completion
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self._validate_and_add_requests(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1848, in _validate_and_add_requests
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise e
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1837, in _validate_and_add_requests
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     request_id = self._add_request(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]                  ^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1889, in _add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.llm_engine.add_request(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 268, in add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.engine_core.add_request(request)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 761, in add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self._send_input(EngineCoreRequestType.ADD, request)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 734, in _send_input
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.ensure_alive()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 576, in ensure_alive
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise EngineDeadError()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 32_81336837-07cc-43a4-bf05-6f0a8b724181: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 33_f167b77c-6ee3-4de4-9a12-9aa649056634: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 34_bc1272f6-a1b0-4597-a45f-3ee0431424ad: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 35_1fbbd29e-478d-4557-90be-12049cc5d8f7: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 36_914b29c4-2d4a-4f87-a877-491377920948: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 37_135e3ea0-7859-4ebe-b05b-14fd132d1df4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 38_40603229-e298-4e03-b6b8-498ad8155bd4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 39_dd473ff6-d071-4d43-bbd4-f2a4a1ed1912: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 40_9bba8916-6306-4dd0-8555-1a02017d6d8c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 41_2e6fd3ed-7f6c-42bc-bcec-4eeb1dc7163c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 42_762aeea3-15d5-448e-84cb-66deae54cafd: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 43_6d540138-9eb7-4b93-9b56-6ed91dc457e1: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 44_469d9873-7f04-4d98-955d-70c6f78fc397: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 45_dca2cb8e-48cf-42c9-aaad-1c39e3dc8048: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 46_08232f2a-a86d-4b8e-9743-13804cd6b6f6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 47_bb334b7b-fb68-4d14-b570-43fc8d024bf0: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 48_37b81ccc-79b1-42c9-92a4-051ffe8b8130: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 49_b0ae288a-1fd8-4c9e-bdbe-cdf50e13dbe5: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 50_e82d16cb-8a2f-4fdc-90a5-0c10bf96d75d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 51_30edadc5-4cbb-436c-94b3-e3d355b6d2c9: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 52_9c842fd2-3391-4cfe-8061-04417693166b: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 53_e9fc620f-a21c-48d2-9f9a-0c52cea901df: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 54_2ed3e162-28f3-4ed9-8d60-6c229ac970d6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 55_868e1f85-6f3d-411f-bb3e-bf8bb14b85b2: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 56_248b5369-7a6c-40b2-b872-ed4906e45ba4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 57_f781119e-34f7-431c-b822-3a5855443a0e: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 58_5fef2852-ff39-46da-ab9d-721f4d0c05a0: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 59_a40ab85d-94e7-406d-932d-a8e9b9e336a5: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 60_c05803ca-bd0f-414e-bdf1-4af0fe490f00: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 61_ae997d0c-bffc-4840-8fbe-e84222dab179: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 62_c46246ec-218d-4463-96a6-615239a9a742: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 63_4fb7b0a0-e374-489f-8d30-705f94fa4c93: EngineCore encountered an issue. See stack trace (above) for the root cause.
(EngineCore_DP0 pid=24688) Process EngineCore_DP0:
(EngineCore_DP0 pid=24688) Traceback (most recent call last):
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=24688)     self.run()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=24688)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1010, in run_engine_core
(EngineCore_DP0 pid=24688)     raise e
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 999, in run_engine_core
(EngineCore_DP0 pid=24688)     engine_core.run_busy_loop()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1026, in run_busy_loop
(EngineCore_DP0 pid=24688)     self._process_engine_step()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1060, in _process_engine_step
(EngineCore_DP0 pid=24688)     outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=24688)                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 411, in step
(EngineCore_DP0 pid=24688)     model_output = future.result()
(EngineCore_DP0 pid=24688)                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 81, in result
(EngineCore_DP0 pid=24688)     return super().result()
(EngineCore_DP0 pid=24688)            ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=24688)     return self.__get_result()
(EngineCore_DP0 pid=24688)            ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=24688)     raise self._exception
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 85, in wait_for_response
(EngineCore_DP0 pid=24688)     response = self.aggregate(get_response())
(EngineCore_DP0 pid=24688)                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 358, in get_response
(EngineCore_DP0 pid=24688)     raise RuntimeError(
(EngineCore_DP0 pid=24688) RuntimeError: Worker failed with error 'CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)', please check the stack trace above for the root cause
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!

@R2-Y
Copy link
Contributor Author

R2-Y commented Mar 3, 2026

I still cannot run this benchmark on 2× H100. My test command is:

uv run bash vllm-omni/benchmarks/qwen3-omni/vllm_omni/eval_qwen3_moe_omni.sh
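Since the Stage-2 worker dies with a CUDA OOM (see the tracebacks above and the error log below) whose message suggests setting PYTORCH_ALLOC_CONF=expandable_segments:True, here is a minimal sketch of how I would retry with that allocator setting. The script path is the one from my command above; whether this frees enough memory here is untested, and it only mitigates fragmentation rather than fixing the underlying failure being debugged:

# Sketch: retry the benchmark with the allocator setting from the OOM message.
# expandable_segments only reduces fragmentation; it does not change the
# underlying failure, so treat this purely as a diagnostic retry.
export PYTORCH_ALLOC_CONF=expandable_segments:True
uv run bash vllm-omni/benchmarks/qwen3-omni/vllm_omni/eval_qwen3_moe_omni.sh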

My environment is:

CUDA used to build PyTorch   : 12.8
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.12.12 (main, Feb 12 2026, 00:42:14) [Clang 21.1.4 ] (64-bit runtime)
Python platform              : Linux-6.8.0-1015-gcp-x86_64-with-glibc2.35

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : Could not collect
CUDA_MODULE_LOADING set to   :
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3

Nvidia driver version        : 535.216.01
cuDNN version                : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian

==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.6.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.18.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.4.0
[pip3] nvidia-cutlass-dsl-libs-base==4.4.0
[pip3] nvidia-ml-py==13.590.48
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvshmem-cu12==3.4.5
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] onnxruntime==1.24.2
[pip3] pyzmq==27.1.0
[pip3] torch==2.10.0
[pip3] torchaudio==2.10.0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.25.0
[pip3] transformers==4.57.6
[pip3] triton==3.6.0
[conda] numpy                     2.1.2                    pypi_0    pypi

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
vLLM Version                 : 0.16.0
vLLM-Omni Version            : 0.16.1.dev13+ge7f2c8870 (git sha: e7f2c8870)
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    0-51    0               N/A
GPU1    NV18     X      0-51    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

==============================
     Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_gcpuser
LD_LIBRARY_PATH=/home/gcpuser/.venv/lib/python3.12/site-packages/cv2/../../lib64:


The error is:

Adding requests:   0%|          | 0/100 [00:00<?, ?it/s, est. speed input: 0.00 unit/s, output: 0.00 unit/s]
(Worker pid=25171) [Stage-0] WARNING 03-03 08:58:08 [gpu_model_runner.py:332] additional_information on request data is deprecated, use model_intermediate_buffer
(Worker pid=25180) [Stage-1] WARNING 03-03 08:58:47 [gpu_model_runner.py:332] additional_information on request data is deprecated, use model_intermediate_buffer
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] WARNING 03-03 08:58:47 [gpu_model_runner.py:1297] _merge_additional_information_update is deprecated, use _update_intermediate_buffer
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:47 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 08:58:48 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:31 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:31 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:31 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:31 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:32 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:33 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:34 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:35 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] WorkerProc hit an exception.
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] Traceback (most recent call last):
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 858, in worker_busy_loop
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     output = func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/worker_base.py", line 361, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.worker.execute_model(scheduler_output)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 652, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     output = self.model_runner.execute_model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return func(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_generation_model_runner.py", line 280, in execute_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     outputs = self._run_generation_model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_generation_model_runner.py", line 442, in _run_generation_model
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._model_forward(**kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/worker/gpu_model_runner.py", line 1263, in _model_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     model_output = super()._model_forward(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                    ^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3047, in _model_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.model(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni.py", line 362, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     audio_tensors = self.generate_audio(codes, voice_type, seq_token_counts)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni.py", line 482, in generate_audio
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     audio_tensors = self.code2wav.chunked_decode(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni_code2wav.py", line 194, in chunked_decode
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     wav_chunk = self(codes_chunk)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                 ^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/vllm-omni/vllm_omni/model_executor/models/qwen3_omni/qwen3_omni_code2wav.py", line 154, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     wav = block(wav)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]           ^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3700, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     hidden = block(hidden)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]              ^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3675, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     hidden_state = self.conv1(hidden_state)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]                    ^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py", line 3242, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self.conv(hidden_state).contiguous()
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._call_impl(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return forward_call(*args, **kwargs)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 375, in forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return self._conv_forward(input, self.weight, self.bias)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 370, in _conv_forward
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]     return F.conv1d(
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]            ^^^^^^^^^
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(Worker pid=25166) [Stage-2] ERROR 03-03 09:00:37 [multiproc_executor.py:863]
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [dump_input.py:72] Dumping input data for V1 LLM engine (v0.16.0) with config: model='Qwen/Qwen3-Omni-30B-A3B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen3-Omni-30B-A3B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=65536, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Qwen/Qwen3-Omni-30B-A3B-Instruct, enable_prefix_caching=False, enable_chunked_prefill=False, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.NONE: 0>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['all'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [1000000], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False, 'fuse_act_padding': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': False}, 'local_cache_dir': None, 'fast_moe_cold_start': True, 'static_all_moe_layers': []},
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [dump_input.py:79] Dumping scheduler output for model execution: SchedulerOutput(scheduled_new_reqs=[NewRequestData(req_id=2-a1e3a1ee,prompt_token_ids_len=9088,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=3-85532143,prompt_token_ids_len=8256,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=4-afb79101,prompt_token_ids_len=19648,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=5-895d2783,prompt_token_ids_len=19936,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=6-b71539c6,prompt_token_ids_len=18352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=7-8da2cea8,prompt_token_ids_len=17168,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, 
repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=8-84c8ee08,prompt_token_ids_len=22816,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=9-91242bc0,prompt_token_ids_len=18368,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=10-b89dc74b,prompt_token_ids_len=25792,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=11-affc15a1,prompt_token_ids_len=44080,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=12-a682d12e,prompt_token_ids_len=864,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, 
structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=13-820a3c92,prompt_token_ids_len=816,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=14-97980c98,prompt_token_ids_len=16720,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=15-bae74e96,prompt_token_ids_len=18272,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=16-954a31fc,prompt_token_ids_len=19504,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=17-a57ee92e,prompt_token_ids_len=21808,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=18-a51777f6,prompt_token_ids_len=11040,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, 
top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=19-93878839,prompt_token_ids_len=10352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=20-925720bc,prompt_token_ids_len=13072,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=21-beeb442d,prompt_token_ids_len=12688,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=22-929fa0c6,prompt_token_ids_len=16496,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=23-af081677,prompt_token_ids_len=9824,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, 
extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=24-9f12d844,prompt_token_ids_len=14080,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=25-9dffcef9,prompt_token_ids_len=14544,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=26-93e13601,prompt_token_ids_len=13232,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=27-b304ed7d,prompt_token_ids_len=17360,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=28-aa1f0254,prompt_token_ids_len=19040,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=29-a9ac2184,prompt_token_ids_len=17360,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, 
min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=30-bcc53f50,prompt_token_ids_len=12352,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None), NewRequestData(req_id=31-ab9e0056,prompt_token_ids_len=15744,prefill_token_ids_len=None,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=42, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=65536, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=(),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None)], scheduled_cached_reqs=CachedRequestData(req_ids=[],resumed_req_ids=set(),new_token_ids_lens=[],all_token_ids_lens={},new_block_ids=[],num_computed_tokens=[],num_output_tokens=[]), num_scheduled_tokens={29-a9ac2184: 17360, 24-9f12d844: 14080, 6-b71539c6: 18352, 12-a682d12e: 864, 11-affc15a1: 44080, 22-929fa0c6: 16496, 2-a1e3a1ee: 9088, 20-925720bc: 13072, 15-bae74e96: 18272, 18-a51777f6: 11040, 26-93e13601: 13232, 17-a57ee92e: 21808, 9-91242bc0: 18368, 14-97980c98: 16720, 25-9dffcef9: 14544, 8-84c8ee08: 22816, 10-b89dc74b: 25792, 31-ab9e0056: 15744, 27-b304ed7d: 17360, 13-820a3c92: 816, 5-895d2783: 19936, 23-af081677: 9824, 7-8da2cea8: 17168, 3-85532143: 8256, 28-aa1f0254: 19040, 16-954a31fc: 19504, 19-93878839: 10352, 30-bcc53f50: 12352, 21-beeb442d: 12688, 4-afb79101: 19648}, total_num_scheduled_tokens=478672, scheduled_spec_decode_tokens={}, scheduled_encoder_inputs={}, num_common_prefix_blocks=[], finished_req_ids=['1-b03aa283', '0-ad4bb8ea'], free_encoder_mm_hashes=[], preempted_req_ids=[], has_structured_output_requests=false, pending_structured_output_tokens=false, num_invalid_spec_tokens=null, kv_connector_metadata=null, ec_connector_metadata=null)
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] Traceback (most recent call last):
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 999, in run_engine_core
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     engine_core.run_busy_loop()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1026, in run_busy_loop
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     self._process_engine_step()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1060, in _process_engine_step
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 411, in step
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     model_output = future.result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 81, in result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     return super().result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]            ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     return self.__get_result()
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]            ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     raise self._exception
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 85, in wait_for_response
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     response = self.aggregate(get_response())
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 358, in get_response
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008]     raise RuntimeError(
(EngineCore_DP0 pid=24688) [Stage-2] ERROR 03-03 09:00:37 [core.py:1008] RuntimeError: Worker failed with error 'CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)', please check the stack trace above for the root cause
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:37 [multiproc_executor.py:732] Parent process exited, terminating worker
(Worker pid=25166) [Stage-2] INFO 03-03 09:00:37 [multiproc_executor.py:785] WorkerProc shutting down.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Failed on batch ['0_f466866f-9827-43f3-8ffc-4912e278d43c', '1_4ce9a33d-9b61-4be6-ad60-fb62c8aaede6', '2_6998bc1a-3433-47a8-a613-fdfe7af63d7f', '3_48e34033-ac98-4258-8040-357bb769c28a', '4_2c1f9ab1-6853-45d9-9560-2361adac7dfe', '5_6f47dfa0-8d49-4bd3-b415-5e1226ee32d8', '6_d43c7f13-b4dd-4f16-a60f-3c67a1bf5daa', '7_07cf9d04-75f8-405a-9b80-a07fea5d491d', '8_ba82c748-1413-4797-95b7-e824fad502b3', '9_51f62d85-7a65-40b6-94b5-c7c1a5fd5e63', '10_881c6e00-cabf-4998-8516-384ea7a9294c', '11_5c223695-b87e-4e45-94a2-93f87982dc39', '12_ed61520f-dbc3-4e24-9e87-e008dd6dc56b', '13_8bed2151-3adf-4519-84cf-4bb64b10bc30', '14_139d61cf-8156-4649-9db1-a3f3fe1a5463', '15_7f7a1915-b832-4acf-af37-1f807b856bbb', '16_fa7d480d-f4d1-424e-bfb0-d9eb7d52e801', '17_1c01d312-f56a-455c-b72a-d8c1609834bb', '18_a7911dbb-e798-483e-a2d5-c7201e320aab', '19_8b3693ec-1e33-40cc-ac06-8f935db675fc', '20_c2787457-12d2-4e9a-a64c-dc663e347b42', '21_ed1f34b4-1aee-4e2b-bb58-ce6c2dc09cf8', '22_bc5a537b-23e1-4023-8b80-a52e62fac0c1', '23_495aef1f-8ffe-4055-b858-04ba0c79deab', '24_6c5add7b-ea50-461e-9bfd-497eb0a80201', '25_0f4403ad-df29-45c4-9b5a-52b3798ec679', '26_0f8fa55d-d828-40e8-aeb2-3d50cc7f528a', '27_e1e1b215-c4a1-441a-a49b-e173206215a4', '28_a407e4f6-aca4-491c-81d5-09deebbad96d', '29_e3ebe9bd-7193-403f-b58e-6668eca29a65', '30_fe07d131-9364-4e25-9b5f-879a04f548a9', '31_b9349e5a-b4dc-44b0-a40c-5007b216c7f2']: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Traceback (most recent call last):
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_stage.py", line 1002, in _stage_worker
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     results = stage_engine.generate(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 449, in generate
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self._run_completion(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1760, in _run_completion
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     return self._run_engine(use_tqdm=use_tqdm)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_llm.py", line 217, in _run_engine
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     step_outputs = self.llm_engine.step()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]                    ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 294, in step
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self.engine_core.get_output()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 728, in get_output
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise self._format_exception(outputs) from None
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 0_f466866f-9827-43f3-8ffc-4912e278d43c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 1_4ce9a33d-9b61-4be6-ad60-fb62c8aaede6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 2_6998bc1a-3433-47a8-a613-fdfe7af63d7f: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 3_48e34033-ac98-4258-8040-357bb769c28a: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 4_2c1f9ab1-6853-45d9-9560-2361adac7dfe: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 5_6f47dfa0-8d49-4bd3-b415-5e1226ee32d8: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 6_d43c7f13-b4dd-4f16-a60f-3c67a1bf5daa: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 7_07cf9d04-75f8-405a-9b80-a07fea5d491d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 8_ba82c748-1413-4797-95b7-e824fad502b3: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 9_51f62d85-7a65-40b6-94b5-c7c1a5fd5e63: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 10_881c6e00-cabf-4998-8516-384ea7a9294c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 11_5c223695-b87e-4e45-94a2-93f87982dc39: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 12_ed61520f-dbc3-4e24-9e87-e008dd6dc56b: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 13_8bed2151-3adf-4519-84cf-4bb64b10bc30: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 14_139d61cf-8156-4649-9db1-a3f3fe1a5463: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 15_7f7a1915-b832-4acf-af37-1f807b856bbb: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 16_fa7d480d-f4d1-424e-bfb0-d9eb7d52e801: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 17_1c01d312-f56a-455c-b72a-d8c1609834bb: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 18_a7911dbb-e798-483e-a2d5-c7201e320aab: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 19_8b3693ec-1e33-40cc-ac06-8f935db675fc: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 20_c2787457-12d2-4e9a-a64c-dc663e347b42: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 21_ed1f34b4-1aee-4e2b-bb58-ce6c2dc09cf8: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 22_bc5a537b-23e1-4023-8b80-a52e62fac0c1: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 23_495aef1f-8ffe-4055-b858-04ba0c79deab: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 24_6c5add7b-ea50-461e-9bfd-497eb0a80201: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 25_0f4403ad-df29-45c4-9b5a-52b3798ec679: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 26_0f8fa55d-d828-40e8-aeb2-3d50cc7f528a: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 27_e1e1b215-c4a1-441a-a49b-e173206215a4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 28_a407e4f6-aca4-491c-81d5-09deebbad96d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 29_e3ebe9bd-7193-403f-b58e-6668eca29a65: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 30_fe07d131-9364-4e25-9b5f-879a04f548a9: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 31_b9349e5a-b4dc-44b0-a40c-5007b216c7f2: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Failed on batch ['32_81336837-07cc-43a4-bf05-6f0a8b724181', '33_f167b77c-6ee3-4de4-9a12-9aa649056634', '34_bc1272f6-a1b0-4597-a45f-3ee0431424ad', '35_1fbbd29e-478d-4557-90be-12049cc5d8f7', '36_914b29c4-2d4a-4f87-a877-491377920948', '37_135e3ea0-7859-4ebe-b05b-14fd132d1df4', '38_40603229-e298-4e03-b6b8-498ad8155bd4', '39_dd473ff6-d071-4d43-bbd4-f2a4a1ed1912', '40_9bba8916-6306-4dd0-8555-1a02017d6d8c', '41_2e6fd3ed-7f6c-42bc-bcec-4eeb1dc7163c', '42_762aeea3-15d5-448e-84cb-66deae54cafd', '43_6d540138-9eb7-4b93-9b56-6ed91dc457e1', '44_469d9873-7f04-4d98-955d-70c6f78fc397', '45_dca2cb8e-48cf-42c9-aaad-1c39e3dc8048', '46_08232f2a-a86d-4b8e-9743-13804cd6b6f6', '47_bb334b7b-fb68-4d14-b570-43fc8d024bf0', '48_37b81ccc-79b1-42c9-92a4-051ffe8b8130', '49_b0ae288a-1fd8-4c9e-bdbe-cdf50e13dbe5', '50_e82d16cb-8a2f-4fdc-90a5-0c10bf96d75d', '51_30edadc5-4cbb-436c-94b3-e3d355b6d2c9', '52_9c842fd2-3391-4cfe-8061-04417693166b', '53_e9fc620f-a21c-48d2-9f9a-0c52cea901df', '54_2ed3e162-28f3-4ed9-8d60-6c229ac970d6', '55_868e1f85-6f3d-411f-bb3e-bf8bb14b85b2', '56_248b5369-7a6c-40b2-b872-ed4906e45ba4', '57_f781119e-34f7-431c-b822-3a5855443a0e', '58_5fef2852-ff39-46da-ab9d-721f4d0c05a0', '59_a40ab85d-94e7-406d-932d-a8e9b9e336a5', '60_c05803ca-bd0f-414e-bdf1-4af0fe490f00', '61_ae997d0c-bffc-4840-8fbe-e84222dab179', '62_c46246ec-218d-4463-96a6-615239a9a742', '63_4fb7b0a0-e374-489f-8d30-705f94fa4c93']: EngineCore encountered an issue. See stack trace (above) for the root cause.
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] Traceback (most recent call last):
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/vllm-omni/vllm_omni/entrypoints/omni_stage.py", line 1002, in _stage_worker
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     results = stage_engine.generate(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 449, in generate
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     outputs = self._run_completion(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]               ^^^^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1749, in _run_completion
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self._validate_and_add_requests(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1848, in _validate_and_add_requests
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise e
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1837, in _validate_and_add_requests
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     request_id = self._add_request(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]                  ^^^^^^^^^^^^^^^^^^
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 1889, in _add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.llm_engine.add_request(
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 268, in add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.engine_core.add_request(request)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 761, in add_request
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self._send_input(EngineCoreRequestType.ADD, request)
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 734, in _send_input
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     self.ensure_alive()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 576, in ensure_alive
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082]     raise EngineDeadError()
[Stage-2] ERROR 03-03 09:00:37 [omni_stage.py:1082] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 32_81336837-07cc-43a4-bf05-6f0a8b724181: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 33_f167b77c-6ee3-4de4-9a12-9aa649056634: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 34_bc1272f6-a1b0-4597-a45f-3ee0431424ad: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 35_1fbbd29e-478d-4557-90be-12049cc5d8f7: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 36_914b29c4-2d4a-4f87-a877-491377920948: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 37_135e3ea0-7859-4ebe-b05b-14fd132d1df4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 38_40603229-e298-4e03-b6b8-498ad8155bd4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 39_dd473ff6-d071-4d43-bbd4-f2a4a1ed1912: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 40_9bba8916-6306-4dd0-8555-1a02017d6d8c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 41_2e6fd3ed-7f6c-42bc-bcec-4eeb1dc7163c: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 42_762aeea3-15d5-448e-84cb-66deae54cafd: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 43_6d540138-9eb7-4b93-9b56-6ed91dc457e1: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 44_469d9873-7f04-4d98-955d-70c6f78fc397: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 45_dca2cb8e-48cf-42c9-aaad-1c39e3dc8048: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 46_08232f2a-a86d-4b8e-9743-13804cd6b6f6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 47_bb334b7b-fb68-4d14-b570-43fc8d024bf0: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 48_37b81ccc-79b1-42c9-92a4-051ffe8b8130: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 49_b0ae288a-1fd8-4c9e-bdbe-cdf50e13dbe5: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 50_e82d16cb-8a2f-4fdc-90a5-0c10bf96d75d: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 51_30edadc5-4cbb-436c-94b3-e3d355b6d2c9: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 52_9c842fd2-3391-4cfe-8061-04417693166b: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 53_e9fc620f-a21c-48d2-9f9a-0c52cea901df: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 54_2ed3e162-28f3-4ed9-8d60-6c229ac970d6: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 55_868e1f85-6f3d-411f-bb3e-bf8bb14b85b2: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 56_248b5369-7a6c-40b2-b872-ed4906e45ba4: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 57_f781119e-34f7-431c-b822-3a5855443a0e: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 58_5fef2852-ff39-46da-ab9d-721f4d0c05a0: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 59_a40ab85d-94e7-406d-932d-a8e9b9e336a5: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 60_c05803ca-bd0f-414e-bdf1-4af0fe490f00: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 61_ae997d0c-bffc-4840-8fbe-e84222dab179: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 62_c46246ec-218d-4463-96a6-615239a9a742: EngineCore encountered an issue. See stack trace (above) for the root cause.
ERROR 03-03 09:00:37 [omni.py:1027] [Orchestrator] Stage 2 error on request 63_4fb7b0a0-e374-489f-8d30-705f94fa4c93: EngineCore encountered an issue. See stack trace (above) for the root cause.
(EngineCore_DP0 pid=24688) Process EngineCore_DP0:
(EngineCore_DP0 pid=24688) Traceback (most recent call last):
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=24688)     self.run()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=24688)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1010, in run_engine_core
(EngineCore_DP0 pid=24688)     raise e
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 999, in run_engine_core
(EngineCore_DP0 pid=24688)     engine_core.run_busy_loop()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1026, in run_busy_loop
(EngineCore_DP0 pid=24688)     self._process_engine_step()
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1060, in _process_engine_step
(EngineCore_DP0 pid=24688)     outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=24688)                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 411, in step
(EngineCore_DP0 pid=24688)     model_output = future.result()
(EngineCore_DP0 pid=24688)                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 81, in result
(EngineCore_DP0 pid=24688)     return super().result()
(EngineCore_DP0 pid=24688)            ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=24688)     return self.__get_result()
(EngineCore_DP0 pid=24688)            ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=24688)     raise self._exception
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 85, in wait_for_response
(EngineCore_DP0 pid=24688)     response = self.aggregate(get_response())
(EngineCore_DP0 pid=24688)                               ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=24688)   File "/home/gcpuser/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 358, in get_response
(EngineCore_DP0 pid=24688)     raise RuntimeError(
(EngineCore_DP0 pid=24688) RuntimeError: Worker failed with error 'CUDA out of memory. Tried to allocate 3.35 GiB. GPU 0 has a total capacity of 79.11 GiB of which 1.72 GiB is free. Including non-PyTorch memory, this process has 25.73 GiB memory in use. Process 25180 has 51.63 GiB memory in use. Of the allocated memory 16.69 GiB is allocated by PyTorch, and 8.31 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)', please check the stack trace above for the root cause
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:41 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!
(Worker pid=25180) [Stage-1] INFO 03-03 09:00:42 [mrope.py:345] Multimodal token idx changed!

Hi, I tested again and did not hit the OOM problem on the latest main with 2*H100. Could you please check whether any residual processes are still running in your environment?
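
For reference, a minimal sketch of how one could list compute processes that are still holding GPU memory before re-running the benchmark (assuming the `nvidia-smi` CLI is available on the host; the helper name is only illustrative):

```python
# Minimal sketch: list compute processes currently holding GPU memory.
# Assumes the `nvidia-smi` CLI is available; the helper name is illustrative only.
import subprocess


def list_gpu_processes() -> None:
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-compute-apps=pid,process_name,used_memory",
            "--format=csv,noheader",
        ],
        text=True,
    )
    lines = out.strip().splitlines()
    if not lines:
        print("No compute processes found on the visible GPUs.")
        return
    for line in lines:
        # Each line looks like: "<pid>, <process_name>, <used_memory> MiB"
        print(line)


if __name__ == "__main__":
    list_gpu_processes()
```

Anything reported here that does not belong to the current run can be terminated before retrying, which should free the memory that the OOM report attributes to other processes.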

hsliuustc0106 added a commit to hsliuustc0106/vllm-omni-skills that referenced this pull request Mar 4, 2026
### vllm-omni-perf
- Source: [PR #1619](vllm-project/vllm-omni#1619) - [Bugfix] Fix Qwen3-TTS code predictor crash due to missing vLLM config context
- Changes:
  - Bug fix: [Bugfix] Fix Qwen3-TTS code predictor crash due to missing vLLM config context

### vllm-omni-contrib
- Source: [PR #1615](vllm-project/vllm-omni#1615) - [Doc] Fix links in the configuration doc
- Changes:
  - Bug fix: [Doc] Fix links in the configuration doc

### vllm-omni-perf
- Source: [PR #1609](vllm-project/vllm-omni#1609) - [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation
- Changes:
  - Bug fix: [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation

### vllm-omni-image-gen
- Source: [PR #1609](vllm-project/vllm-omni#1609) - [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation
- Changes:
  - Bug fix: [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation
- Additions:
  - GLM-Image

### vllm-omni-api
- Source: [PR #1609](vllm-project/vllm-omni#1609) - [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation
- Changes:
  - Bug fix: [Bugfix] Fix filepath resolution for model with subdir and GLM-Image generation

### vllm-omni-serving
- Source: [PR #1602](vllm-project/vllm-omni#1602) - [Bugfix] fix kernel error for qwen3-omni
- Changes:
  - Bug fix: [Bugfix] fix kernel error for qwen3-omni

### vllm-omni-perf
- Source: [PR #1598](vllm-project/vllm-omni#1598) - [BugFix] Fix load_weights error when loading HunyuanImage3.0
- Changes:
  - Bug fix: [BugFix] Fix load_weights error when loading HunyuanImage3.0

### vllm-omni-image-gen
- Source: [PR #1598](vllm-project/vllm-omni#1598) - [BugFix] Fix load_weights error when loading HunyuanImage3.0
- Changes:
  - Bug fix: [BugFix] Fix load_weights error when loading HunyuanImage3.0
- Additions:
  - HunyuanImage3
  - HunyuanImage3Pipeline
  - HunyuanImage-3

### vllm-omni-quantization
- Source: [PR #1598](vllm-project/vllm-omni#1598) - [BugFix] Fix load_weights error when loading HunyuanImage3.0
- Changes:
  - Bug fix: [BugFix] Fix load_weights error when loading HunyuanImage3.0

### vllm-omni-distributed
- Source: [PR #1598](vllm-project/vllm-omni#1598) - [BugFix] Fix load_weights error when loading HunyuanImage3.0
- Changes:
  - Bug fix: [BugFix] Fix load_weights error when loading HunyuanImage3.0

### vllm-omni-contrib
- Source: [PR #1576](vllm-project/vllm-omni#1576) - 0.16.0 release

### vllm-omni-audio-tts
- Source: [PR #1570](vllm-project/vllm-omni#1570) - [bugfix] Fix unexpected argument 'is_finished' in function llm2code2wav_async_chunk of mimo-audio
- Changes:
  - Bug fix: [bugfix] Fix unexpected argument 'is_finished' in function llm2code2wav_async_chunk of mimo-audio

### vllm-omni-api
- Source: [PR #1566](vllm-project/vllm-omni#1566) - [Bugfix] Import InputPreprocessor into Renderer
- Changes:
  - Bug fix: [Bugfix] Import InputPreprocessor into Renderer

### vllm-omni-perf
- Source: [PR #1565](vllm-project/vllm-omni#1565) - [BugFix]: fix a lot of bug
- Changes:
  - Bug fix: [BugFix]: fix a lot of bug

### vllm-omni-contrib
- Source: [PR #1564](vllm-project/vllm-omni#1564) - [NPU][Bugfix] Align GPU side and recover qwen3-tts
- Changes:
  - Bug fix: [NPU][Bugfix] Align GPU side and recover qwen3-tts

### vllm-omni-audio-tts
- Source: [PR #1564](vllm-project/vllm-omni#1564) - [NPU][Bugfix] Align GPU side and recover qwen3-tts
- Changes:
  - Bug fix: [NPU][Bugfix] Align GPU side and recover qwen3-tts

### vllm-omni-perf
- Source: [PR #1562](vllm-project/vllm-omni#1562) - [BugFix] Fix unexpected crash when init OmniDiffusion
- Changes:
  - Bug fix: [BugFix] Fix unexpected crash when init OmniDiffusion

### vllm-omni-api
- Source: [PR #1562](vllm-project/vllm-omni#1562) - [BugFix] Fix unexpected crash when init OmniDiffusion
- Changes:
  - Bug fix: [BugFix] Fix unexpected crash when init OmniDiffusion

### vllm-omni-quantization
- Source: [PR #1562](vllm-project/vllm-omni#1562) - [BugFix] Fix unexpected crash when init OmniDiffusion
- Changes:
  - Bug fix: [BugFix] Fix unexpected crash when init OmniDiffusion

### vllm-omni-distributed
- Source: [PR #1562](vllm-project/vllm-omni#1562) - [BugFix] Fix unexpected crash when init OmniDiffusion
- Changes:
  - Bug fix: [BugFix] Fix unexpected crash when init OmniDiffusion

### vllm-omni-api
- Source: [PR #1554](vllm-project/vllm-omni#1554) - fix(qwen3-tts): fix Base ICL voice clone producing corrupted audio
- Changes:
  - Bug fix: fix(qwen3-tts): fix Base ICL voice clone producing corrupted audio

### vllm-omni-cicd
- Source: [PR #1543](vllm-project/vllm-omni#1543) - [CI] Modify some CI test cases to run on L4 environment to reduce H100 resource usage.

### vllm-omni-perf
- Source: [PR #1540](vllm-project/vllm-omni#1540) - Fix no embed text spk tokens
- Changes:
  - Bug fix: Fix no embed text spk tokens

### vllm-omni-distributed
- Source: [PR #1540](vllm-project/vllm-omni#1540) - Fix no embed text spk tokens
- Changes:
  - Bug fix: Fix no embed text spk tokens

### vllm-omni-perf
- Source: [PR #1539](vllm-project/vllm-omni#1539) - [Debug] Enable curl retry aligned with openai

### vllm-omni-quantization
- Source: [PR #1539](vllm-project/vllm-omni#1539) - [Debug] Enable curl retry aligned with openai

### vllm-omni-distributed
- Source: [PR #1539](vllm-project/vllm-omni#1539) - [Debug] Enable curl retry aligned with openai

### vllm-omni-image-gen
- Source: [PR #1538](vllm-project/vllm-omni#1538) - [CI][skip ci]Update H100 image link based on #1518

### vllm-omni-perf
- Source: [PR #1536](vllm-project/vllm-omni#1536) - [Bugfix] Fix transformers 5.x compat issues in online TTS serving
- Changes:
  - Bug fix: [Bugfix] Fix transformers 5.x compat issues in online TTS serving

### vllm-omni-serving
- Source: [PR #1536](vllm-project/vllm-omni#1536) - [Bugfix] Fix transformers 5.x compat issues in online TTS serving
- Changes:
  - Bug fix: [Bugfix] Fix transformers 5.x compat issues in online TTS serving

### vllm-omni-cicd
- Source: [PR #1534](vllm-project/vllm-omni#1534) - [Debug] Merge vllm pull 35368

### vllm-omni-contrib
- Source: [PR #1530](vllm-project/vllm-omni#1530) - [Docs] update async chunk docs diagram [skip ci]

### vllm-omni-distributed
- Source: [PR #1524](vllm-project/vllm-omni#1524) - [BugFix] Restore talker's config
- Changes:
  - Bug fix: [BugFix] Restore talker's config

### vllm-omni-api
- Source: [PR #1522](vllm-project/vllm-omni#1522) - [Bugfix] Use uds for zmq address if not set --stage-id
- Changes:
  - New feature: [Bugfix] Use uds for zmq address if not set --stage-id

### vllm-omni-perf
- Source: [PR #1521](vllm-project/vllm-omni#1521) - Revert gpu_1 job to use regular image

### vllm-omni-perf
- Source: [PR #1518](vllm-project/vllm-omni#1518) - Use pull through cache image for H100 pool

### vllm-omni-perf
- Source: [PR #1515](vllm-project/vllm-omni#1515) - [Bugfix] fix offline text_to_image error from #1009
- Changes:
  - Bug fix: [Bugfix] fix offline text_to_image error from #1009

### vllm-omni-image-gen
- Source: [PR #1515](vllm-project/vllm-omni#1515) - [Bugfix] fix offline text_to_image error from #1009
- Changes:
  - Bug fix: [Bugfix] fix offline text_to_image error from #1009
- Additions:
  - num-images-per-prompt

### vllm-omni-quantization
- Source: [PR #1515](vllm-project/vllm-omni#1515) - [Bugfix] fix offline text_to_image error from #1009
- Changes:
  - Bug fix: [Bugfix] fix offline text_to_image error from #1009

### vllm-omni-distributed
- Source: [PR #1515](vllm-project/vllm-omni#1515) - [Bugfix] fix offline text_to_image error from #1009
- Changes:
  - Bug fix: [Bugfix] fix offline text_to_image error from #1009

### vllm-omni-api
- Source: [PR #1509](vllm-project/vllm-omni#1509) - [Chore] remove unused logger in omni_diffusion (#531)

### vllm-omni-perf
- Source: [PR #1505](vllm-project/vllm-omni#1505) - [Doc] Update installation instructions for vllm 0.16.0

### vllm-omni-quantization
- Source: [PR #1505](vllm-project/vllm-omni#1505) - [Doc] Update installation instructions for vllm 0.16.0

### vllm-omni-distributed
- Source: [PR #1505](vllm-project/vllm-omni#1505) - [Doc] Update installation instructions for vllm 0.16.0

### vllm-omni-contrib
- Source: [PR #1505](vllm-project/vllm-omni#1505) - [Doc] Update installation instructions for vllm 0.16.0

### vllm-omni-video-gen
- Source: [PR #1504](vllm-project/vllm-omni#1504) - [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading
- Changes:
  - New feature: [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading

### vllm-omni-perf
- Source: [PR #1504](vllm-project/vllm-omni#1504) - [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading
- Changes:
  - New feature: [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading

### vllm-omni-api
- Source: [PR #1504](vllm-project/vllm-omni#1504) - [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading
- Changes:
  - New feature: [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading

### vllm-omni-cicd
- Source: [PR #1504](vllm-project/vllm-omni#1504) - [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading
- Changes:
  - New feature: [Feature][Wan2.2] Speed up diffusion model startup by multi-thread weight loading

### vllm-omni-contrib
- Source: [PR #1500](vllm-project/vllm-omni#1500) - [ROCm] [CI] [Docker] Point to use the latest vLLM v0.16.0 stable version

### vllm-omni-cicd
- Source: [PR #1492](vllm-project/vllm-omni#1492) - [Platform] Enable layerwise offload on all hardware

### vllm-omni-image-gen
- Source: [PR #1491](vllm-project/vllm-omni#1491) - [CI] Update Dockerfile for vllm-omni CI image and remove obsolete dep…

### vllm-omni-cicd
- Source: [PR #1488](vllm-project/vllm-omni#1488) - [XPU][NPU][ROCM] enable cpu_offloading flag for non_cuda

### vllm-omni-audio-tts
- Source: [PR #1482](vllm-project/vllm-omni#1482) - [Fix][Chore] Qwen3-TTS Modeling Minor Code Sanity Improvements
- Changes:
  - Bug fix: [Fix][Chore] Qwen3-TTS Modeling Minor Code Sanity Improvements

### vllm-omni-perf
- Source: [PR #1468](vllm-project/vllm-omni#1468) - [BugFix] process request.num_cached_tokens if it equals to the initial value
- Changes:
  - Bug fix: [BugFix] process request.num_cached_tokens if it equals to the initial value

### vllm-omni-audio-tts
- Source: [PR #1455](vllm-project/vllm-omni#1455) - [Bugfix] Fix case-sensitive task_type matching in Qwen3TTSModelForGeneration
- Changes:
  - Bug fix: [Bugfix] Fix case-sensitive task_type matching in Qwen3TTSModelForGeneration

### vllm-omni-cicd
- Source: [PR #1449](vllm-project/vllm-omni#1449) - [Test] Reduce Perf test case and fix modify stage config
- Changes:
  - Bug fix: [Test] Reduce Perf test case and fix modify stage config

### vllm-omni-cicd
- Source: [PR #1448](vllm-project/vllm-omni#1448) - [Bugfix] Race condition in MultiprocExecutor when concurent access to Scheduler
- Changes:
  - Bug fix: [Bugfix] Race condition in MultiprocExecutor when concurent access to Scheduler

### vllm-omni-cicd
- Source: [PR #1438](vllm-project/vllm-omni#1438) - [Qwen3TTS][Feat] Streaming output
- Changes:
  - New feature: [Qwen3TTS][Feat] Streaming output

### vllm-omni-api
- Source: [PR #1438](vllm-project/vllm-omni#1438) - [Qwen3TTS][Feat] Streaming output
- Changes:
  - New feature: [Qwen3TTS][Feat] Streaming output

### vllm-omni-contrib
- Source: [PR #1438](vllm-project/vllm-omni#1438) - [Qwen3TTS][Feat] Streaming output
- Changes:
  - New feature: [Qwen3TTS][Feat] Streaming output

### vllm-omni-audio-tts
- Source: [PR #1438](vllm-project/vllm-omni#1438) - [Qwen3TTS][Feat] Streaming output
- Changes:
  - New feature: [Qwen3TTS][Feat] Streaming output

### vllm-omni-cicd
- Source: [PR #1435](vllm-project/vllm-omni#1435) - [Doc][Test][Misc] ComfyUI test, more screenshot, and code cleaning

### vllm-omni-video-gen
- Source: [PR #1433](vllm-project/vllm-omni#1433) - [Debug] Multi-Request for Qwen 3 Omni use_audio_in_video

### vllm-omni-audio-tts
- Source: [PR #1433](vllm-project/vllm-omni#1433) - [Debug] Multi-Request for Qwen 3 Omni use_audio_in_video
