
[NPU][Bugfix] Align GPU side and recover qwen3-tts#1564

Merged
hsliuustc0106 merged 2 commits into vllm-project:main from gcanlin:qwen3-tts-fix
Feb 28, 2026

Conversation

@gcanlin (Contributor) commented Feb 28, 2026


Purpose

FIX #1508.

As a workaround, we have to add hardware-specific code in the modeling file. I will remove it in the next release.

This PR also updates the docs to prepare for the upcoming v0.16.0 release.

Test Plan

```shell
vllm serve Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice --omni --port 8091 --stage-configs-path /root/vllm-workspace/vllm-omni/vllm_omni/platforms/npu/stage_configs/qwen3_tts.yaml

curl -X POST http://localhost:8091/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d '{
        "input": "Hello, how are you?",
        "voice": "vivian",
        "language": "English"
    }' --output output.wav
```

Test Result

  • Qwen3-Omni
  • Qwen3-TTS
  • Wan2.2 with hsdp

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts & test commands. Please state the reasons if your code doesn't require additional test scripts. For test file guidelines, please check the test style doc.
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


Signed-off-by: gcanlin <canlinguosdu@gmail.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8be2f6e3b7


```python
# FIXME(gcanlin): Refactor build_attn_metadata to avoid special-casing NPU backends here.
if current_omni_platform.is_npu():
    # NPU requires AscendCommonAttentionMetadata with extra attributes
    from vllm_ascend.worker.v2 import attn_utils
```


P1: Avoid shadowing `attn_utils` in `build_attn_metadata`

Importing attn_utils inside the NPU-only branch makes attn_utils a local variable for the whole function. When current_omni_platform.is_npu() is false (e.g., GPU/CPU execution), the else branch calls attn_utils.build_attn_metadata(...) before that local is assigned, which raises UnboundLocalError and breaks the Qwen3-TTS code predictor path on non-NPU backends.
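The pitfall Codex flags is generic Python scoping behavior: any assignment (including an `import`) anywhere in a function body makes the name function-local for the whole body. A minimal standalone reproduction, using stand-in names rather than the actual vllm-omni/vllm-ascend modules, could look like this:

```python
# Minimal reproduction of the shadowing pitfall. `attn_utils`, the
# functions, and the stand-in modules here are illustrative only.
import types

# Stands in for the module-level `attn_utils` import used on GPU/CPU.
attn_utils = types.SimpleNamespace(build=lambda: "generic")

def build_metadata_buggy(is_npu: bool):
    if is_npu:
        # This import rebinds `attn_utils`, making it a *local* variable
        # for the entire function body, not just this branch.
        import json as attn_utils  # stand-in for the NPU-specific import
        return "npu"
    # On non-NPU paths the local is read before it was ever assigned.
    return attn_utils.build()  # raises UnboundLocalError

def build_metadata_fixed(is_npu: bool):
    if is_npu:
        # Binding under a distinct name leaves the module-level
        # `attn_utils` untouched for the other branch.
        import json as npu_attn_utils  # stand-in for the NPU-specific import
        return "npu"
    return attn_utils.build()  # resolves to the module-level binding

try:
    build_metadata_buggy(is_npu=False)
except UnboundLocalError:
    print("buggy path: UnboundLocalError")
print(build_metadata_fixed(is_npu=False))  # -> generic
```

Aliasing the conditional import (or hoisting it to module level behind the platform check) is the usual fix; the merged commit resolves it along these lines.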


@gcanlin (Contributor, Author) replied:

Good catch. Fixed!

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
@gcanlin (Contributor, Author) commented Feb 28, 2026

cc @hsliuustc0106 @Gaohan123 PTAL, thanks!

@lishunyang12 (Contributor) left a comment


Left a couple of minor comments. The TTS/Omni unification in the model runner looks solid overall — the getattr-based dispatch and the cudagraph guard make sense.
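The "getattr-based dispatch" the reviewer refers to is a common pattern for unifying model variants in a shared runner: look up a variant-specific hook on the model and fall back to a generic path if it is absent. A hedged sketch (class and method names are hypothetical, not vllm-omni's actual API):

```python
# Illustrative sketch of getattr-based dispatch in a model runner.
# `ModelRunner`, `forward_*`, and `TtsModel` are made-up names.
class ModelRunner:
    def run(self, model, stage: str):
        # Look up a stage-specific hook if the model defines one;
        # getattr's default of None lets us fall back cleanly.
        handler = getattr(model, f"forward_{stage}", None)
        if handler is None:
            return self.forward_generic(model, stage)
        return handler()

    def forward_generic(self, model, stage):
        return f"generic:{stage}"

class TtsModel:
    def forward_tts(self):
        return "tts-specific"

runner = ModelRunner()
print(runner.run(TtsModel(), "tts"))   # -> tts-specific
print(runner.run(TtsModel(), "omni"))  # -> generic:omni
```

The appeal of this shape is that adding a new model variant only requires defining the hook on the model class; the runner itself stays untouched.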

@hsliuustc0106 hsliuustc0106 merged commit c812667 into vllm-project:main Feb 28, 2026
5 checks passed


Development

Successfully merging this pull request may close these issues.

[Bug]: Qwen3-TTS speech synthesis is very slow. Is NPU supported? Is there an NPU image?

3 participants