
Qwen3-TTS GPU acceleration error #980

@despairTK

Description


Following the documentation:
The default configuration is CPU mode for compatibility with all machines. If you have an NVIDIA GPU and CUDA installed, you can enable acceleration (roughly 10× faster inference) with the following steps:

Right-click the corresponding .bat file and choose "Edit".
Delete the --device cpu --dtype float32 arguments at the end of the file.
Save and re-run.
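For context, the edit described above amounts to removing those two flags from the launch line in the .bat. A hypothetical sketch (the actual file contents and module path may differ; the module name is inferred from the traceback below):

```bat
rem Before (forces CPU mode):
python -m qwen_tts.cli.demo --device cpu --dtype float32

rem After (lets the model load on the GPU):
python -m qwen_tts.cli.demo
```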

However, the following error occurs:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\qwen_tts\cli\demo.py", line 634, in <module>
    raise SystemExit(main())
                     ^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\qwen_tts\cli\demo.py", line 608, in main
    tts = Qwen3TTSModel.from_pretrained(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\qwen_tts\inference\qwen3_tts_model.py", line 112, in from_pretrained
    model = AutoModel.from_pretrained(pretrained_model_name_or_path, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\transformers\models\auto\auto_factory.py", line 604, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\qwen_tts\core\models\modeling_qwen3_tts.py", line 1872, in from_pretrained
    model = super().from_pretrained(
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\transformers\modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\transformers\modeling_utils.py", line 5048, in from_pretrained
    ) = cls._load_pretrained_model(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\transformers\modeling_utils.py", line 5432, in _load_pretrained_model
    caching_allocator_warmup(model, expanded_device_map, hf_quantizer)
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\transformers\modeling_utils.py", line 6090, in caching_allocator_warmup
    device_memory = torch_accelerator_module.mem_get_info(index)[0]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\torch\cuda\memory.py", line 897, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
           ^^^^^^^^^^^^^^^^^^^
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\torch\cuda\__init__.py", line 501, in cudart
    _lazy_init()
  File "F:\index\qwen3tts-win-0124\runtime\Lib\site-packages\torch\cuda\__init__.py", line 417, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

This suggests the qwen3tts-win-0124 runtime was packaged with a CPU-only build of PyTorch, i.e. no CUDA-enabled PyTorch was installed.
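A quick way to confirm this diagnosis is to ask the bundled interpreter whether its PyTorch build has CUDA support. A minimal check, assuming you run it with the runtime's own Python (the exact python.exe location inside the runtime folder is an assumption):

```python
# Diagnostic sketch: check whether the installed PyTorch build supports CUDA.
# Run with the bundled interpreter, e.g. something like
# F:\index\qwen3tts-win-0124\runtime\python.exe (path is an assumption).
try:
    import torch

    print(torch.__version__)          # a "+cpu" suffix indicates a CPU-only build
    print(torch.cuda.is_available())  # False matches "Torch not compiled with CUDA enabled"
except ImportError:
    print("torch is not installed in this interpreter")
```

If `torch.cuda.is_available()` prints False, installing a CUDA-enabled wheel into that runtime should resolve the assertion; for example via `pip install torch --index-url https://download.pytorch.org/whl/cu121`, picking the CUDA tag that matches your installed driver.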
