Why does the official Qwen2.5-Omni-7B tutorial fail with "Cannot import available module of Qwen2_5OmniModel in modelscope"? #1295
Official tutorial: https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B. I have also installed the latest modelscope release:

```
pip install modelscope --upgrade
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: modelscope in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (1.17.1)
Collecting modelscope
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1b/8b/a3a6a5b3afe89517f513be7257193ba5aed0b164c42e4453ffdd150729d6/modelscope-1.24.1-py3-none-any.whl (5.9 MB)
Requirement already satisfied: requests>=2.25 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from modelscope) (2.32.3)
Requirement already satisfied: tqdm>=4.64.0 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from modelscope) (4.66.5)
Requirement already satisfied: urllib3>=1.26 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from modelscope) (2.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from requests>=2.25->modelscope) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from requests>=2.25->modelscope) (3.8)
Requirement already satisfied: certifi>=2017.4.17 in /home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages (from requests>=2.25->modelscope) (2024.7.4)
Installing collected packages: modelscope
Attempting uninstall: modelscope
Found existing installation: modelscope 1.17.1
Uninstalling modelscope-1.17.1:
Successfully uninstalled modelscope-1.17.1
Successfully installed modelscope-1.24.1
[notice] A new release of pip is available: 24.2 -> 25.0.1
[notice] To update, run: pip install --upgrade pip
```

Then I ran the demo code from the official docs:

```python
import soundfile as sf

from modelscope import Qwen2_5OmniModel, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

# default: Load the model on the available device(s)
model = Qwen2_5OmniModel.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto")

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = Qwen2_5OmniModel.from_pretrained(
#     "Qwen/Qwen2.5-Omni-7B",
#     torch_dtype="auto",
#     device_map="auto",
#     attn_implementation="flash_attention_2",
# )

processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversation = [
    {
        "role": "system",
        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
        ],
    },
]

# set use audio in video
USE_AUDIO_IN_VIDEO = True

# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audios=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)

# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)

text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
```

But it fails with the `Cannot import available module of Qwen2_5OmniModel in modelscope` error, which is quite frustrating.
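For reference, a quick way to narrow this down is to check whether the installed transformers build actually exposes the Qwen2.5-Omni classes, since modelscope appears to re-export them from transformers and the import can only succeed if transformers ships them. This is a minimal diagnostic sketch I added, not part of the official tutorial:

```python
# Minimal diagnostic sketch (not from the official tutorial): print the installed
# versions and check whether transformers exposes the Qwen2.5-Omni classes.
import transformers
import modelscope

print("transformers:", transformers.__version__)
print("modelscope:", modelscope.__version__)

# If these print False, the released transformers wheel does not yet include the
# classes, and the modelscope import above will fail with the error in the title.
print("has Qwen2_5OmniModel:", hasattr(transformers, "Qwen2_5OmniModel"))
print("has Qwen2_5OmniProcessor:", hasattr(transformers, "Qwen2_5OmniProcessor"))
```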
Answered by yingdachen, Apr 2, 2025
See the reply in #1296: you need to install transformers from source and the latest modelscope:

```
pip install git+https://github.com/huggingface/transformers@3a1ead0aabed473eafe527915eea8c197d424356
pip install modelscope -U
```
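After reinstalling, a short sanity check like the one below (a minimal sketch, using the same class names as the tutorial snippet above) confirms the imports resolve before re-running the full demo:

```python
# Sanity check after reinstalling transformers from source and upgrading modelscope.
from modelscope import Qwen2_5OmniModel, Qwen2_5OmniProcessor  # should now import without error

print(Qwen2_5OmniModel, Qwen2_5OmniProcessor)
```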