How do I use a local Qwen-14B model? #1210
Unanswered
BlueDarkUP asked this question in Q&A
Replies: 1 comment
-
Could some kind soul help out with this? Much appreciated.
-
I tried pointing bridge_qwen at the int4 model by setting model_id = 'Qwen-14B-Chat-Int4', roughly the one-line edit sketched below, but it errors out.
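For context, this is about all I changed (a sketch of my edit only; the surrounding code in request_llm/bridge_qwen.py is untouched and may differ between versions):

# request_llm/bridge_qwen.py -- the only line I edited (sketch, not the full file)
model_id = 'Qwen-14B-Chat-Int4'   # replaces the repo's default Qwen model id

With that change in place, the following error shows up: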
2023-10-29 18:18:18,083 - modelscope - INFO - Loading ast index from C:\Users\.cache\modelscope\ast_indexer
2023-10-29 18:18:18,170 - modelscope - INFO - Loading done! Current index file version is 1.9.4, with md5 5cb916cfe3a86d42811e30e6e5bb84fe and a total number of 945 components indexed
Traceback (most recent call last):
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\blocks.py", line 1067, in call_function
prediction = await utils.async_iteration(iterator)
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\utils.py", line 336, in async_iteration
return await iterator.__anext__()
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\utils.py", line 329, in __anext__
return await anyio.to_thread.run_sync(
File "D:\gpt_academic-master\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\gpt_academic-master\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\gpt_academic-master\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\gpt_academic-master\venv\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
return next(iterator)
File "D:\gpt_academic-master\toolbox.py", line 88, in decorated
yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
File "D:\gpt_academic-master\request_llm\bridge_all.py", line 584, in predict
yield from method(inputs, llm_kwargs, *args, **kwargs)
File "D:\gpt_academic-master\request_llm\local_llm_class.py", line 153, in predict
_llm_handle = LLMSingletonClass()
File "D:\gpt_academic-master\request_llm\local_llm_class.py", line 15, in _singleton
_instance[cls] = cls(*args, **kargs)
File "D:\gpt_academic-master\request_llm\local_llm_class.py", line 36, in init
self.start()
File "C:\Program Files\Python39\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Program Files\Python39\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python39\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "C:\Program Files\Python39\lib\multiprocessing\popen_spawn_win32.py", line 93, in init
reduction.dump(process_obj, to_child)
File "C:\Program Files\Python39\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'request_llm.bridge_qwen.GetONNXGLMHandle'>: it's not the same object as request_llm.bridge_qwen.GetONNXGLMHandle
Traceback (most recent call last):
File "", line 1, in
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
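If I understand the last message correctly, this looks like the usual Windows multiprocessing problem: with the "spawn" start method the Process object gets pickled, and pickle stores a class by its module-qualified name, so if the name GetONNXGLMHandle in request_llm.bridge_qwen no longer points at the original class (for example because a singleton decorator rebinds it), it fails with exactly this "not the same object" message. Here is a tiny standalone sketch with made-up names (not the project's code) that reproduces the same failure on Windows:

import multiprocessing as mp

def singleton(cls):
    # Rebinds the class's module-level name to a factory function.
    _instances = {}
    def get_instance(*args, **kwargs):
        if cls not in _instances:
            _instances[cls] = cls(*args, **kwargs)
        return _instances[cls]
    return get_instance

@singleton
class Handle(mp.Process):          # after decoration, 'Handle' names get_instance, not the class
    def run(self):
        print("worker running")

if __name__ == '__main__':
    h = Handle()                   # returns a Handle instance via the factory
    # On Windows, multiprocessing "spawn" pickles the process object on start().
    # pickle looks the class up as __main__.Handle, finds the factory function
    # instead of the class, and raises:
    #   PicklingError: Can't pickle <class '__main__.Handle'>:
    #   it's not the same object as __main__.Handle
    h.start()
    h.join()

I'm not sure whether that is really what bridge_qwen / local_llm_class.py is doing, so please correct me if the cause is something else.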
I just want to know how to deploy Tongyi Qianwen (Qwen) locally with this project; I've searched everywhere and found nothing.