Description
Hi, I modified the `models.py` config file, following the format of your Qwen model entries, and added a new `qwen32b` configuration. The model is reachable locally. I found that the backend does not raise an error when connecting to qwen32b, but the frontend raises an error when parsing the JSON response. Is my format wrong, or is this adapter path simply not implemented on your side? If it is not, can I modify the JSON parsing directly to connect my local LLM?
New configuration: Qwen3-32b-MAX local API model config

```python
# In models.py (relies on the module's existing imports, e.g. `import os`
# and the GPTAPI class):
qwen32b = dict(
    type=GPTAPI,
    model_type=os.getenv("LLM_MODEL", "Qwen3-32b-MAX"),
    key=os.getenv("LLM_BINDING_API_KEY", "none"),
    api_base=os.getenv("LLM_BINDING_HOST", "http://localhost:8011/v1"),
    meta_template=[
        dict(role="system", api_role="system"),
        dict(role="user", api_role="user"),
        dict(role="assistant", api_role="assistant"),
        dict(role="environment", api_role="system"),
    ],
    top_p=0.8,
    top_k=1,
    temperature=float(os.getenv("TEMPERATURE", 0.1)),
    max_new_tokens=int(os.getenv("MAX_TOKENS", 131072)),
    repetition_penalty=1.02,
    stop_words=["<|im_end|>"],
    timeout=int(os.getenv("TIMEOUT", 150)),
)
```
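As a quick sanity check on the `meta_template` above (a minimal sketch, independent of `GPTAPI`): it maps internal roles to API roles, and note that both `system` and `environment` turns are sent to the API as `system`:

```python
# Rebuild the role mapping that the meta_template above describes.
meta_template = [
    dict(role="system", api_role="system"),
    dict(role="user", api_role="user"),
    dict(role="assistant", api_role="assistant"),
    dict(role="environment", api_role="system"),
]
role_map = {m["role"]: m["api_role"] for m in meta_template}
# "environment" messages are forwarded with api_role "system".
```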
The error output is as follows:
```
* Running on local URL: http://127.0.0.1:7882
To create a public link, set share=True in launch().
F:\ANACONDA\envs\mindsearch\Lib\site-packages\schemdraw\backends\mpl.py:79: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown
  self.fig.show()
F:\ANACONDA\envs\mindsearch\Lib\site-packages\schemdraw\backends\mpl.py:80: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown
  plt.show()  # To start the MPL event loop
Traceback (most recent call last):
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\blocks.py", line 2019, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\blocks.py", line 1578, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\utils.py", line 710, in async_iteration
    return await anext(iterator)
           ^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\utils.py", line 704, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\utils.py", line 687, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\site-packages\gradio\utils.py", line 848, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "F:\down_project\MindSearch-main\frontend\mindsearch_gradio.py", line 167, in predict
    for resp in streaming(raw_response):
  File "F:\down_project\MindSearch-main\frontend\mindsearch_gradio.py", line 147, in streaming
    response = json.loads(decoded)
               ^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ANACONDA\envs\mindsearch\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
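The failure is in `streaming()` at `mindsearch_gradio.py:147`, where `json.loads(decoded)` is called on each raw chunk. `Expecting value: line 1 column 1 (char 0)` usually means the chunk is not bare JSON: many OpenAI-compatible local servers stream Server-Sent Events, i.e. lines prefixed with `data: `, blank keep-alive lines, and a final `data: [DONE]` sentinel. Below is a minimal sketch of a more tolerant parse step; `parse_stream_chunk` is a hypothetical helper name, and whether SSE framing is actually what your server emits is an assumption worth verifying by logging `decoded`:

```python
import json


def parse_stream_chunk(decoded: str):
    """Parse one decoded chunk from the stream, tolerating SSE framing.

    Returns the parsed JSON object, or None for chunks that carry no
    payload (blank keep-alives, the [DONE] sentinel, non-JSON noise).
    """
    line = decoded.strip()
    if not line:
        return None  # blank keep-alive line
    if line.startswith("data:"):
        line = line[len("data:"):].strip()  # strip the SSE prefix
    if line == "[DONE]":
        return None  # end-of-stream sentinel
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None  # skip anything that still is not JSON
```

In `streaming()` this would replace the bare `json.loads(decoded)` with a call to the helper, skipping chunks that return `None` instead of crashing the frontend.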