
Deploying Qianfan-VL with the official Docker image fails on startup #13

@liubaoshan

Description


Docker startup command:
docker run -d --name qianfan-vl --gpus all -v /path/to/Qianfan-VL-8B:/model -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model /model --served-model-name qianfan-vl --trust-remote-code --hf-overrides '{"architectures":["InternVLChatModel"],"model_type":"internvl_chat"}'

Error output on container startup:
(APIServer pid=1) ^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1) return self._loop.run_until_complete(task)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=1) return await main
(APIServer pid=1) ^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1884, in run_server
(APIServer pid=1) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1902, in run_server_worker
(APIServer pid=1) async with build_async_engine_client(
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1) return await anext(self.gen)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client
(APIServer pid=1) async with build_async_engine_client_from_engine_args(
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1) return await anext(self.gen)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 206, in build_async_engine_client_from_engine_args
(APIServer pid=1) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1142, in create_engine_config
(APIServer pid=1) model_config = self.create_model_config()
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 994, in create_model_config
(APIServer pid=1) return ModelConfig(
(APIServer pid=1) ^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
(APIServer pid=1) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=1) pydantic_core._pydantic_core.ValidationError: 2 validation errors for ModelConfig
(APIServer pid=1) hf_overrides.dict[str,any]
(APIServer pid=1) Input should be a valid dictionary [type=dict_type, input_value="'{architectures:[InternV...el_type:internvl_chat}'", input_type=str]
(APIServer pid=1) For further information visit https://errors.pydantic.dev/2.11/v/dict_type
(APIServer pid=1) hf_overrides.callable
(APIServer pid=1) Input should be callable [type=callable_type, input_value="'{architectures:[InternV...el_type:internvl_chat}'", input_type=str]
(APIServer pid=1) For further information visit https://errors.pydantic.dev/2.11/v/callable_type
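The two validation errors both show `input_value=...` as a *string* that still contains the surrounding single quotes, i.e. the `--hf-overrides` value reached vLLM as literal text rather than as a parsed JSON object. That usually happens when the command passes through an extra layer of shell or wrapper quoting. A minimal sketch of the failure mode, assuming (as the error suggests) that the flag must carry valid JSON; the variable names here are illustrative, not vLLM internals:

```python
import json

# The JSON the flag is meant to carry, as in the docker run command above.
intended = '{"architectures": ["InternVLChatModel"], "model_type": "internvl_chat"}'
overrides = json.loads(intended)
assert isinstance(overrides, dict)          # parses to a dict, as ModelConfig expects
assert overrides["architectures"] == ["InternVLChatModel"]

# What the traceback shows vLLM actually received: the same text wrapped in an
# extra pair of single quotes, which is no longer valid JSON, so it survives
# only as a plain string -- hence pydantic's dict_type / callable_type errors.
received = "'" + intended + "'"
try:
    json.loads(received)
    print("unexpectedly parsed")
except json.JSONDecodeError:
    print("extra quote layer leaves an unparseable string")
```

If this is the cause, running the `docker run` command directly in a POSIX shell (so the single quotes are consumed by the shell and vLLM sees bare JSON), rather than through a script or tool that re-quotes the argument, should resolve the error.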
