Commit 082c4a6 (1 parent: b8fa125)

[Bugfix] Import InputPreprocessor into Renderer (vllm-project#1566)

Signed-off-by: rongfu.leng <lenronfu@gmail.com>

1 file changed: +1 −2 lines

vllm_omni/entrypoints/openai/serving_chat.py (1 addition, 2 deletions)

@@ -13,7 +13,6 @@
 from fastapi import Request
 from PIL import Image
 from pydantic import TypeAdapter
-from vllm.renderers.protocol import BaseRenderer

 from vllm_omni.entrypoints.async_omni import AsyncOmni
 from vllm_omni.entrypoints.openai.protocol.chat_completion import OmniChatCompletionResponse
@@ -67,7 +66,7 @@
 from vllm.logger import init_logger
 from vllm.outputs import RequestOutput
 from vllm.reasoning import ReasoningParser
-from vllm.renderers import merge_kwargs
+from vllm.renderers import BaseRenderer, merge_kwargs
 from vllm.renderers.inputs import TokPrompt
 from vllm.sampling_params import SamplingParams
 from vllm.tokenizers import TokenizerLike
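The diff moves the BaseRenderer import from the vllm.renderers.protocol submodule to the vllm.renderers package root, which works when the package's __init__ re-exports the symbol. Below is a minimal, self-contained sketch of that re-export pattern; the demo_renderers module names are hypothetical stand-ins, not vllm's actual code.

```python
import sys
import types

# Hypothetical stand-ins for the vllm.renderers layout: a "protocol"
# submodule defines BaseRenderer, and the package root re-exports it.
# Built in-memory here so the sketch runs without any files on disk.
protocol = types.ModuleType("demo_renderers.protocol")
exec("class BaseRenderer:\n    pass", protocol.__dict__)

renderers = types.ModuleType("demo_renderers")
renderers.protocol = protocol
# Mirrors `from .protocol import BaseRenderer` in the package __init__.
renderers.BaseRenderer = protocol.BaseRenderer

sys.modules["demo_renderers"] = renderers
sys.modules["demo_renderers.protocol"] = protocol

# Both import paths now resolve to the identical class object, so the
# shorter package-root import is safe to prefer.
from demo_renderers import BaseRenderer
from demo_renderers.protocol import BaseRenderer as ProtocolBaseRenderer
print(BaseRenderer is ProtocolBaseRenderer)  # True
```

Re-exporting at the package root keeps callers decoupled from the internal submodule layout, which is why the fix imports BaseRenderer from vllm.renderers rather than vllm.renderers.protocol.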

0 commit comments