-
1. I have two vLLM services in different containers, both using a local model located at /root/data/Qwen/Qwen2-0.5B-Instruct.
2. `vllm serve /root/data/Qwen/Qwen2-0.5B-Instruct` runs successfully, and I can get the vLLM model list.
3. I set up the LiteLLM config.yaml with one vLLM service under `model_list:` (see the config sketch below this list).
4. Running `litellm --config config.yaml --detailed_debug` prints the following debug output:

    root@ins-98fj6-656c84fb97-n9qms:~/data# litellm --config litellm_config.yaml --detailed_debug
    LiteLLM: Proxy initialized with Config, Set models:
    Routing enable_pre_call_checks: False
    Routing fallbacks: None
    Routing content fallbacks: None
    Routing context window fallbacks: None
    Router Redis Caching=None
    21:30:20 - LiteLLM Proxy:DEBUG: proxy_server.py:644 - prisma_client: None
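For reference only (the original config.yaml contents were not included in the post), here is a minimal sketch of what such a config could look like, assuming the vLLM container exposes its OpenAI-compatible API on port 8000. The hostname, port, model_name alias, and dummy api_key are placeholders, not values from the original setup:

```yaml
# Sketch only: hostname, port, alias, and api_key are placeholders.
# vLLM serves an OpenAI-compatible API, so the backend is registered with the
# openai/ provider prefix followed by the model name the vLLM server reports
# (by default, the path it was launched with; vllm serve --served-model-name
# would give a shorter name).
model_list:
  - model_name: qwen2-0.5b-instruct            # alias that clients will request
    litellm_params:
      model: openai//root/data/Qwen/Qwen2-0.5B-Instruct
      api_base: http://vllm-container-1:8000/v1
      api_key: "dummy"                         # vLLM ignores it unless --api-key is set
# A second entry with the same model_name but the other container's api_base
# would make LiteLLM load-balance across both vLLM services.
```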
-
Running `litellm-proxy models list` from the command line gives no response.
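One way to narrow this down is to query the proxy's OpenAI-compatible endpoint directly instead of going through the CLI. The host, port, and key below are assumptions (4000 is the proxy's default port; a Bearer token is only needed if a master key is configured):

```bash
# Query the proxy's OpenAI-compatible /v1/models route directly.
# Assumes the proxy listens on its default port 4000; send the master key as
# a Bearer token only if one is configured (sk-1234 is a placeholder).
curl -s http://localhost:4000/v1/models \
  -H "Authorization: Bearer sk-1234"

# Check the upstream vLLM container the same way (default port 8000) to
# confirm the backend itself is reachable from the proxy host.
curl -s http://localhost:8000/v1/models
```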
-
Is there anything else I could try?
-
It is resolved. Thanks.