Cannot access reranking model using vLLM #7566
Closed
djayvp started this conversation in Models + Providers
Replies: 0 comments
Hello,
I am using the Continue.dev extension in VS Code along with local models. I am running a reranking model with vLLM, since Ollama has no reranking support. According to the documentation, vLLM exposes an OpenAI-compatible API endpoint, so I have used openai as the provider. The configuration for reranking is as follows.
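(The model name, port, and apiBase shown here are representative placeholders, not necessarily my exact values.)

```yaml
models:
  - name: local-reranker
    provider: openai
    model: BAAI/bge-reranker-v2-m3     # placeholder reranking model served by vLLM
    apiBase: http://localhost:8000/v1  # placeholder address of the vLLM OpenAI-compatible server
    roles:
      - rerank
```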
However, when I check the logs from the vLLM Docker container, they show no requests being made with the above configuration. If I instead use huggingface-tei as the provider, it gives the following error:
"GET /info HTTP/1.1" 404 Not Found
and with ollama as the provider the error in the Docker logs is "POST /api/show HTTP/1.1" 404 Not Found. Is it not possible to set this up with any OpenAI-API-compatible provider for any given role?
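To rule out the server side, the rerank route on the vLLM container can be hit directly. A minimal sketch, assuming the container listens on localhost:8000, serves the placeholder model above, and is a recent enough vLLM build to expose the Jina-style /v1/rerank route:

```python
# Minimal probe of the vLLM rerank endpoint; host, port, and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/rerank",
    json={
        "model": "BAAI/bge-reranker-v2-m3",  # placeholder; must match the model vLLM is serving
        "query": "What is the capital of France?",
        "documents": [
            "Paris is the capital of France.",
            "Berlin is the capital of Germany.",
        ],
    },
    timeout=30,
)
print(resp.status_code)
print(resp.json())
```

If a request like this shows up in the container logs but nothing appears when Continue triggers reranking, the problem is presumably on the extension/provider side rather than in vLLM itself.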
Let me know if you need any additional information. I am using Continue 1.2.1 with VSCode 1.103.2 on Fedora 42.