Here's a workaround: use the OpenAI-compatible client instead of VllmServer.

$ pip install llama-index-llms-openai-like

from llama_index.llms.openai_like import OpenAILike

# Point OpenAILike at vLLM's OpenAI-compatible endpoint.
llm = OpenAILike(
    api_base="http://192.168.1.1:8000/v1/",
    model="mistral",
    api_key="bogus",  # vLLM doesn't check the key, but the client requires one
    max_new_tokens=100,
    temperature=0,
)
print(llm.complete("Paul Graham is?"))
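For context, all the client above does is speak the OpenAI completions protocol to the vLLM server. A minimal standard-library sketch of the request it ends up sending (the host, port, and model name mirror the snippet above; the helper name is hypothetical):

```python
import json
from urllib import request


def build_completion_request(api_base, model, prompt, max_tokens=100, temperature=0):
    """Build the HTTP request for an OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return request.Request(
        url=api_base.rstrip("/") + "/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # vLLM ignores the bearer token by default; any placeholder works
            "Authorization": "Bearer bogus",
        },
        method="POST",
    )


req = build_completion_request("http://192.168.1.1:8000/v1/", "mistral", "Paul Graham is?")
```

Sending `req` with `urllib.request.urlopen` (once the server is up) returns a JSON body whose `choices[0].text` holds the completion, which is what `llm.complete(...)` surfaces.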

I'll try to come up with a fix for VllmServer in a few days.

Answer selected by mkhludnev