Update:

  • The LiteLLM OpenAI proxy now supports Azure OpenAI endpoints properly in all scenarios.
  • This includes streaming + function calling.

The fix is tracked here and is fully resolved in their main branch:
BerriAI/litellm#2138

To use the proxy to load balance Azure OpenAI endpoints, the process is:

  • Define your Azure OpenAI endpoints in the LiteLLM config.yaml file (a sketch follows this list)
  • Deploy your proxy
  • Then in Flowise, use the standard OpenAI nodes (not the Azure OpenAI nodes) -- this applies to chat models, embeddings, LLMs -- everything
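
Here is a minimal config.yaml sketch for the load-balanced setup described above. The resource names, deployment names, API keys, and API version are placeholders, and the exact keys can vary by LiteLLM version, so treat this as an illustration rather than a drop-in config:

```yaml
model_list:
  # Two Azure OpenAI deployments published under the same model_name ("gpt-4"),
  # so the proxy load balances requests across them.
  - model_name: gpt-4
    litellm_params:
      model: azure/my-gpt4-deployment             # placeholder deployment name
      api_base: https://my-resource-eastus.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY_EASTUS    # read from an environment variable
      api_version: "2023-07-01-preview"
  - model_name: gpt-4
    litellm_params:
      model: azure/my-gpt4-deployment
      api_base: https://my-resource-westus.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY_WESTUS
      api_version: "2023-07-01-preview"

router_settings:
  routing_strategy: simple-shuffle                # spread traffic across both endpoints
```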

So essentially, the high-availability / load-balanced configuration looks like this:

Flowise[1..n] -- (using standard OpenAI nodes) --> LiteLLM OpenAI Proxy[…
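
Because the proxy speaks the standard OpenAI API, anything that uses a stock OpenAI client (as the standard Flowise OpenAI nodes effectively do) only needs its base URL pointed at the proxy. A minimal TypeScript sketch with the official openai package, assuming the proxy is reachable at http://localhost:4000 and exposes the model alias gpt-4 from the config sketch above (both are assumptions, not values from this thread):

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at the LiteLLM proxy instead of api.openai.com.
const client = new OpenAI({
  baseURL: "http://localhost:4000",                         // assumed proxy address
  apiKey: process.env.LITELLM_PROXY_KEY ?? "sk-anything",   // proxy key, if you configured one
});

async function main() {
  // The proxy routes this request to one of the Azure OpenAI deployments behind "gpt-4".
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello from Flowise via LiteLLM" }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```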
