How to Configure KernelMemoryBuilder for OpenAI-Compatible On-Premises LLM? #993
We have an on-premises LLM that is compatible with the OpenAI API specification. In Semantic Kernel, we can configure this model using AddOpenAIChatCompletion. However, I am wondering whether KernelMemoryBuilder has a similar method that would allow us to use an OpenAI-compatible on-premises LLM. Any guidance or examples would be greatly appreciated.
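For reference, the Semantic Kernel side of that setup looks roughly like the sketch below. The model name and endpoint URL are placeholders, and it assumes a Semantic Kernel version that ships the Uri-based overload of AddOpenAIChatCompletion for custom, OpenAI-compatible endpoints (marked experimental as SKEXP0010 in recent releases):

```csharp
// Sketch: Semantic Kernel chat completion pointed at an OpenAI-compatible server.
#pragma warning disable SKEXP0010 // the custom-endpoint overload is experimental
using System;
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "my-local-model",                     // placeholder model name
        endpoint: new Uri("http://localhost:8000/v1"), // placeholder on-prem URL
        apiKey: "not-needed")                          // many local servers ignore the key
    .Build();
```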
Replies: 1 comment

I found the usage method:
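In case it helps others, here is a minimal sketch of one way to do this with Kernel Memory; it is not necessarily the exact code from the reply above. It assumes a Kernel Memory version in which OpenAIConfig exposes an Endpoint property for OpenAI-compatible backends; the URL, API key, and model names are placeholders:

```csharp
// Sketch: Kernel Memory configured against an OpenAI-compatible on-prem endpoint.
using System;
using Microsoft.KernelMemory;

var config = new OpenAIConfig
{
    Endpoint = "http://localhost:8000/v1", // placeholder: base URL of the on-prem server
    APIKey = "not-needed",                 // placeholder: many local servers ignore the key
    TextModel = "my-local-model",          // placeholder: text-generation model name
    EmbeddingModel = "my-local-embeddings" // placeholder: embedding model name
};

var memory = new KernelMemoryBuilder()
    .WithOpenAI(config) // applies the config to both text generation and embeddings
    .Build<MemoryServerless>();

// Quick check: ingest a snippet, then ask a question against it.
await memory.ImportTextAsync("Kernel Memory can target OpenAI-compatible endpoints.");
var answer = await memory.AskAsync("Which endpoints can Kernel Memory target?");
Console.WriteLine(answer.Result);
```

If the Kernel Memory version in use predates the Endpoint property, the same OpenAIConfig can still be passed to WithOpenAITextGeneration and WithOpenAITextEmbeddingGeneration individually, but overriding the endpoint then depends on what that version supports.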