Does semantic-kernel work with local llms? #1256
-
All the examples I have seen so far seem to focus on tying into a service such as OpenAI or Hugging Face, but what if I have a custom LLM trained locally that I want to use? Is there a way to do that?
-
Yes, you only need to implement the ITextCompletion, IChatCompletion, and IEmbeddingGeneration interfaces. You can refer to Example16_CustomLLM for more information. I am trying to use LLamaSharp to implement the completion and embedding interfaces of Semantic Kernel: https://github.com/xbotter/semantic-kernel-LLamaSharp. However, the performance and output quality of the local LLM are not yet satisfactory, and there are still many requirements that need to be adapted.
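For reference, here is a minimal sketch of what such a custom connector can look like. This is an assumption-laden illustration, not code from the repo: the interface shape (`ITextCompletion` with `CompleteAsync`/`CompleteStreamAsync` and `CompleteRequestSettings`) matches the pre-1.0 Semantic Kernel packages this thread is about and has been renamed in later releases, and `MyLocalModel` is a hypothetical stand-in for your locally trained model. Check Example16_CustomLLM in the version of the repo you are on for the exact signatures.

```csharp
// Sketch of a custom text-completion connector for a local LLM.
// Assumes the pre-1.0 Semantic Kernel API (ITextCompletion,
// CompleteRequestSettings); later releases renamed these types.
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel.AI.TextCompletion;

public sealed class MyLocalLlmCompletion : ITextCompletion
{
    // Non-streaming completion: hand the prompt to the local model.
    public async Task<string> CompleteAsync(
        string text,
        CompleteRequestSettings requestSettings,
        CancellationToken cancellationToken = default)
    {
        return await MyLocalModel.GenerateAsync(text, cancellationToken);
    }

    // Streaming completion: yield tokens as the local model produces them.
    public async IAsyncEnumerable<string> CompleteStreamAsync(
        string text,
        CompleteRequestSettings requestSettings,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        await foreach (var token in MyLocalModel.StreamAsync(text, cancellationToken))
        {
            yield return token;
        }
    }
}

// Hypothetical stand-in for the local model; in practice this would wrap
// e.g. LLamaSharp inference calls against your trained weights.
internal static class MyLocalModel
{
    public static Task<string> GenerateAsync(string prompt, CancellationToken ct) =>
        Task.FromResult($"[local completion for: {prompt}]");

    public static async IAsyncEnumerable<string> StreamAsync(
        string prompt,
        [EnumeratorCancellation] CancellationToken ct)
    {
        foreach (var chunk in new[] { "[local ", "completion]" })
        {
            await Task.Yield();
            yield return chunk;
        }
    }
}
```

Once implemented, the connector is registered on the kernel the same way Example16_CustomLLM registers its custom service, and prompts then run against the local model instead of a hosted API.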
-
This is the current CustomLLM source link:
-
Is there an updated link for this?