Local LLM and Local Embedding possible? #181
EnviralDesign asked this question in Q&A · Unanswered · 1 comment · 8 replies
Love this project and its elegant strategy of becoming a ready-to-run API endpoint for easy integration into other pipelines! That, alongside the node-based UX, makes it ideal for prototyping and possibly for deployment too.

One constraint on using Flowise at the moment is that I cannot plug in embedding models that run inference locally. The project I'm working on involves over a thousand pages of docs, sometimes more depending on the chunking strategy I'm trying, and running all of that through embedding APIs over and over is not very cost friendly for R&D, which is a big draw of this platform.

Is there any way to use a custom embedding model locally at present? And if not, what general steps would I need to take to add or contribute one?

Additionally, I'd love the ability to do the same with LLMs, but that's not as pressing for me, since cost builds up more unpredictably and quickly with calls to embedding APIs.

Thank you for making this awesome software!
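For context on the "general steps": Flowise's embedding nodes wrap LangChain JS `Embeddings` classes, so a local option mostly comes down to implementing `embedDocuments` and `embedQuery` against whatever server hosts the model. A minimal sketch, assuming a hypothetical local inference server at `http://localhost:8080/embed` that accepts `{ texts: string[] }` and returns `{ embeddings: number[][] }` (both the endpoint and the response shape are placeholders, not an existing API):

```typescript
import { Embeddings, EmbeddingsParams } from "langchain/embeddings/base";

// Hypothetical config for a self-hosted embedding server.
export interface LocalEmbeddingsParams extends EmbeddingsParams {
  endpoint?: string;
}

export class LocalEmbeddings extends Embeddings {
  endpoint: string;

  constructor(params: LocalEmbeddingsParams = {}) {
    super(params);
    this.endpoint = params.endpoint ?? "http://localhost:8080/embed";
  }

  // Called during ingestion to embed document chunks in bulk.
  async embedDocuments(texts: string[]): Promise<number[][]> {
    const res = await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ texts }),
    });
    if (!res.ok) throw new Error(`Embedding server returned ${res.status}`);
    // Assumed response shape: { embeddings: number[][] }.
    const { embeddings } = await res.json();
    return embeddings;
  }

  // Called at query time to embed a single search string.
  async embedQuery(text: string): Promise<number[]> {
    const [embedding] = await this.embedDocuments([text]);
    return embedding;
  }
}
```

A class like this can be dropped into any LangChain retrieval chain in place of `OpenAIEmbeddings`; contributing it to Flowise would additionally mean wrapping it in a node definition so it shows up in the UI.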
Hey, thanks for the kind words! For local LLMs you can use LocalAI; see how to use it with Flowise in #123. For embeddings, do you have an inference API endpoint, like OpenAI's, that you can just call?
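To make that concrete: LocalAI exposes an OpenAI-compatible REST API, so an OpenAI-style embeddings request can be pointed at the local server just by swapping the base URL. A sketch of such a call; the port and the `bert-embeddings` model name are assumptions that depend on how the LocalAI instance is configured:

```typescript
// Base URL of a locally running, OpenAI-compatible server (e.g. LocalAI).
// Port and model name below are configuration-dependent assumptions.
const BASE_URL = "http://localhost:8080/v1";

async function embed(input: string[]): Promise<number[][]> {
  const res = await fetch(`${BASE_URL}/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "bert-embeddings", input }),
  });
  if (!res.ok) throw new Error(`Server returned ${res.status}`);
  // OpenAI-compatible response: { data: [{ embedding: number[] }, ...] }
  const data = await res.json();
  return data.data.map((d: { embedding: number[] }) => d.embedding);
}

// Example: embed one chunk and inspect the vector dimensionality.
embed(["hello world"]).then((vecs) => console.log(vecs[0].length));
```

Because the wire format matches OpenAI's, existing OpenAI clients generally work unchanged against such a server apart from the base URL (and a dummy API key if the client insists on one).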