How to use Open Source LLM & Embedding models #10129
-
🤖 Hi there,

Yes, you can use custom embeddings within the LangChain program itself using a local LLM instance. This can be done by using the `LocalAIEmbeddings` class. Here is an example of how you can use it:

```python
from langchain.embeddings import LocalAIEmbeddings

openai = LocalAIEmbeddings(
    openai_api_key="random-key",
    openai_api_base="http://localhost:8080"
)
```

In this example, a `LocalAIEmbeddings` instance is created that points at a LocalAI server on `http://localhost:8080`; the API key is just a placeholder, since the local server does not validate it.

Additionally, the LangChain framework does support the use of custom embeddings. In addition to the `LocalAIEmbeddings` class, you can use llama.cpp embeddings. To use them, you need to instantiate the `LlamaCppEmbeddings` class with the path to your model. Here is an example of how to use a llama.cpp embeddings model:

```python
from langchain.embeddings import LlamaCppEmbeddings

# Instantiate the LlamaCppEmbeddings class with your model path
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")

# Use the embed_documents method to get embeddings for a list of documents
embeddings = llama.embed_documents(
    [
        "This is the first document",
        "This is the second document",
    ]
)
```

In this example, replace `/path/to/model.bin` with the actual path to your local model file.

I hope this helps! Let me know if you have any other questions.
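The same embeddings object also exposes an `embed_query` method for embedding a single string, which is what vector stores call at query time. A minimal sketch, assuming the `llama` instance from the example above (the query string is illustrative):

```python
# Embed a single query string; returns one vector as a list of floats
query_embedding = llama.embed_query("What is the first document about?")
print(len(query_embedding))  # dimensionality of the embedding vector
```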
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
For some reason, `LocalAIEmbeddings` still requires an OpenAI key. I got this to work using:

```python
loader = CSVLoader(file_path=file)  # or another loader
```
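A minimal sketch of how a loader like this can feed a fully local pipeline, using the `LlamaCppEmbeddings` class from the reply above; the file path, model path, and splitter settings here are illustrative assumptions, not the commenter's actual configuration:

```python
from langchain.document_loaders import CSVLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Hypothetical paths -- substitute your own CSV file and local model
loader = CSVLoader(file_path="data.csv")
docs = loader.load()

# Split the documents into chunks before embedding
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
all_splits = splitter.split_documents(docs)

# Embeddings are computed in-process by llama.cpp -- no API key needed
embedding = LlamaCppEmbeddings(model_path="/path/to/model.bin")
vectorstore = Chroma.from_documents(documents=all_splits, embedding=embedding)
```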
-
Hi, I am setting up a local LLM instance for question answering. The program stores the embeddings in a vector store.
My question is: can I use custom embeddings within the program itself?
In stage 1, I ran it with OpenAI embeddings and it worked successfully.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embedding = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=all_splits, embedding=embedding)
```
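A quick way to sanity-check the populated store, assuming the `vectorstore` from above (a hedged sketch; the query string is illustrative):

```python
# Retrieve the chunks most similar to a query
results = vectorstore.similarity_search("What does the document cover?", k=4)
for doc in results:
    print(doc.page_content[:80])
```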
In stage 2, I wanted to remove the dependency on OpenAI and use the local LLM with custom embeddings instead.
I tried using `embeddings.self_hosted.SelfHostedEmbeddings`, but the problem is that it requires me to run the embedding model remotely.
Is there a way to create the embeddings within the program itself?
So, I want something like this:
```python
embedding = SomeOtherEmbedding()
vectorstore = Chroma.from_documents(documents=all_splits, embedding=embedding)
```
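One in-process option that fits this pattern is the `LlamaCppEmbeddings` class described in the reply above; a minimal sketch, with the model path as a placeholder:

```python
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Embeddings run inside the program via llama.cpp; no remote service needed
embedding = LlamaCppEmbeddings(model_path="/path/to/model.bin")
vectorstore = Chroma.from_documents(documents=all_splits, embedding=embedding)
```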