Full Multimodal Support for Embedding Models and Vector Stores #31672
jakubbober announced in Ideas
Feature request
Allow the use of multimodal embedding models such as Cohere Embed 4 together with vector stores such as PGVector.
Motivation
When Cohere's Embed 4 multimodal embeddings came out, multimodal RAG / Vision RAG became the go-to solution for many products. However, according to this page in the LangChain documentation, multimodal support exists only for chat models. The time has come to implement it across the whole stack.
I love the simplicity of embedding documents and adding them to PGVector with LangChain (also using LlamaIndex's PDFReader in this particular example), and of fetching documents back for RAG in the same style; both flows are sketched below.
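Here is a minimal sketch of the indexing flow, assuming the langchain_postgres PGVector integration, CohereEmbeddings from langchain-cohere, and a placeholder Postgres connection string; the model name, collection name, and file path are illustrative, and exact parameter names vary between langchain_postgres and the older langchain_community integration:

```python
from pathlib import Path

from langchain_cohere import CohereEmbeddings
from langchain_core.documents import Document
from langchain_postgres import PGVector
from llama_index.readers.file import PDFReader

# Load PDF pages with LlamaIndex's PDFReader and convert them to
# LangChain Documents (text only -- any images on the pages are dropped).
pages = PDFReader().load_data(file=Path("report.pdf"))
docs = [Document(page_content=p.text, metadata=p.metadata) for p in pages]

embeddings = CohereEmbeddings(model="embed-english-v3.0")  # text-only model

# Embed the documents and store them in PGVector in one call.
store = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name="reports",
    connection="postgresql+psycopg://user:pass@localhost:5432/ragdb",
)
```

And a matching sketch of the retrieval side, reopening the same collection and fetching the top matches for a query:

```python
store = PGVector(
    embeddings=embeddings,
    collection_name="reports",
    connection="postgresql+psycopg://user:pass@localhost:5432/ragdb",
)

retrieved = store.similarity_search("What were the Q3 revenues?", k=4)
for doc in retrieved:
    print(doc.page_content[:200])
```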
I'd really love to have the option to use multimodal embeddings like Cohere's Embed 4 in a similar simplistic fashion.
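For concreteness, here is a purely hypothetical sketch of the kind of interface this request implies; nothing multimodal below exists in LangChain today, add_images is an invented method, and the Embed 4 model identifier is an assumption:

```python
# HYPOTHETICAL -- illustrates the requested feature, not a real API.
from langchain_cohere import CohereEmbeddings
from langchain_postgres import PGVector

# Assumed: CohereEmbeddings accepting a multimodal model such as Embed 4.
embeddings = CohereEmbeddings(model="embed-v4.0")  # model name assumed

store = PGVector(
    embeddings=embeddings,
    collection_name="reports",
    connection="postgresql+psycopg://user:pass@localhost:5432/ragdb",
)

# Invented method: index page images directly, alongside text documents.
store.add_images(["page_1.png", "page_2.png"], metadatas=[{"page": 1}, {"page": 2}])

# Retrieval would stay unchanged: a text query scored against image embeddings.
hits = store.similarity_search("chart of quarterly revenue", k=4)
```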
Proposal (If applicable)
No response