```text
ERROR: Embedding func: Error in decorated function for task 139862223312224_3071.870411049: Embedding dimension mismatch detected: total elements (384) cannot be evenly divided by expected dimension (1024).
ERROR: Traceback (most recent call last):
  File "/home/ubuntu/MAF/LightRAG/lightrag/lightrag.py", line 1895, in process_document
    await asyncio.gather(*first_stage_tasks)
  File "/home/ubuntu/MAF/LightRAG/lightrag/kg/nano_vector_db_impl.py", line 124, in upsert
    embeddings_list = await asyncio.gather(*embedding_tasks)
  File "/home/ubuntu/MAF/LightRAG/lightrag/utils.py", line 503, in call
    result = await self.func(*args, **kwargs)
  File "/home/ubuntu/MAF/LightRAG/lightrag/utils.py", line 1016, in wait_func
    return await future
  File "/home/ubuntu/MAF/LightRAG/lightrag/utils.py", line 720, in worker
    result = await asyncio.wait_for(
  File "/opt/conda/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/home/ubuntu/MAF/LightRAG/lightrag/utils.py", line 511, in call
    raise ValueError(
ValueError: Embedding dimension mismatch detected: total elements (384) cannot be evenly divided by expected dimension (1024).
```
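If I'm reading the error right, the check that's tripping is a divisibility test: the flattened embedding output has to split evenly into vectors of the declared dimension. Here's my own reconstruction of that logic (the function name and shape are mine, not the actual LightRAG source), which reproduces the message for a 384-element result checked against 1024:

```python
def check_embedding_dim(flat_elements: int, expected_dim: int) -> int:
    """Return the number of vectors, or raise if the flattened embedding
    output cannot be split evenly into vectors of the expected dimension."""
    if flat_elements % expected_dim != 0:
        raise ValueError(
            f"Embedding dimension mismatch detected: total elements "
            f"({flat_elements}) cannot be evenly divided by expected "
            f"dimension ({expected_dim})."
        )
    return flat_elements // expected_dim

# One granite-embedding vector (384 elements) against a declared 1024:
try:
    check_embedding_dim(384, 1024)
except ValueError as e:
    print(e)  # same message as the traceback above

# One bge vector (1024 elements) against 1024 passes:
print(check_embedding_dim(1024, 1024))
```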
I fired up LightRAG with Ollama and some Granite models (granite-embedding:latest and granite4:latest), using some code from https://alain-airom.medium.com/hands-on-experience-with-lightrag-6ecbd3499660 to test it out. The code consistently failed on the embeddings with the error and traceback shown above.
I set the logging mode to debug and noticed something odd (snippet below):
The Granite embedding model has an embedding length of 384. After a bit of digging I found utils.py, where EmbeddingFunc lives, and enlisted Claude's help; it suggested the following (I'm not familiar enough with the code, or a good enough developer, to comment):
O.K., so I thought I would just use the bge model with its 1024 embedding dimensions instead, and that worked. In short, it seems that only embedding models with 1024 dimensions are supported, or there's possibly a bug. Am I missing something?
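My working theory, for what it's worth: the 1024 isn't hard-coded support for bge so much as a declared dimension that has to match whatever the model actually returns. Here's a minimal sketch of that idea with a stubbed-out embedding call (the stub, constant, and names are all mine, purely illustrative, not LightRAG's API):

```python
# granite-embedding:latest produces 384-dimensional vectors.
GRANITE_DIM = 384

def stub_granite_embed(texts):
    """Stand-in for a real Ollama embedding call; real code would hit the
    Ollama API here. Returns one 384-dim vector per input text."""
    return [[0.0] * GRANITE_DIM for _ in texts]

# If the declared dimension matches the model (384, not 1024),
# the divisibility check from the traceback is satisfied:
vectors = stub_granite_embed(["chunk one", "chunk two"])
total_elements = sum(len(v) for v in vectors)
assert total_elements % GRANITE_DIM == 0  # 768 % 384 == 0, no mismatch
```

So if that's right, declaring 384 wherever the embedding dimension is configured should let Granite work, but I'd appreciate confirmation from someone who knows the code.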
Thanks a bunch.
Cheers - Steve