redis loading issue #10956
-
🤖 Hey there, @rahulkrprajapati! Great to see you back. Hope you're doing well!

Based on the code you've shared, it seems like you're initializing a new, empty index on every run:

```python
self.base_index = VectorStoreIndex(
    nodes=[],
    storage_context=self.storage_context,
)
```

If you want to avoid creating a new index every time, you should first try to load the existing index from storage:

```python
try:
    # Load all indices
    indices = load_indices_from_storage(self.storage_context)
    # Print out the index_ids of all loaded indices
    for index in indices:
        print(index.index_id)
    self.base_index = load_index_from_storage(self.storage_context)
    print("[INFO] Index found at storage")
except ValueError as e:
    print("[INFO] No index found at storage")
    self.base_index = VectorStoreIndex(
        nodes=[],
        storage_context=self.storage_context,
    )
```

In this modified code, `load_index_from_storage` is tried first, and a new empty `VectorStoreIndex` is only created when no index exists in storage.

Please note that this is just a suggestion based on the code and information you've provided. If this doesn't solve your issue or if there are other factors at play, please provide more details so I can give a more accurate solution. For more information about how indices are loaded from storage, you can refer to the test_loading.py file in the LlamaIndex repository.
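If the index store accumulates multiple indices (which is what happens when a new index is created on every run), you can also pin a stable index ID and load by it. This is a minimal sketch, not from the original reply; the ID "base_index" is an arbitrary label:

```python
# First run: give the freshly built index a stable, well-known ID
self.base_index.set_index_id("base_index")

# Later runs: load that specific index instead of whichever was stored last
self.base_index = load_index_from_storage(
    self.storage_context, index_id="base_index"
)
```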
-
Every time I call this function after reloading the index from the vector store, my retrieved nodes are empty:

```python
def process_fetch_query_results(
    self, query="", similarity_top_k_reranker=3
):
    try:
        self.base_retriever = self.base_index.as_retriever(
            similarity_top_k=self.similarity_top_k
        )
        self.retriever = AutoMergingRetriever(
            self.base_retriever, self.storage_context, verbose=True
        )
        self.postprocessor = SentenceTransformerRerank(
            model="cross-encoder/ms-marco-MiniLM-L-2-v2",
            top_n=similarity_top_k_reranker,
        )
        query_bundle = QueryBundle(query_str=query)
        print("*****************", query)
        retrieved_nodes = self.retriever.retrieve(query_bundle)
        print("*************", retrieved_nodes)
        reranked_nodes = self.postprocessor.postprocess_nodes(
            nodes=retrieved_nodes, query_bundle=query_bundle
        )
        return reranked_nodes
    except Exception as e:
        raise Exception(f"An error occurred while retrieving: {e}")
```
-
🤖 Based on the code you've shared and the context provided, it seems like the retriever is returning an empty list, which usually means the loaded index has no nodes associated with it or the underlying docstore is empty.

You can verify these possibilities by adding some debug prints or logs in the retrieval code (see the sketch below). Regarding the `AutoMergingRetriever`, keep in mind that it needs the docstore in the storage context to contain the parent nodes in order to merge leaf results.

If you're still facing the issue, could you please provide more details about the state of your vector store and storage context?
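As a quick way to run those checks, here is a small debugging sketch. It is not from the original reply; it assumes the same attribute names as the snippets above:

```python
# Sanity-check what was actually loaded from Redis
print("index_id:", self.base_index.index_id)
print("docs in docstore:", len(self.storage_context.docstore.docs))
print("index structs in index store:",
      len(self.storage_context.index_store.index_structs()))
```

If the docstore count is zero, the nodes were never persisted there, and both the base retriever and the `AutoMergingRetriever` will come back empty.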
-
Write down a complete storage context using just the RedisDocumentStore and RedisIndexStore. Also use RedisVectorStore if supported, or try to store the vectors in Redis. Load the nodes using load_index_from_storage and then retrieve them using AutoMergingRetriever.
-
🤖 Sure, I can provide a detailed guide on how to create a complete storage context using RedisDocumentStore, RedisIndexStore, and RedisVectorStore, load the nodes using `load_index_from_storage`, and then retrieve them using `AutoMergingRetriever`.

First, you need to create instances of RedisDocumentStore, RedisIndexStore, and RedisVectorStore. Here's how you can do it:

```python
from llama_index.storage.docstore.redis import RedisDocumentStore
from llama_index.storage.index_store.redis import RedisIndexStore
from llama_index.vector_stores.redis import RedisVectorStore

# Create instances backed by a local Redis server
doc_store = RedisDocumentStore.from_host_and_port(host="localhost", port=6379)
index_store = RedisIndexStore.from_host_and_port(host="localhost", port=6379)
vector_store = RedisVectorStore(
    index_name="my_index",
    index_prefix="llama_index",
    index_args={"algorithm": "HNSW", "m": 16, "ef_construction": 200, "distance_metric": "cosine"},
    redis_url="redis://localhost:6379/",
    overwrite=True,
)
```

Next, you can load the nodes into the RedisVectorStore using the `add` method:

```python
from llama_index.core.schema import TextNode

# Create some nodes (BaseNode is abstract, so use a concrete node type
# such as TextNode; the 3-dim embeddings are toy values for illustration)
nodes = [
    TextNode(id_="1", text="first chunk", embedding=[0.1, 0.2, 0.3]),
    TextNode(id_="2", text="second chunk", embedding=[0.4, 0.5, 0.6]),
]

# Add nodes to the vector store
vector_store.add(nodes)
```

Now, you can load the index using the `load_index_from_storage` function:

```python
from llama_index.core import load_index_from_storage, StorageContext

# Create a storage context from the Redis-backed stores
storage_context = StorageContext.from_defaults(
    docstore=doc_store, index_store=index_store, vector_store=vector_store
)

# Load the index
index = load_index_from_storage(storage_context)
```

Finally, you can retrieve the nodes using `AutoMergingRetriever` (a sketch follows below).

Please note that the above code snippets are based on the provided context and the available documentation. If you encounter any issues or need further assistance, feel free to ask.
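Here is a minimal sketch of that last retrieval step. It is not from the original reply; it assumes the `index` and `storage_context` objects created above and an arbitrary example query:

```python
from llama_index.core.retrievers import AutoMergingRetriever

# Base retriever over the loaded vector index
base_retriever = index.as_retriever(similarity_top_k=6)

# AutoMergingRetriever looks up parent nodes in the docstore of the
# storage context and merges leaf hits back into their parents
retriever = AutoMergingRetriever(base_retriever, storage_context, verbose=True)

retrieved_nodes = retriever.retrieve("What does the first chunk say?")
for node in retrieved_nodes:
    print(node.node_id, node.score)
```

Note that merging only works if the docstore actually contains a node hierarchy (for example, one produced by `HierarchicalNodeParser`), so make sure the nodes were added to the docstore as well as the vector store.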
-
My code seems to work fine the first time, but it fails on every subsequent run; it seems to add a new index every time I run it.