-
Hello, I am following the above tutorial and am getting the below error.
Thanks!
-
Hey @muz-sup! I'm here to help you with any issues you're facing. I'm a bot that can assist with bug fixes, answer questions, and guide you on how to contribute. Let's work together to solve this problem! To resolve the 'results' key error in the response JSON while running the provided code from the tutorial "Using LlamaIndex and llamafile to build a local, private research assistant," you need to ensure that the response JSON structure contains the 'results' key. This key is likely expected by the code but is missing in the actual response. Here is a possible way to handle this issue by checking for the 'results' key and providing a default value if it is missing:

```python
import json

# Assuming `response` is the JSON response from the API
response = {
    # Example response structure
    # "results": [...]
}

# Check if 'results' key exists in the response
if 'results' not in response:
    response['results'] = []  # Provide a default empty list if 'results' key is missing

# Now you can safely access the 'results' key
results = response['results']

# Process the results as needed
for result in results:
    # Your processing code here
    pass
```

This code snippet ensures that the 'results' key is always present in the response JSON, preventing the key error from occurring. Additionally, if you are using the `LlamafileEmbedding` class, note that its `_get_text_embeddings` method reads the 'results' key directly from the server response:

```python
def _get_text_embeddings(self, texts: List[str]) -> List[Embedding]:
    request_body = {
        "content": texts,
    }
    with httpx.Client(timeout=Timeout(self.request_timeout)) as client:
        response = client.post(
            url=f"{self.base_url}/embedding",
            headers={"Content-Type": "application/json"},
            json=request_body,
        )
        response.encoding = "utf-8"
        response.raise_for_status()
        return [output["embedding"] for output in response.json()["results"]]
```

Ensure that your server's response includes the 'results' key with the appropriate structure to avoid such errors [1][2].
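To see which top-level keys your llamafile server actually returns, you can decode the response yourself before LlamaIndex touches it. The sketch below is a hypothetical helper (not part of LlamaIndex) that normalizes either shape observed in this thread, a top-level 'results' list or a bare 'embedding', into a list of embedding vectors; the commented-out httpx call shows where a live response would come from, assuming the server is listening on http://localhost:8080.

```python
from typing import Any, Dict, List


def extract_embeddings(response_json: Dict[str, Any]) -> List[List[float]]:
    """Return embedding vectors from either response shape.

    Handles both the {'results': [{'embedding': [...]}, ...]} shape that
    LlamafileEmbedding expects and the bare {'embedding': [...]} shape
    seen in the error output in this thread.
    """
    if "results" in response_json:
        return [item["embedding"] for item in response_json["results"]]
    if "embedding" in response_json:
        return [response_json["embedding"]]
    raise KeyError(f"Unexpected keys in response: {sorted(response_json)}")


# To inspect a live server (assumes llamafile is listening on localhost:8080):
# import httpx
# resp = httpx.post("http://localhost:8080/embedding",
#                   json={"content": ["hello"]},
#                   headers={"Content-Type": "application/json"})
# print(sorted(resp.json().keys()))  # shows which top-level keys you really get

# Offline check against both shapes seen in this thread:
print(extract_embeddings({"results": [{"embedding": [0.1, 0.2]}]}))  # [[0.1, 0.2]]
print(extract_embeddings({"embedding": [0.3, 0.4]}))                 # [[0.3, 0.4]]
```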
-
Thanks for the response @dosu. Could you please give me guidance as to "Ensure that your server's response includes the 'results' key with the appropriate structure to avoid such errors"? Where/how would I check if the server's response has the 'results' key? This is the script I am using (slightly modified from the tutorial script mentioned above).
Thanks!
-
There is no 'results' key in the response:

```
Loading files: 100%|███████████████████████| 2/2 [00:00<00:00, 220.12file/s]
Parsing nodes: 100%|███████████████████████| 2/2 [00:00<00:00, 27.41it/s]
Generating embeddings: 100%|██████████████▍| 210/211 [04:57<00:01, 1.42s/it]
Error: 'results' key not found in the response JSON
Response JSON: {'embedding': [0.010298442095518112, -0.008788006380200386, 0.002432317240163684, -0.025140391662716866, 0.012892053462564945,
-0.022154325619339943, 0.012162705883383751, -0.0035074613988399506, -0.010162748396396637, 0.01815889962017536, -0.0013415934517979622,
0.011894568800926208, 0.012215185910463333, -0.0411808118224144, -0.015340336598455906, 0.02035493217408657, 0.027807263657450676,
-0.0002215437707491219, 0.01179485209286213, -0.01389545388519764, 0.019198661670088768, -0.009360240772366524, -0.005647893995046616,
...
-0.022155126556754112, 0.002537184627726674, 0.00039988855132833123, -0.0006991124828346074, 0.015385596081614494, 0.01416072715073824,
0.012316341511905193, 0.01371043361723423, -0.20243974030017853, 0.007901657372713089, -0.0035857653710991144]}
Generating embeddings: 100%|███████████████| 211/211 [04:58<00:00, 1.42s/it]
Traceback (most recent call last):
  File "/home/xyz/llms/llamafile-llamaindex-tut-with-debugger.py", line 55, in <module>
    index = VectorStoreIndex.from_documents(
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/base.py", line 145, in from_documents
    return cls(
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 75, in __init__
    super().__init__(
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/base.py", line 94, in __init__
    index_struct = self.build_index_from_nodes(
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 308, in build_index_from_nodes
    return self._build_index_from_nodes(nodes, **insert_kwargs)
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 280, in _build_index_from_nodes
    self._add_nodes_to_index(
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 233, in _add_nodes_to_index
    nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
  File "/home/xyz/.local/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 147, in _get_node_with_embedding
    embedding = id_to_embed_map[node.node_id]
KeyError: '1867ed31-8a23-416e-9564-f53ab1eda0a9'
```
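For context on why the failure surfaces as a `KeyError` on a node id rather than as the original 'results' error: here is a simplified illustration (not LlamaIndex's actual code) of the pattern in `_get_node_with_embedding`, where every node id must have an entry in an id-to-embedding map, and an upstream embedding failure leaves gaps in that map.

```python
# Simplified illustration (not LlamaIndex's actual code): every node id must
# have an entry in the id -> embedding map; an upstream 'results' failure
# leaves gaps, which later surface as KeyError on the node id.

def find_missing_embeddings(node_ids, id_to_embed_map):
    """Return the node ids that never received an embedding."""
    return [nid for nid in node_ids if nid not in id_to_embed_map]


node_ids = ["node-a", "node-b"]
# Suppose the embedding call for "node-b" produced nothing because the
# expected 'results' key was missing upstream:
id_to_embed_map = {"node-a": [0.1, 0.2]}

print(find_missing_embeddings(node_ids, id_to_embed_map))  # ['node-b']
```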
-
Thank you @dosu
-
This script worked for me:

```python
from typing import Optional, List

import httpx
from httpx import Timeout

from llama_index.core.base.embeddings.base import BaseEmbedding, Embedding
from llama_index.embeddings.llamafile import LlamafileEmbedding
from llama_index.core import Settings
from llama_index.llms.llamafile import Llamafile
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.readers.web import SimpleWebPageReader


class CustomLlamafileEmbedding(LlamafileEmbedding):
    def _get_text_embeddings(self, texts: List[str]) -> List[Embedding]:
        request_body = {
            "content": texts,
        }
        with httpx.Client(timeout=Timeout(self.request_timeout)) as client:
            response = client.post(
                url=f"{self.base_url}/embedding",
                headers={"Content-Type": "application/json"},
                json=request_body,
            )
            response.encoding = "utf-8"
            response.raise_for_status()
            response_json = response.json()
            if 'results' not in response_json:
                # Transform the response to include the 'results' key
                response_json['results'] = [{'embedding': response_json.pop('embedding')}]
            return [output["embedding"] for output in response_json["results"]]


# Configure LlamaIndex with the custom embedding class
Settings.embed_model = CustomLlamafileEmbedding(base_url="http://localhost:8080")
Settings.llm = Llamafile(
    base_url="http://localhost:8080",
    temperature=0,
    seed=0
)
Settings.transformations = [
    SentenceSplitter(
        chunk_size=256,
        chunk_overlap=5
    )
]

# Load local data
local_doc_reader = SimpleDirectoryReader(input_dir='~/llms/data')
docs = local_doc_reader.load_data(show_progress=True)

# We'll load some Wikipedia pages as well
urls = [
    'https://en.wikipedia.org/wiki/Homing_pigeon',
    'https://en.wikipedia.org/wiki/Magnetoreception',
]
web_reader = SimpleWebPageReader(html_to_text=True)
docs.extend(web_reader.load_data(urls))

# Build the index
index = VectorStoreIndex.from_documents(
    docs,
    show_progress=True,
)

# Save the index
index.storage_context.persist(persist_dir="~/llms/storage")

query_engine = index.as_query_engine()
print(query_engine.query("What were homing pigeons used for?"))
print(query_engine.query("When were homing pigeons first used?"))
```
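The essential fix in the script above is the response rewrite inside `_get_text_embeddings`. Isolated from the HTTP call, that transformation can be sanity-checked on its own; this is a sketch whose dict shapes mirror the responses seen earlier in this thread, with `normalize_response` being an illustrative helper name rather than anything from LlamaIndex.

```python
def normalize_response(response_json: dict) -> dict:
    """Wrap a bare {'embedding': [...]} response into the
    {'results': [{'embedding': [...]}]} shape that LlamafileEmbedding reads."""
    if "results" not in response_json:
        response_json["results"] = [{"embedding": response_json.pop("embedding")}]
    return response_json


# The bare shape observed in the error output earlier in this thread:
raw = {"embedding": [0.01, -0.02, 0.03]}
normalized = normalize_response(raw)

# Same list comprehension the embedding class uses on the 'results' key:
print([out["embedding"] for out in normalized["results"]])  # [[0.01, -0.02, 0.03]]
```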
The issue arises because the server's response JSON contains an 'embedding' key instead of the expected 'results' key. To resolve this, transform the response to include the 'results' key with the appropriate structure by overriding the `_get_text_embeddings` method of the `LlamafileEmbedding` class, as done in the `CustomLlamafileEmbedding` subclass in the script above.