Replies: 3 comments
-
🤖 Hello! The error you're encountering has been seen before and was resolved by upgrading the llama-cpp-python package: `pip install --upgrade llama-cpp-python`. After upgrading, try running your code again. If the issue persists, please share the versions of the packages you're using, especially llama-cpp-python and llama-index.
I hope this helps! Let me know if you have any other questions or if the issue persists after trying the suggested solution.
-
@zzadiues make sure you update your llama-cpp-python version
-
Thanks @logan-markewich, it worked!
-
Question
I loaded a GGUF LLM:
llm = LlamaCPP(
    model_url="https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.Q5_K_M.gguf",
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
)
Then, when defining the service context:
service_context = ServiceContext.from_defaults(
    chunk_size=100,
    llm=llm,
    embed_model="local",
    callback_manager=callback_manager,
)
it threw this error:
AttributeError Traceback (most recent call last)
Cell In[10], line 2
1 #service_context = ServiceContext.from_defaults(chunk_size=128, llm=llm)
----> 2 service_context = ServiceContext.from_defaults(chunk_size=100, llm=llm,embed_model="local",
3 callback_manager=callback_manager)
File ~/anaconda3/envs/llmproj/lib/python3.10/site-packages/llama_index/indices/service_context.py:167, in ServiceContext.from_defaults(cls, llm_predictor, llm, prompt_helper, embed_model, node_parser, llama_logger, callback_manager, system_prompt, query_wrapper_prompt, chunk_size, chunk_overlap, context_window, num_output, chunk_size_limit)
163 embed_model = resolve_embed_model(embed_model)
164 embed_model.callback_manager = callback_manager
166 prompt_helper = prompt_helper or _get_default_prompt_helper(
--> 167 llm_metadata=llm_predictor.metadata,
168 context_window=context_window,
169 num_output=num_output,
170 )
172 node_parser = node_parser or _get_default_node_parser(
173 chunk_size=chunk_size,
174 chunk_overlap=chunk_overlap,
175 callback_manager=callback_manager,
176 )
178 llama_logger = llama_logger or LlamaLogger()
File ~/anaconda3/envs/llmproj/lib/python3.10/site-packages/llama_index/llm_predictor/base.py:115, in LLMPredictor.metadata(self)
112 @property
113 def metadata(self) -> LLMMetadata:
114 """Get LLM metadata."""
--> 115 return self._llm.metadata
File ~/anaconda3/envs/llmproj/lib/python3.10/site-packages/llama_index/llms/llama_cpp.py:154, in LlamaCPP.metadata(self)
150 @property
151 def metadata(self) -> LLMMetadata:
152 """LLM metadata."""
153 return LLMMetadata(
--> 154 context_window=self._model.context_params.n_ctx,
155 num_output=self.max_new_tokens,
156 model_name=self.model_path,
157 )
AttributeError: 'Llama' object has no attribute 'context_params'
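For context, the traceback shows LlamaIndex reading self._model.context_params.n_ctx, an attribute that only newer llama-cpp-python builds expose; older builds kept the context size elsewhere. A minimal sketch of that mismatch, using stand-in classes rather than the real llama_cpp API:

```python
# Stand-in classes illustrating the attribute mismatch; these are NOT the
# real llama_cpp classes, just a sketch of why the AttributeError appears.

class OldLlama:
    """Older-style API: no context_params attribute."""
    def n_ctx(self) -> int:
        return 4096

class ContextParams:
    n_ctx = 4096

class NewLlama:
    """Newer-style API: exposes a context_params object."""
    context_params = ContextParams()

def read_context_window(model) -> int:
    # Mimics the access pattern in the traceback: model.context_params.n_ctx
    return model.context_params.n_ctx

print(read_context_window(NewLlama()))  # succeeds on the newer-style object
try:
    read_context_window(OldLlama())
except AttributeError as e:
    print(e)  # same AttributeError shape as in the traceback above
```

This is why upgrading llama-cpp-python resolves the error: the installed version then matches the attribute layout the installed llama_index expects.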