Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
Hello, I am trying to compile and install llama-cpp-python with CUDA support on Windows 11, following the guide in #1963. The build and installation complete without errors, and I expect the package to import cleanly afterwards; instead, importing it fails with the error described below.
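For reference, this is what I expect to work after a successful CUDA build. A minimal sketch: llama_supports_gpu_offload is part of the llama.cpp C API that the bindings expose, used here only to illustrate the expected outcome:

```python
import llama_cpp

# Expected on a working CUDA-enabled build: the import succeeds and the
# bindings report that GPU offload is available.
print(llama_cpp.__version__)
print(llama_cpp.llama_supports_gpu_offload())  # True for a CUDA build
```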
Current Behavior
When importing llama_cpp in a Python script, I get the following error:

```
AttributeError: function 'llama_get_kv_self' not found. Did you mean: 'llama_get_model'?
```

It is thrown as soon as the module is imported:

```python
import llama_cpp
```
Environment and Context
I am on Windows 11 with a Ryzen 5700X and an RTX 4070. I am installing llama-cpp-python into a venv based on Python 3.11.9, and I have also tried Python 3.13.xx, as well as installing directly into the system Python without a venv.
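In case exact version strings help, the details above can be captured with this short standard-library snippet (nothing in it is specific to llama-cpp-python):

```python
import platform
import sys

print(sys.version)           # interpreter version, e.g. the 3.11.9 build string
print(platform.platform())   # OS identification, e.g. Windows-11-...
print(platform.processor())  # CPU description as reported by the OS
```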
Failure Information (for bugs)
```
Traceback (most recent call last):
  File ".\testLlamaCpp.py", line 1, in <module>
    import llama_cpp
  File "..venv\Lib\site-packages\llama_cpp\__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "..venv\Lib\site-packages\llama_cpp\llama_cpp.py", line 1408, in <module>
    @ctypes_function(
    ^^^^^^^^^^^^^^^^
  File "..venv\Lib\site-packages\llama_cpp\_ctypes_extensions.py", line 113, in decorator
    func = getattr(lib, name)
           ^^^^^^^^^^^^^^^^^^
  File "\AppData\Local\Programs\Python\Python311\Lib\ctypes\__init__.py", line 389, in __getattr__
    func = self.__getitem__(name)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "\AppData\Local\Programs\Python\Python311\Lib\ctypes\__init__.py", line 394, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: function 'llama_get_kv_self' not found. Did you mean: 'llama_get_model'?
```
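If it helps triage: a minimal sketch for probing the bundled DLL directly with ctypes, to confirm the export is genuinely absent rather than the wrong library being loaded. The DLL path is an assumption based on where llama-cpp-python typically places its compiled library on Windows; adjust it to your install:

```python
import ctypes

# Assumed location of the compiled library inside the venv's site-packages;
# adjust this path for your environment.
DLL_PATH = r".venv\Lib\site-packages\llama_cpp\lib\llama.dll"

lib = ctypes.CDLL(DLL_PATH)
for name in ("llama_get_kv_self", "llama_get_model"):
    # hasattr performs the same symbol lookup that fails during import,
    # but returns False instead of raising when the export is missing.
    print(name, "->", "found" if hasattr(lib, name) else "missing")
```

Given the traceback, this should report llama_get_kv_self as missing while llama_get_model is found.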
Steps to Reproduce
1. Compile and install llama-cpp-python on Windows with CUDA, following the guide mentioned above (#1963).
2. Run a Python script containing only import llama_cpp (the one-line script below is sufficient).
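For completeness, the entire reproduction script is the single import; this is the testLlamaCpp.py from the traceback above:

```python
# testLlamaCpp.py
# Importing the package alone reproduces the AttributeError: llama_cpp binds
# the native functions through ctypes at import time, so the missing
# 'llama_get_kv_self' symbol is looked up before any user code runs.
import llama_cpp
```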