Replies: 1 comment
-
You can see what is cached with |
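The command in the reply above is truncated. As a hedged sketch, PDM ships a `pdm cache` command group for inspecting its cache (assuming PDM 2.x; check `pdm cache --help` on your version):

```shell
# Show where PDM's cache lives and how large it is
pdm cache info

# List the wheels PDM has cached (these get reused on reinstall)
pdm cache list
```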
-
I have installed
llama-cpp-python
but I forgot to build it with CUDA. So I am trying to reinstall with CUDA support like this: CMAKE_ARGS="-DGGML_CUDA=on" pdm add llama-cpp-python
The problem is that PDM just reinstalls the already-built wheel instead of rebuilding it with GPU support.
I tried removing the .venv folder, but it still installs the already-built one.
How do I force a rebuild with the environment variable set?
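The likely cause is that PDM (and pip underneath it) reuses the wheel it already built and cached, so the `CMAKE_ARGS` environment variable never reaches a fresh build. A hedged sketch of forcing a source rebuild (assumes PDM 2.x; `FORCE_CMAKE=1` and `CMAKE_ARGS` are the build variables documented by llama-cpp-python):

```shell
# 1) Clear PDM's cache so the previously built CPU-only wheel is not reused
pdm cache clear

# 2) Re-add the package with CUDA enabled; FORCE_CMAKE=1 forces a CMake build
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pdm add llama-cpp-python
```

If PDM still picks up a cached wheel, a fallback is to install directly with pip inside the project's virtualenv, where `--force-reinstall --no-cache-dir` reliably bypasses pip's own wheel cache:

```shell
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 \
    pip install --force-reinstall --no-cache-dir llama-cpp-python
```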