How to make localGPT use the local model? #335
Unanswered
50ZAIofficial asked this question in Q&A
Replies: 1 comment 1 reply
-
I modified run_localGPT.py > load_model > …
1 reply
-
Some HuggingFace models I use do not have a GGML version. I downloaded the model and converted it to model-ggml-q4.bin with llama.cpp, but I cannot load it through model_id and model_basename. Am I limited to models hosted on the HuggingFace Hub?
Looking forward to your reply, thank you!
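One possible workaround, assuming load_model in run_localGPT.py resolves model_basename by downloading it from the Hugging Face Hub: add a check for a local file before falling back to the Hub. The helper name resolve_model_path below is hypothetical, not part of localGPT; it is a sketch of the idea, not the project's actual API.

```python
import os


def resolve_model_path(model_id: str, model_basename: str) -> str:
    """Return a filesystem path to the GGML weights.

    If model_basename is already an existing local file (for example,
    model-ggml-q4.bin produced by llama.cpp's conversion script), use
    it directly. Otherwise fall back to downloading model_basename
    from the Hugging Face Hub repo given by model_id, which is what
    load_model() would otherwise do.
    """
    if os.path.isfile(model_basename):
        # Local conversion found: skip the Hub entirely.
        return os.path.abspath(model_basename)

    # Imported lazily so the local-file path works without the package.
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=model_id, filename=model_basename)
```

The returned path could then be passed to whatever llama.cpp binding load_model uses (e.g. a LlamaCpp wrapper) in place of the downloaded file, so a locally converted model is picked up without changing the rest of the pipeline.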