-
This is incorrect, it should be: Can you add the version of openllm you are using?
-
Yes, this should work with the latest version. Please try again.
-
I have been trying a dozen different ways, asking Claude 2, GPT-4, code interpreters, you name it. For the life of me I cannot figure out how to get the llama-2 models either to download, or to load the ones I have pulled from Hugging Face.
I have already been granted access by Meta and Hugging Face; these models are not gated for me.
I have logged into my HF account through huggingface-cli and added the token and all that.
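As a first sanity check on the login step above, it can help to confirm that huggingface-cli actually stored a token on disk. This is a minimal stdlib-only sketch; the helper name `find_hf_token` is hypothetical, and it assumes the standard huggingface_hub layout (`$HF_HOME/token`, with `~/.cache/huggingface` as the default `HF_HOME`, plus the older `~/.huggingface/token` location):

```python
import os
from pathlib import Path
from typing import Optional

def find_hf_token(home: Optional[Path] = None) -> Optional[Path]:
    """Return the path of a non-empty Hugging Face token file, or None.

    Hypothetical helper: checks the usual locations huggingface-cli
    login writes to. If this returns None, gated model downloads will
    fail even though the login command appeared to succeed.
    """
    home = home or Path.home()
    hf_home = Path(os.environ.get("HF_HOME", str(home / ".cache" / "huggingface")))
    for candidate in (hf_home / "token", home / ".huggingface" / "token"):
        if candidate.is_file() and candidate.read_text().strip():
            return candidate
    return None

if __name__ == "__main__":
    print(find_hf_token() or "no token found - re-run: huggingface-cli login")
```

If this prints "no token found", the download failures are an auth problem rather than an OpenLLM problem.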
I have run
pip install "openllm[llama]"
I have tried
To which it stops for a minute as if it's going to do something and then spits out:
I have also tried downloading llama-2 7b-chat and 13b-chat directly from Hugging Face via Git, and I do have the files, but I cannot get OpenLLM to utilize/locate them. I also think there's an issue with the weights not being tied to the model, and I am not sure how to fix that, or even how to get back to a place where I can replicate that error.
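One common failure with Git-cloned Hugging Face repos is that the large weight files come down as tiny git-lfs pointer files, which would explain a model directory that exists but can't be loaded. This is a minimal stdlib-only diagnostic sketch; the helper name `looks_like_hf_model_dir` is hypothetical, and it assumes the standard HF model layout (`config.json`, `*.safetensors`/`*.bin` weights, tokenizer files):

```python
from pathlib import Path

def looks_like_hf_model_dir(path: str) -> list:
    """Return a list of problems with a local model directory.

    An empty list means the directory looks loadable. Hypothetical
    helper: checks for the files a standard Hugging Face model layout
    would contain, and flags suspiciously small weight files, which
    usually means git cloned the repo without git-lfs.
    """
    p = Path(path)
    problems = []
    if not (p / "config.json").is_file():
        problems.append("missing config.json")
    weights = [f for pat in ("*.safetensors", "*.bin") for f in p.glob(pat)]
    if not weights:
        problems.append("no weight files (*.safetensors / *.bin)")
    elif all(f.stat().st_size < 1_000_000 for f in weights):
        # git-lfs pointer files are a few hundred bytes; real llama-2
        # shards are gigabytes.
        problems.append("weight files are tiny - likely git-lfs pointers; run: git lfs pull")
    if not any((p / n).is_file() for n in ("tokenizer.json", "tokenizer.model")):
        problems.append("missing tokenizer files")
    return problems
```

Running this against the cloned 7b-chat directory should show quickly whether the files are actually complete before pointing OpenLLM at them.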
So I have been trying to download/start them with OpenLLM, to no avail.
Is anyone able to assist me with getting the 7b-chat and 13b-chat models working, or help point me in the right direction?