Replies: 4 comments 9 replies
-
Hi @santoshbs! Our documentation is incomplete for this; we'll improve it. The thing with Llama 2 is that Meta published it under a custom license you have to agree to, otherwise you aren't allowed to (and can't) download the model from Hugging Face. You can see the license at the top of the model's Hugging Face page. So in order to use Llama 2 via Hugging Face (and consequently in spacy-llm), you need to request access on the model page, accept Meta's license, and then authenticate locally with your Hugging Face account token.
These steps are not obvious to users unfamiliar with this process, so we'll update our docs accordingly. Thanks for bringing this up!
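For reference, the local authentication step, once the access request on the model page has been approved, usually looks like this (`huggingface-cli` ships with the `huggingface_hub` package; the access token is created under your Hugging Face account settings):

```bash
# Install the Hub client if it isn't present yet
pip install huggingface_hub

# Paste your access token when prompted; downloads of the gated
# Llama 2 weights are then authorized with that token
huggingface-cli login
```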
-
Hi @rmitsch! Also, does this configuration look OK?
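For comparison, a minimal spacy-llm config for Llama 2 typically looks something like the sketch below (assuming the registered `spacy.NER.v2` task and `spacy.Llama2.v1` model; the labels are placeholders, and `name` must be a Llama 2 variant you have been granted access to):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v2"
labels = ["PERSON", "ORG", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Llama2.v1"
name = "Llama-2-7b-hf"
```

A config like this can then be loaded with `spacy_llm.util.assemble("config.cfg")`.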
-
Can the Hugging Face models (LLMs) used in spacy-llm be quantized? I mean, instead of loading the full raw models from Hugging Face, which causes computation and memory issues, could the quantization be done automatically when the models are loaded?
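As far as I know there is no single "quantize automatically" switch, but the Hugging Face models in spacy-llm accept a `config_init` section whose entries are passed through to `transformers`' `from_pretrained`. Assuming that pass-through applies to your spacy-llm version and that `bitsandbytes` is installed, 8-bit loading might be requested like this (a sketch, not a verified recipe):

```ini
[components.llm.model]
@llm_models = "spacy.Llama2.v1"
name = "Llama-2-7b-hf"

[components.llm.model.config_init]
# Forwarded to transformers' from_pretrained; 8-bit loading needs bitsandbytes
load_in_8bit = true
device_map = "auto"
```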
-
Thanks much, @rmitsch!
-
I am trying to use the basic example provided with the Llama 2 model, but I get an error. I'm not sure what I'm doing wrong; I'd appreciate your help.
Here's the config:
Here's the code:
The error message: