How to load LLM with multiple GPUs #13257
I want to use Llama-2-13b-hf to do an NER task, like the code examples at https://spacy.io/usage/large-language-models, but I only have several RTX 3090 cards. How do I configure this spaCy library to perform inference on multiple GPUs?

Replies: 1 comment

Duplicate of explosion/spacy-llm#424 - we'll go ahead and close this one.
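For readers landing here before following the link: the maintainers' answer is in explosion/spacy-llm#424. As a rough sketch of the usual approach, spacy-llm's HuggingFace-backed models document a `config_init` setting that is forwarded to `transformers`' `from_pretrained()`; with the `accelerate` package installed, `device_map="auto"` shards the model weights across all visible GPUs. The registry names and settings below follow the spacy-llm docs, but treat this as an unconfirmed sketch rather than a maintainer-endorsed recipe:

```python
import spacy

# Sketch, not a confirmed recipe: spacy-llm's HuggingFace models accept a
# `config_init` dict passed through to transformers' from_pretrained().
# With `accelerate` installed, device_map="auto" spreads the 13B model's
# weights across all visible GPUs (e.g. several RTX 3090s).
nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v3",
            "labels": "PERSON,ORG,LOC",
        },
        "model": {
            "@llm_models": "spacy.Llama2.v1",
            "name": "Llama-2-13b-hf",
            "config_init": {"device_map": "auto"},  # multi-GPU sharding
        },
    },
)

doc = nlp("Sundar Pichai announced the results at Google in Mountain View.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Note that `device_map="auto"` handles only inference-time weight sharding; it does not parallelize throughput across GPUs. See the linked issue for the details that apply to your spacy-llm version.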