GPU Support Troubleshooting FAQ #11436
polm started this conversation in Help: Best practices
GPU libraries are developed at a rapid pace, and with a number of versions available it can be hard to get a functioning environment. This FAQ post covers common questions and issues related to getting GPU libraries working. Note that in general spaCy doesn't have special GPU requirements, and most issues are with the underlying libraries - if this FAQ doesn't help you, feel free to open a thread, but in some cases it may be better to go directly to a forum for the library you're having trouble with.
Note that only CUDA support is currently stable; support for GPUs on Apple M1 machines is experimental.

If you haven't already, be sure to try the install widget first.
🆕 Experimental support for M1 GPUs
`spacy-transformers` 1.1.8 adds experimental support for M1 GPUs through Metal Performance Shaders. However, this support comes with some limitations.

Keeping the above in mind, you can try experimental support by installing `spacy-transformers` 1.1.8 or higher and PyTorch 1.13+. After this, spaCy should pick up the GPU using `require_gpu()` (see below). For several models, including English, you also need to set the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable. This runs operations that are not supported by the MPS backend on the CPU instead.
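You can export the variable in your shell before starting your script, or set it from Python before `torch` is first imported anywhere in the process; for example:

```python
import os

# Must be set before torch is imported; "1" enables the CPU fallback
# for operations the MPS backend doesn't support.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
```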
For updates on the status of this support, as well as a compatibility table for tested pipelines, see this tracking issue.
Checking GPU Support in spaCy
First, check if you can use a GPU in spaCy. You can check like this:
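A minimal version of this check might look like the following; the `try`/`except` wrapper is our addition, so the snippet reports the failure reason instead of crashing when no GPU (or spaCy itself) is available:

```python
def check_spacy_gpu():
    """Return True if spaCy can allocate the GPU, or the failure reason otherwise."""
    try:
        import spacy
        # require_gpu() raises an error when no GPU allocator is available.
        return spacy.require_gpu()
    except Exception as err:
        return f"GPU not usable: {err!r}"

print(check_spacy_gpu())
```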
If `spacy.require_gpu()` returns True, everything is working. If not (it raises an error when no GPU is available), read on to troubleshoot the issue.

Check CUDA
First you need to have CUDA installed. CUDA drivers should be available through your package manager or directly from Nvidia. Below is one way to check your installed CUDA version:
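For example, here is one such check wrapped in Python. `nvcc` is part of the CUDA toolkit and prints a version banner; the `shutil.which` guard is our addition so the snippet degrades gracefully when the toolkit isn't on your `PATH`:

```python
import shutil
import subprocess

def cuda_version_banner():
    """Return nvcc's version banner, which includes the CUDA release number."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return "nvcc not found - is the CUDA toolkit installed and on PATH?"
    return subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout

print(cuda_version_banner())
```

`nvidia-smi` is another common way to inspect your driver and GPU state.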
The commands above may not work on every system, even if CUDA is installed correctly. If they fail, verify that your installation is functioning by other means, and once you've confirmed that, you can move on to the next steps.
Check CuPy
CuPy is a library with a NumPy-like API for GPU computation, and is required for using spaCy on the GPU in any circumstance. You can check if CuPy is working with the following Python script:
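A small check along these lines is sketched below; it just allocates an array on the GPU and computes with it, guarding only for CuPy not being installed at all. Any exception from the computation itself means GPU support is broken:

```python
import importlib.util

def check_cupy():
    """Do a small GPU computation; any CUDA/CuPy misconfiguration raises here."""
    if importlib.util.find_spec("cupy") is None:
        return "CuPy is not installed"
    import cupy
    # Allocate on the GPU and reduce; this exercises the CUDA driver and toolkit.
    return float((cupy.zeros((100, 100)) + 1).sum())

print(check_cupy())
```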
If you don't get an exception, then the above code is working, and you should have fully functional GPU support in spaCy.
To install the right version of CuPy, check the spaCy install widget. To install CuPy directly, you generally install `cupy-cudaXXX`, where `XXX` corresponds to your CUDA version: `102` for 10.2, or `114` for 11.4. Additionally, for CUDA 11.2 or higher you should be able to install `cupy-cuda11x`. For more details, see the CuPy install guide.

One problem that can happen with CuPy is that it's possible to end up with multiple versions installed at the same time, which will cause errors. To check whether this is affecting you, run `pip freeze` (or equivalent) and confirm there aren't multiple CuPy versions installed.

Check PyTorch
PyTorch is a machine learning library that's required to use `spacy-transformers`. PyTorch has different builds depending on the version of CUDA you're using. While you can install PyTorch directly via `pip`, note that by default that installs the CUDA 10.2 version. To install PyTorch with GPU support, go to their homepage and follow the instructions.

Note that only a few versions of CUDA are listed on the page. If the version you have isn't listed, pick the next lower version. For example, if `nvcc --version` shows 11.5, then you can use PyTorch built for 11.3.

You can verify that Torch is installed correctly, and get more information for debugging, with this Python code:
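A sketch of such a debugging snippet is below; the field names in the returned dict are ours, but the underlying attributes are standard PyTorch:

```python
import importlib.util

def torch_debug_info():
    """Collect the version details most useful when debugging GPU problems."""
    if importlib.util.find_spec("torch") is None:
        return "PyTorch is not installed"
    import torch
    return {
        "torch_version": torch.__version__,
        "built_for_cuda": torch.version.cuda,  # None for CPU-only builds
        "cuda_available": torch.cuda.is_available(),
    }

print(torch_debug_info())
```

If `built_for_cuda` is None or doesn't match your system CUDA version, or `cuda_available` is False, you most likely have a CPU-only or mismatched build installed.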
If any of the above is off, try creating a new blank environment (virtualenv, conda env, or otherwise) and reinstalling PyTorch while disabling any caching.
Note that sometimes you can end up with multiple versions of PyTorch in the same environment, which often causes hard-to-understand problems. If you think you've uninstalled PyTorch, try importing it anyway to check that there isn't a stray version still present.
If you're still having trouble, it's probably best to ask for help at the PyTorch forums directly.
If The Above Doesn't Work…
If you're still having trouble, feel free to open a thread. Please describe what parts of the above did and didn't work for you, as well as details about your hardware and system CUDA drivers. Note that because we don't develop these underlying GPU libraries, our ability to help may be limited, but we can at least check things.