PyTorch tensor allocation defaults to GPU after prefer_gpu invocation #10558
-
Hi, I have the following snippet of code:

```python
import torch
import spacy

print(torch.__version__)
print(spacy.__version__)
print(torch.cuda.is_available())
print(torch.tensor(1).device)
spacy.prefer_gpu()
print(torch.tensor(1).device)
```

I get the following output:

```
1.8.1+cu101
3.2.3
True
cpu
cuda:0
```

Why does calling `spacy.prefer_gpu()` change the device that PyTorch tensors are allocated on by default?

This is how I installed spaCy, PyTorch and CuPy:

```
pip3 install torch==1.8.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip3 install --upgrade pip setuptools wheel
conda install -y -c conda-forge spacy
conda install -c conda-forge cupy
```

Thanks!
-
Changing the default device is kind of the point of that function. If you look at the implementation in Thinc you'll see that it calls `torch.set_default_tensor_type("torch.cuda.FloatTensor")`, which is what makes new PyTorch tensors allocate on the GPU.
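For example (a minimal sketch, assuming a CUDA-capable GPU is available), you can see that side effect and locally undo it on the torch side:

```python
import torch
import spacy

print(torch.tensor(1).device)  # cpu

spacy.prefer_gpu()             # under the hood, Thinc switches torch's default
print(torch.tensor(1).device)  # cuda:0

# Restore PyTorch's usual CPU default. This only undoes the torch side effect;
# spaCy/Thinc keep using the GPU ops they already selected.
torch.set_default_tensor_type("torch.FloatTensor")
print(torch.tensor(1).device)  # cpu
```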
-
Hi @polm, thanks for the reply. This unfortunately means other PyTorch code has difficulty co-existing with spaCy. For example, I have a computation-heavy (spaCy-based) model that I would rather run on the GPU, the output of which is fed to another (non-spaCy) PyTorch model that I run on the CPU. I need to do this because I have limited GPU memory. Now I am forced to explicitly move the second model and every one of its tensors to the CPU. Is there any way out of this? Thanks!
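To illustrate, this is the kind of explicit placement I would have to do everywhere (a toy sketch with made-up shapes):

```python
import torch
import spacy

spacy.prefer_gpu()

# Modules and tensors can still be pinned to the CPU one by one,
# but every single allocation has to say so explicitly:
cpu_model = torch.nn.Linear(4, 2).to("cpu")   # hypothetical toy model
x = torch.ones(1, 4, device="cpu")            # device= overrides the CUDA default
print(cpu_model(x).device)                    # cpu
```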
-
You can use `require_gpu()`/`require_cpu()` to switch back and forth, or the `use_ops` context manager (`with use_ops("cpu"):` / `with use_ops("cupy"):`) for the spaCy+GPU / non-spaCy parts of your code.
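A rough sketch of how those could fit together, assuming a GPU setup and some installed pipeline such as `en_core_web_sm` (the model name and the CPU-side work are placeholders):

```python
import spacy
from thinc.api import use_ops

spacy.require_gpu()                 # spaCy/Thinc (and torch's default) on the GPU
nlp = spacy.load("en_core_web_sm")  # placeholder; any installed pipeline works
doc = nlp("Run the heavy spaCy model on the GPU.")

spacy.require_cpu()                 # switch Thinc back to CPU ops before the
                                    # non-spaCy model (recent Thinc versions also
                                    # reset torch's default tensor type here)
# ... run the non-spaCy PyTorch model on the CPU here ...

# Or scope the switch instead of toggling globally:
with use_ops("cpu"):
    pass  # CPU-only work
with use_ops("cupy"):
    pass  # spaCy/GPU work
```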