Why do I have to move every piece of data to CUDA? #622
apollo000104 asked this question in Q&A
Replies: 1 comment
Hi @apollo000104, great question! The reason is that all computations have to happen on the same device. For example, if your model is on the CPU (the default) but your data is on the GPU, PyTorch can't compute across the two: data and model must be on the same device (CPU or GPU). By default, PyTorch creates everything on the CPU, so in order to use a GPU we move the model and the data there with the .to(device) method.

However, since PyTorch 2.0 you can set a default device. This helps make sure everything is created on the same device (e.g. "cuda"):

```python
import torch

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Set the device globally
torch.set_default_device(device)

# All tensors created will be on the global device by default
layer = torch.nn.Linear(20, 30)
print(f"Layer weights are on device: {layer.weight.device}")
print(f"Layer creating data on device: {layer(torch.randn(128, 20)).device}")
```

Output (with a GPU available):

```
Layer weights are on device: cuda:0
Layer creating data on device: cuda:0
```

See the documentation for torch.set_default_device for more.
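For comparison, here is a minimal sketch of the manual .to(device) pattern mentioned above; the model and data names below are made up for illustration:

```python
import torch
from torch import nn

# Choose the device once and reuse it everywhere
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical toy model and data, just for illustration
model = nn.Linear(in_features=2, out_features=1)
X = torch.randn(8, 2)

# Move BOTH the model and the data to the target device.
# If only one of them is moved, PyTorch raises a RuntimeError
# along the lines of "Expected all tensors to be on the same device".
model = model.to(device)
X = X.to(device)

y_pred = model(X)      # works: model and data live on the same device
print(y_pred.device)   # cuda:0 if a GPU is available, otherwise cpu
```

Once the model has been moved, every tensor that interacts with it (inputs, labels passed to the loss function, etc.) needs to be on that same device, which is why the tutorial calls .to(device) on each of them.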
Hello, everybody.
I am studying PyTorch with https://www.learnpytorch.io/02_pytorch_classification/.
Here I have run into a problem: https://www.learnpytorch.io/02_pytorch_classification/#31-going-from-raw-model-outputs-to-predicted-labels-logits-prediction-probabilities-prediction-labels. At this point I have to move each piece of data to the device to avoid this error:
`ValueError: Target size (torch.Size([800, 1, 1])) must be the same as input size (torch.Size([800, 1]))`
If I delete any of the .to(device) calls, I get the error above.
I have avoided it by writing the code like this:
""
model_0.train()
y_pred=model_0(X_train.to(device))
X_test=X_test.unsqueeze(dim=1)
loss=loss_fn(y_pred.to(device), X_test.to(device))
optimizer.zero_grad()
loss.backward()
optimizer.step()
""""
But I don't know why I can't remove any of the .to(device) calls.
I would appreciate it if somebody could explain to me, step by step, why I can't do that.
Thanks for reading this far, and thanks again for trying to help me.