00_pytorch_fundamentals Exercises Question 5&6 #550
Answered by AS1100K
Perian-Yan asked this question in Q&A
-
Hi, I got a different answer to this question. My code is:

```python
# set random seed for GPU
torch.cuda.manual_seed(1234)
device = "cuda" if torch.cuda.is_available() else "cpu"
T1_gpu = torch.rand((2, 3), device=device)
T2_gpu = torch.rand((2, 3), device=device)
print(T1_gpu)
print(T2_gpu)
```

However, the solution is different. I don't know why we need to set the seed twice.
Answered by AS1100K on Jul 24, 2023
Replies: 2 comments 2 replies
-
When using:

```python
device = "cuda" if torch.cuda.is_available() else "cpu"

torch.cuda.manual_seed(1234)
T1_gpu = torch.rand((2, 3), device=device)

torch.cuda.manual_seed(1234)
T2_gpu = torch.rand((2, 3), device=device)

print(T1_gpu)
print(T2_gpu)
```

Output: both tensors print the same values, because the GPU RNG is re-seeded before each `torch.rand` call.
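The same effect can be checked without a GPU (a minimal sketch; `torch.manual_seed` is used here instead of `torch.cuda.manual_seed` so it runs on CPU, but the re-seeding pattern is the same):

```python
import torch

# Re-seeding immediately before each call restarts the RNG stream,
# so both tensors are drawn from the same state and come out equal.
torch.manual_seed(1234)
t1 = torch.rand(2, 3)
torch.manual_seed(1234)
t2 = torch.rand(2, 3)
print(torch.equal(t1, t2))  # True

# Seeding only once, the second call continues the RNG stream,
# so the two tensors differ.
torch.manual_seed(1234)
t3 = torch.rand(2, 3)
t4 = torch.rand(2, 3)
print(torch.equal(t3, t4))  # False
```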
-
Thank you! It's clear now 👍
There is no difference between `torch.manual_seed` and `torch.cuda.manual_seed` when running on a single GPU or without a GPU. So, `torch.cuda.manual_seed` sets the manual seed for the current GPU, and `torch.manual_seed` sets the manual seed across all CPU(s) and GPU(s).

NOTE: When using `torch.manual_seed` or `torch.cuda.manual_seed`, reproducibility also depends on the device on which the tensor was created. So, even if we use any of the reproducibility functions, the resulting tensor will differ between CPU and GPU.

Source: PyTorch Reproducibility Documentation, torch.cud…
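The device-dependence above can be sketched as follows (an illustrative snippet, assuming a CUDA device may or may not be available; the CPU and CUDA generators use different algorithms, so the same seed will generally not produce matching values across devices):

```python
import torch

# Same seed, CPU RNG stream.
torch.manual_seed(1234)
t_cpu = torch.rand(2, 3)

if torch.cuda.is_available():
    # Same seed again, but the tensor is drawn from the GPU RNG stream,
    # which uses a different algorithm than the CPU one.
    torch.manual_seed(1234)
    t_gpu = torch.rand(2, 3, device="cuda")
    # The values will generally not match, despite the identical seed.
    print(torch.equal(t_cpu, t_gpu.cpu()))
```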