I'm fortunate enough to have a machine with an NVIDIA RTX 3090 GPU. However, the GPU-enabled binary builds of PyTorch 1.6.0 available from the PyTorch project won't run on the 3090, and probably won't run on any RTX 3000-series GPU - the prebuilt wheels don't appear to include CUDA kernels for the Ampere architecture (compute capability 8.6).
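For what it's worth, here is a small sanity-check sketch (not from this repo) that I find useful for confirming which CUDA toolkit a given PyTorch wheel was built against and what compute capability the local GPU reports; on a 3090 the capability comes back as (8, 6):

```python
import torch

# Report the CUDA toolkit the installed wheel was built with and the
# compute capability of the local GPU. An RTX 3090 reports (8, 6),
# which the CUDA 10.x wheels shipped with PyTorch 1.6.0 do not cover.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
```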
PyTorch 1.7.0 does run on my 3090, so I've built a virtual environment with that and torchaudio 0.7.0. I started training on the "LJ" dataset to see if it worked, and it appeared to be functioning; it used about 11.5 GB of GPU RAM and about 45% GPU utilization. Do you anticipate any other problems with PyTorch 1.7.0, or should I go ahead with training on my own dataset?
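In case it helps anyone else on a 3000-series card, this is roughly the smoke test I ran before kicking off a full training run. It is a hypothetical stand-in, not the repo's training script: the model and batch shapes are made up, and it only confirms that the 1.7.0 + 0.7.0 stack actually executes CUDA kernels on the 3090:

```python
import torch
import torchaudio

# Minimal smoke test: run a small forward/backward pass on the GPU to
# confirm the PyTorch 1.7.0 / torchaudio 0.7.0 stack works on Ampere
# before committing to a long training run.
print("torch", torch.__version__, "| torchaudio", torchaudio.__version__)

device = torch.device("cuda")
model = torch.nn.Sequential(
    torch.nn.Linear(80, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 80),
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 80, device=device)   # stand-in for a batch of features
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
torch.cuda.synchronize()
print("forward/backward OK, loss =", loss.item())
```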