MNIST demo is not utilizing the GPU #14304
Unanswered
delip asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
I am running the demo code on a host with a V100 (as seen via nvidia-smi), but the GPU utilization is zero. I confirmed this by instrumenting the script from line 99 onwards to see whether the model was loaded on the GPU and whether a CUDA device was available.
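The instrumentation was along these lines (a sketch; `model` is a hypothetical stand-in for whatever network the demo script actually builds around line 99):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the demo's network; the real script
# constructs its own model around line 99.
model = nn.Linear(28 * 28, 10)

print(torch.cuda.is_available())        # True: a CUDA device is visible
print(torch.cuda.device_count())        # 1: the V100 is detected
print(next(model.parameters()).device)  # "cpu": the model was never moved
```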
I noticed that the GPU was visible to the process, and yet the model was running on the CPU. I also tried to "force" it to use all available GPUs by modifying the final line to the following:
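The exact modification did not survive extraction; as an illustration, forcing Lite onto every visible GPU typically means passing the accelerator flags to its constructor (`MyLite` and its `run()` are hypothetical stand-ins for the demo's LightningLite subclass):

```python
from pytorch_lightning.lite import LightningLite

# Hypothetical stand-in for the demo's LightningLite subclass.
class MyLite(LightningLite):
    def run(self):
        ...  # training loop as defined in the demo

# accelerator="gpu" with devices=-1 requests all visible GPUs.
MyLite(accelerator="gpu", devices=-1).run()
```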
And still, the model is loaded and optimized on the CPU. I have two questions: 1) Why is the model not running on the GPU here? 2) How do I force the Lite Trainer to use the GPU?
Other information:
nvidia-smi:

Replies: 1 comment
@delip I've just run the example in my environment but don't see any issue with it, so I guess there may be an issue with your environment setup. Would you mind running the following script and sharing the output here? Creating a fresh environment and starting over may simply resolve your issue.
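The script originally linked in this reply is not preserved; PyTorch's built-in environment collector reports comparable details and can stand in for it:

```python
# PyTorch's standard environment report (also runnable as
# `python -m torch.utils.collect_env`): prints PyTorch, CUDA, cuDNN,
# driver, and GPU visibility information.
from torch.utils.collect_env import main

if __name__ == "__main__":
    main()
```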
0 replies