GPU usage does not remain high for lightweight models when CIFAR-10 is loaded as a custom dataset #8274
I am experimenting with the following repository: Keiku/PyTorch-Lightning-CIFAR10 ("Not too complicated" training code for CIFAR-10 by PyTorch Lightning). I have implemented two ways of loading the data: one loads CIFAR-10 from torchvision, and the other loads CIFAR-10 as a custom dataset. I have also implemented two kinds of models: lightweight models (e.g. a scratch resnet18, timm MobileNet V3, etc.) and relatively heavy models (e.g. a scratch resnet50, timm resnet152). After some experiments, I found that GPU usage does not remain high for the lightweight models when CIFAR-10 is loaded as a custom dataset.
In this situation, is there a problem with the implementation of my custom dataset? Also, please let me know if there is a way to keep GPU usage high even for lightweight models. I am experimenting on an EC2 g4dn.xlarge instance.
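For reference, here is a simplified sketch of the kind of custom-dataset code I mean. It is not the exact code from the repository; the directory layout, normalization statistics, and DataLoader settings are placeholders. The DataLoader arguments (`num_workers`, `pin_memory`, `persistent_workers`) are the knobs that usually determine whether a lightweight model can keep the GPU busy.

```python
# Minimal sketch of a folder-based custom CIFAR-10 dataset (assumed layout:
# data/cifar10/train/<class_name>/*.png). Not the repository's actual code.
from pathlib import Path

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class CIFAR10FromFolder(Dataset):
    """Hypothetical custom dataset that reads CIFAR-10 images from class subfolders."""

    def __init__(self, root: str, transform=None):
        self.paths = sorted(Path(root).glob("*/*.png"))
        self.classes = sorted({p.parent.name for p in self.paths})
        self.class_to_idx = {c: i for i, c in enumerate(self.classes)}
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        # Decoding small PNGs one by one on the CPU is often the bottleneck
        # that starves a lightweight model on the GPU.
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        label = self.class_to_idx[path.parent.name]
        return image, label


# Commonly used CIFAR-10 normalization statistics.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

dataset = CIFAR10FromFolder("data/cifar10/train", transform=transform)  # path is an assumption
loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=4,            # more workers keep the GPU fed; 4 matches g4dn.xlarge's 4 vCPUs
    pin_memory=True,          # faster host-to-device copies
    persistent_workers=True,  # avoid re-spawning workers every epoch
)
```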
Replies: 1 comment 1 reply
I got a reply from ptrblck: https://discuss.pytorch.org/t/gpu-usage-does-not-remain-high-for-lightweight-models-when-loaded-cifar-10-as-a-custom-dataset/125738