Google Colab: "Is this normal?" and GPU selection #1595
erica-trump asked this question in Q&A (Unanswered)
Replies: 1 comment 4 replies
-
Without access to the actual Google Colab, solving your issue will be difficult.
-
Is this normal? On Colab, I am using a GPU with 16 GB of RAM, but I notice that GPU memory usage never goes above 2.7 GB. The job runs at 2.7 GB for many hours.
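For context, here is a minimal sketch of how the GPU and its memory use can be checked from inside the notebook (plain torch.cuda calls, nothing pyannote-specific), independent of the numbers Colab's resource panel reports:

```python
import torch

# Is a CUDA device visible to this runtime at all?
print(torch.cuda.is_available())       # expected: True on a GPU runtime
print(torch.cuda.get_device_name(0))   # e.g. a 16 GB T4

# Memory PyTorch is actually holding on that GPU, in GB
print(torch.cuda.memory_allocated(0) / 1024**3)
print(torch.cuda.memory_reserved(0) / 1024**3)
```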
Some other info that might help:
Colab link: https://colab.research.google.com/drive/11tteAb_BkD8CIucQjogyOvgd7DiTazmd
Background:
I've been struggling with the speaker-diarization-3.1 pipeline, which I'm running in a Google Colab notebook. The job runs for 4+ hours and then times out. I suspect something is wrong with the setup of my environment.
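For reference, the pipeline call follows the standard pyannote pattern, roughly as sketched below; the audio file name and the Hugging Face token are placeholders, and the pipeline.to(...) line is what moves inference onto the GPU:

```python
import torch
from pyannote.audio import Pipeline

# Load the gated pipeline ("HF_TOKEN" is a placeholder for a real token)
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN",
)

# Without this call the pipeline stays on the CPU, which would explain
# low GPU memory use and multi-hour runtimes
pipeline.to(torch.device("cuda"))

# "audio.wav" is a placeholder file name
diarization = pipeline("audio.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```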
Before I ran into the time-out issue, I struggled to find a PyTorch configuration that worked without dependency conflicts with pyannote. I had also previously used a "bad" configuration in which the jobs would not run on the GPU at all. Here is what I have now:
torch: 2.1.0
torchvision: 0.16.2+cu118
tensorflow: 2.15.0.post1
protobuf: 4.23.4
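A minimal sketch of how these pins could be installed at the top of the Colab notebook; the pyannote.audio version is an assumption, and torchvision is omitted since pyannote.audio does not depend on it:

```python
# Run in a Colab cell, then restart the runtime so the pinned versions load.
# The pyannote.audio pin is an assumption; torch/protobuf mirror the list above.
!pip install -q torch==2.1.0 pyannote.audio==3.1.1 protobuf==4.23.4

# Quick sanity check that a CUDA-enabled torch build is in use
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```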