Pruning callback causes GPU memory leak when used iteratively #8363
Replies: 3 comments 8 replies
-
I found this issue: https://discuss.pytorch.org/t/how-to-avoid-memory-leak-when-training-multiple-models-sequentially/100315, which I think confirms that the memory leak is related to Lightning. The question is: how can we deal with it?
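The usual advice in threads like the one linked above is to drop every reference to the finished model and trainer at the end of each iteration, force a garbage-collection pass, and then release PyTorch's cached CUDA blocks. A minimal sketch of that cleanup pattern, assuming a sequential-training loop (the `torch` import is guarded so the snippet also runs on a machine without PyTorch; `build_model`, `build_trainer`, and `configs` are hypothetical placeholders):

```python
import gc

try:
    import torch  # only needed for the CUDA cache call
except ImportError:
    torch = None


def free_cuda():
    """Force a GC pass and release PyTorch's cached CUDA memory, if available."""
    gc.collect()  # reclaim unreachable Python objects, including reference cycles
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached CUDA blocks to the driver


# Hypothetical sequential-training loop:
# for cfg in configs:
#     model = build_model(cfg)
#     trainer = build_trainer(cfg)
#     trainer.fit(model)
#     del model, trainer  # drop the loop's own references first
#     free_cuda()
```

Note that `del` alone is not enough if anything else (a callback, a logger, a cached result) still holds a reference to the model or trainer; the objects stay alive and their GPU tensors with them.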
-
This is the repo that contains my code: https://github.com/MohammedAljahdali/shrinkbench/tree/NAS. Use the following to reproduce the memory leak
-
Can you provide more details about the alleged memory leaks? You could try and see if putting
-
Hi, I have a script that does the following logic:
There is much more going on, but to keep it simple, this is the flow I have. My question is: is there another, preferable way to do what I just described? Also, my code has a memory leak that occurs after each loop iteration; could this be related to the trainer object not being deleted properly?
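One way to check whether the trainer or model from a previous iteration is actually being freed is to hold a weak reference to it and verify that the reference dies after `del` plus `gc.collect()`. If the weakref stays alive, something (a callback, a logger, a stored exception traceback) is still holding the object, which would explain per-iteration growth. A stdlib-only diagnostic sketch, where the `Model` class is a stand-in for the real network:

```python
import gc
import weakref


class Model:
    """Stand-in for a real model; just holds a large buffer."""
    def __init__(self):
        self.weights = [0.0] * 1_000_000


leaked = []
for i in range(3):
    model = Model()
    ref = weakref.ref(model)   # track the object without keeping it alive
    # ... training with `model` would happen here ...
    del model                  # drop the only strong reference
    gc.collect()               # collect reference cycles, if any
    if ref() is not None:      # still reachable => something leaked it
        leaked.append(i)

print(leaked)  # an empty list means every model was collected
```

The same pattern applied to the real trainer object would show directly whether the leak is a lingering Python reference (weakref survives) or purely CUDA-side caching (weakref dies but GPU memory stays allocated).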