Free VRAM after training. #3985
Answered by ExtReMLapin
ExtReMLapin asked this question in Q&A
Hello, I simply train my model using:

```python
trainer = trainer_class(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
del trainer
```

Even after the Python interpreter has left the function, VRAM usage doesn't go down. Is there a way to free VRAM? Or is it some kind of internal PyTorch memory management, meaning it "reserves" VRAM and internally dispatches memory blocks to whatever requests it from Python? Thanks.
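The "reserve" intuition matches how PyTorch's caching allocator behaves: `torch.cuda.memory_allocated()` reports bytes held by live tensors, while `torch.cuda.memory_reserved()` reports what the allocator keeps cached from the driver, which is what `nvidia-smi` shows. A minimal probe of the two counters, as a sketch (`cuda_memory_report` is an illustrative helper, not detectron2 or PyTorch API; it degrades gracefully when torch or a GPU is absent):

```python
import importlib.util

def cuda_memory_report() -> dict:
    """Return PyTorch's CUDA memory counters, or {} when torch/GPU is unavailable."""
    if importlib.util.find_spec("torch") is None:  # torch not installed
        return {}
    import torch
    if not torch.cuda.is_available():  # no CUDA device present
        return {}
    return {
        "allocated_bytes": torch.cuda.memory_allocated(),  # held by live tensors
        "reserved_bytes": torch.cuda.memory_reserved(),    # cached from the driver
    }

print(cuda_memory_report())
```

On a machine where training just finished, `reserved_bytes` typically stays high even after `del trainer`, which is exactly the symptom described above.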
Answered by ExtReMLapin on Mar 7, 2022
Replies: 1 comment
0 replies
Answer selected by ExtReMLapin
Solution:

```python
torch.cuda.empty_cache()
```
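Note that `empty_cache()` can only return blocks that no longer belong to any live tensor, so it should run after the trainer has been deleted and the garbage collector has broken any remaining reference cycles. A hedged sketch putting the pieces together (`free_vram` is an illustrative helper name, not part of detectron2 or PyTorch):

```python
import gc

def free_vram() -> bool:
    """Best-effort release of GPU memory cached by PyTorch.

    `del trainer` only drops the Python reference; the caching
    allocator keeps those blocks reserved until empty_cache() runs.
    Returns True when the CUDA cache was actually released.
    """
    gc.collect()  # break reference cycles that may still pin tensors
    try:
        import torch
    except ImportError:  # sketch degrades gracefully without torch
        return False
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand cached blocks back to the driver
        return True
    return False
```

Calling this after `del trainer` should make the drop visible in `nvidia-smi`; it does not shrink memory used by tensors that are still referenced.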