PyTorch uses a caching memory allocator, so it doesn't immediately release freed memory back to the operating system. Instead, it reuses previously freed memory when a new tensor needs to be allocated. In practice, this means that VRAM usage will appear to only grow over the course of a given PyTorch session. You can call `torch.cuda.empty_cache()` to release the cached memory, but this generally isn't advisable unless memory fragmentation becomes an issue.
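
Here's a minimal sketch of this behavior (the tensor shape is just illustrative, and it assumes a CUDA-capable GPU): `torch.cuda.memory_allocated()` reports memory held by live tensors, while `torch.cuda.memory_reserved()` reports what the allocator has claimed from the driver, which is what `nvidia-smi` roughly reflects.

```python
import torch

device = torch.device("cuda")

# Allocate ~1 GiB (256M float32 elements), then check both counters.
x = torch.empty(1024, 1024, 256, device=device)
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")

del x

# The memory is freed from PyTorch's perspective (allocated drops),
# but the allocator keeps the blocks cached for reuse (reserved stays
# high), so the driver still sees the VRAM as in use.
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")

# Releasing the cache returns unused blocks to the driver; rarely
# needed unless fragmentation becomes a problem.
torch.cuda.empty_cache()
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")
```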

This discussion was converted from issue #13114 on November 08, 2023 10:22.