Is it normal that the GPU memory consumption does not scale linearly with respect to the number of atoms in the system? #3098
Unanswered · DingChangjie asked this question in Q&A
Replies: 1 comment

TensorFlow allocates GPU memory in advance, before it is actually needed, for efficiency. The relevant code is here; it shows that memory is allocated in powers of two unless that much memory is unavailable: https://github.com/google/tsl/blob/c71f380658d64f44da1216565f9c092a961da3aa/tsl/framework/bfc_allocator.cc#L125-L132

This rounding is why the reported memory usage grows in steps, rather than linearly with the number of atoms.
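To make the rounding behaviour concrete, here is a minimal sketch in Python. It is not the actual BFC allocator code, and the request sizes are made-up illustrative numbers; it only shows how rounding each request up to the next power of two turns a smoothly growing demand into stepwise reservations of the kind reported by nvidia-smi.

```python
# Minimal sketch of power-of-two rounding; this is NOT the real BFC
# allocator, just an illustration of why reserved memory grows in steps.

def next_power_of_two(n_bytes: int) -> int:
    """Round a requested size up to the next power of two."""
    p = 1
    while p < n_bytes:
        p *= 2
    return p

# Hypothetical per-card requests (GiB) for systems of increasing size;
# the numbers are illustrative, not measurements.
for requested_gib in (3.1, 4.0, 5.5, 7.9, 9.0):
    reserved_gib = next_power_of_two(int(requested_gib * 2**30)) / 2**30
    print(f"requested ~{requested_gib:.1f} GiB -> reserved {reserved_gib:.1f} GiB")
```

If the up-front reservation itself is a concern, TensorFlow can be told to grow its allocation on demand instead, for example via the `TF_FORCE_GPU_ALLOW_GROWTH=true` environment variable or `tf.config.experimental.set_memory_growth`; nvidia-smi will then track actual use more closely.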
I tested the GPU memory consumption of systems with different numbers of atoms, and found that the relationship between GPU memory consumption and the number of atoms in the system is stepwise rather than linear (see the attached figure). For example, the system with 48,000 atoms consumed almost the same amount of memory as the system with 72,000 atoms (about 24 GB). Moreover, the memory consumption was always a multiple of 8 GB (e.g., 8×2 = 16 GB, 8×3 = 24 GB, 8×5 = 40 GB in the figure). I used four 12 GB RTX 3080 Ti cards for these tests. How should I understand this observation? Thanks.
