I'm trying to analyze the total GPU memory required for my current model and dataset:

```python
for epoch in range(num_epochs):
    losses = []
    train_loss = train(epoch)
    test_loss = test(epoch)
    print(f'batch size: {batch_size} Hidden_Ch {hidden_channel} Lat {x_lat}')
    print('Epoch: {:02d}, Train MSE: {:.4f}, Test MSE: {:.4f}'.format(epoch, train_loss, test_loss))
    a = torch.cuda.memory_allocated(0)  # bytes currently allocated on GPU 0
    print(a)
```

The only message that I can see is about epoch 1, saying "After the …". I think the number …
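For reference, `torch.cuda.memory_allocated` only reports the tensors currently held on the device, not the peak reached during the forward/backward pass. A minimal sketch of tracking the per-epoch peak instead, reusing the same `train`/`test` functions from the snippet above, could look like this:

```python
import torch

for epoch in range(num_epochs):
    torch.cuda.reset_peak_memory_stats(0)      # clear the running peak counter for GPU 0
    train_loss = train(epoch)
    test_loss = test(epoch)
    peak = torch.cuda.max_memory_allocated(0)  # highest allocation observed this epoch
    print(f'Epoch {epoch:02d}: peak allocated {peak / 1024**2:.1f} MiB')
```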
Replies: 1 comment
Analyzing GPU memory is indeed tricky. You can take a look at the PyTorch profiler. PyG also provides some simple methods to estimate GPU costs via the `torch_geometric.profile` package.
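For the PyTorch profiler route, a minimal sketch could look like the following; it wraps one call to the `train` function assumed from the question, and `profile_memory=True` records per-operator allocations:

```python
from torch.profiler import profile, ProfilerActivity

# Profile one training epoch and report the operators that allocate the most GPU memory.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             profile_memory=True) as prof:
    train(epoch)  # training function assumed from the question

print(prof.key_averages().table(sort_by="self_cuda_memory_usage", row_limit=10))
```

And a sketch using a few of the helper utilities in `torch_geometric.profile`; here `model` and `data` are assumptions standing in for the actual model and dataset:

```python
from torch_geometric.profile import (
    count_parameters,
    get_data_size,
    get_gpu_memory_from_nvidia_smi,
    get_model_size,
)

print(f'Parameters: {count_parameters(model)}')      # number of trainable parameters
print(f'Model size: {get_model_size(model)} bytes')  # size of the serialized model
print(f'Data size:  {get_data_size(data)} bytes')    # memory footprint of the data object

# Device-level view, as reported by nvidia-smi (requires a visible CUDA device):
free_mem, used_mem = get_gpu_memory_from_nvidia_smi()
print(f'GPU memory used: {used_mem} MB (free: {free_mem} MB)')
```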