@hirofumi0810 Hi, I'm training an LC-BLSTM RNN-T model and keep getting CUDA out-of-memory errors. It happens after a few epochs; when I monitor the GPU, I can see its memory usage growing over time. I tried `del` and `torch.cuda.empty_cache()`, but neither helped. Could you please help me figure out what's wrong?
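For reference, one common cause of GPU memory growing across epochs (which `empty_cache()` cannot fix) is accumulating the loss tensor itself, which keeps every step's computation graph alive. A minimal sketch below, with placeholder model/data names, contrasts that pattern with accumulating `loss.item()` instead:

```python
import torch

# Hypothetical training-loop sketch; model, optimizer, and data are placeholders.
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

running = 0.0
for step in range(3):
    x = torch.randn(8, 4)
    y = torch.randn(8, 2)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # `running += loss` would retain each step's graph and grow memory;
    # .item() extracts a plain Python float and lets the graph be freed.
    running += loss.item()

print(type(running).__name__)
```

If something like this is happening inside the LC-BLSTM RNN-T training loop (e.g. logging losses or hidden states without `.detach()`), it would match the slow per-epoch growth I'm seeing.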