I always get a `ResourceExhaustedError` (OOM) whenever I run this code, and I'm unable to use any batch size greater than 256. Can you point out which parts are the most memory intensive?
```
ResourceExhaustedError: OOM when allocating tensor with shape[510,510,510,510] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
	 [[{{node loss_5/merged_layer_neg_loss/batch_all_triplet_loss/ToFloat_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
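For context on where the memory goes: a batch-all triplet loss broadcasts the anchor-positive distance matrix against the anchor-negative distance matrix, which materializes a tensor of shape `(B, B, B)` (plus float copies of the validity masks, which is likely what the `ToFloat_1` node in the log above is producing). The sketch below is not the repository's code, just a minimal NumPy illustration of that broadcast using a tiny batch size `B = 8` and a hypothetical `margin` of 0.2:

```python
import numpy as np

# Small batch so this runs anywhere; the OOM log above involved a batch of 510.
B, D = 8, 4
emb = np.random.rand(B, D).astype(np.float32)

# Pairwise squared Euclidean distances: shape (B, B).
dot = emb @ emb.T
sq = np.diag(dot)
pairwise = sq[:, None] - 2.0 * dot + sq[None, :]

# Broadcasting anchor-positive against anchor-negative distances
# materializes a (B, B, B) tensor -- this is the O(B^3) blow-up.
ap = pairwise[:, :, None]   # (B, B, 1): distance anchor -> positive
an = pairwise[:, None, :]   # (B, 1, B): distance anchor -> negative
margin = 0.2                # hypothetical margin value
triplet = np.maximum(ap - an + margin, 0.0)  # shape (B, B, B)

print(triplet.shape)
# At B = 510, one float32 (B, B, B) tensor alone needs about
# 510**3 * 4 bytes ~= 0.5 GB, and several such tensors (the loss itself,
# the triplet mask, its float cast) are typically alive at the same time.
```

So the cubic-in-batch-size intermediates of the batch-all strategy, not the embedding network itself, are the likely culprit; reducing the batch size or switching to a batch-hard mining strategy (which only keeps `(B, B)` tensors) are the usual workarounds.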