Shouldn't predictions accumulate on CPU, not GPU? #7485
-
```python
predictions = self.trainer.accelerator.predict_step(args)

if predictions is None:
    self.warning_cache.warn("predict returned None if it was on purpose, ignore this warning...")

self.trainer.call_hook("on_predict_batch_end", predictions, batch, batch_idx, dataloader_idx)

if self.should_store_predictions:
    self.predictions[dataloader_idx].append(predictions)
```
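The snippet above appends each batch's predictions as-is, so tensors produced on the GPU stay there for the whole predict run. A common workaround is to move each batch's outputs to CPU before accumulating them, so GPU memory does not grow with the number of batches. Below is a minimal, duck-typed sketch; `move_to_cpu` is a hypothetical helper (not part of the Lightning API) that treats anything exposing `.detach()` and `.cpu()` as a tensor and traverses common containers:

```python
def move_to_cpu(obj):
    """Recursively detach tensor-like objects and move them to CPU.

    Hypothetical helper: anything with .detach() and .cpu()
    (e.g. a torch.Tensor) is moved; dicts, lists, and tuples
    are traversed; everything else is returned unchanged.
    """
    if hasattr(obj, "detach") and hasattr(obj, "cpu"):
        return obj.detach().cpu()
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj


# In the accumulation step above, one would then store CPU copies:
#     self.predictions[dataloader_idx].append(move_to_cpu(predictions))
```

With this change, peak GPU memory during `trainer.predict` is bounded by a single batch rather than the whole dataset.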
-
This issue has just affected me. I had trained a model using a single GPU; memory was tight, but there were no CUDA OOM errors. Then I tried getting some predictions through … It would be nice to have a flag (…) to control where predictions are accumulated.
-
This was fixed in #9085