This repository was archived by the owner on Nov 3, 2023. It is now read-only.

Commit a927dfa

Example: ray_ddp_sharded_example.py (#153)
Fix on_train_epoch_end() hook usage. Per the PTL interface definition, the hook does not take an additional `outputs` arg.
1 parent: d5f325a

File tree

1 file changed: 1 addition, 1 deletion

ray_lightning/examples/ray_ddp_sharded_example.py

@@ -20,7 +20,7 @@ def on_train_epoch_start(self, trainer, pl_module):
         torch.cuda.synchronize(trainer.root_gpu)
         self.start_time = time.time()
 
-    def on_train_epoch_end(self, trainer, pl_module, outputs):
+    def on_train_epoch_end(self, trainer, pl_module):
         torch.cuda.synchronize(trainer.root_gpu)
         max_memory = torch.cuda.max_memory_allocated(trainer.root_gpu) / 2**20
         epoch_time = time.time() - self.start_time
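
For reference, a minimal, self-contained sketch of the corrected callback. Only the two hook bodies above come from the diff; the class name `CUDACallback`, the `reset_peak_memory_stats` call, and the final `print` are illustrative assumptions, and it presumes a PyTorch Lightning version where `trainer.root_gpu` is still available.

    import time

    import torch
    from pytorch_lightning import Callback


    class CUDACallback(Callback):
        """Tracks per-epoch wall time and peak GPU memory (illustrative name)."""

        def on_train_epoch_start(self, trainer, pl_module):
            # Reset the peak-memory counter (assumed addition, not in the diff)
            # and start the epoch timer after synchronizing the GPU.
            torch.cuda.reset_peak_memory_stats(trainer.root_gpu)
            torch.cuda.synchronize(trainer.root_gpu)
            self.start_time = time.time()

        def on_train_epoch_end(self, trainer, pl_module):
            # No `outputs` parameter: per the PTL interface definition, this
            # hook receives only (trainer, pl_module).
            torch.cuda.synchronize(trainer.root_gpu)
            max_memory = torch.cuda.max_memory_allocated(trainer.root_gpu) / 2**20
            epoch_time = time.time() - self.start_time
            print(f"Epoch time: {epoch_time:.2f} s, peak memory: {max_memory:.0f} MiB")

The callback would be registered as usual, e.g. `Trainer(callbacks=[CUDACallback()], ...)`.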
