📚 Documentation
When using Lightning with `RayDDPStrategy`, we found that the `Trainer`'s `limit_train_batches` parameter is applied per worker rather than globally (a minimal check is sketched below).
I would like to confirm with the developers whether:
- this is the case with other parallel training strategies, and
- this applies to the other `limit_*_batches` parameters of the `Trainer`.
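
For reference, here is a minimal sketch of the kind of check we ran. It assumes plain DDP on two CPU processes rather than `RayDDPStrategy` (which comes from Ray Train in our actual setup), and `CountingModule` plus its batch counter are illustrative names, not part of any API. If `limit_train_batches` is per worker, each rank reports the full limit; if it were global, each rank would report roughly `limit / world_size`.

```python
# Minimal sketch: does limit_train_batches limit each worker, or the global total?
# Assumes plain DDP with 2 CPU processes; adapt the strategy/devices as needed.
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl


class CountingModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.batches_seen = 0  # per-rank counter

    def training_step(self, batch, batch_idx):
        x, y = batch
        self.batches_seen += 1
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def on_train_epoch_end(self):
        # Per-worker limit  -> every rank prints the full limit (e.g. 10).
        # Global limit      -> every rank prints roughly limit / world_size.
        print(f"rank={self.global_rank} batches_seen={self.batches_seen}")

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(512, 8), torch.randn(512, 1))
    loader = DataLoader(dataset, batch_size=4)

    trainer = pl.Trainer(
        max_epochs=1,
        limit_train_batches=10,  # the parameter in question
        accelerator="cpu",
        devices=2,
        strategy="ddp",
    )
    trainer.fit(CountingModule(), loader)
```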
It would be great to update the docs to mention this, e.g. https://lightning.ai/docs/pytorch/stable/common/trainer.html#limit-train-batches.
Thanks
cc @lantiga @Borda @justusschock