This repository was archived by the owner on Nov 3, 2023. It is now read-only.

Commit 3c3e9d4

Update broken PTL link (#137)
1 parent c8bcae7 commit 3c3e9d4

File tree: 1 file changed (+1 −1 lines changed)

README.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ plugin = RayShardedPlugin(num_workers=4, num_cpus_per_worker=1, use_gpu=True)
trainer = pl.Trainer(..., plugins=[plugin])
trainer.fit(ptl_model)
```
-See the [Pytorch Lightning docs](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#sharded-training) for more information on sharded training.
+See the [Pytorch Lightning docs](https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#sharded-training) for more information on sharded training.

## Hyperparameter Tuning with Ray Tune
`ray_lightning` also integrates with Ray Tune to provide distributed hyperparameter tuning for your distributed model training. You can run multiple PyTorch Lightning training runs in parallel, each with a different hyperparameter configuration, and each training run parallelized by itself. All you have to do is move your training code to a function, pass the function to tune.run, and make sure to add the appropriate callback (Either `TuneReportCallback` or `TuneReportCheckpointCallback`) to your PyTorch Lightning Trainer.
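The Ray Tune paragraph in the diff context above describes the workflow only in prose (training function, report callback, `tune.run`). Below is a minimal sketch of that workflow under stated assumptions: the `ToyModel`, the data, the `ray_lightning.tune` import path, the `get_tune_resources` helper, and the chosen config values are illustrative placeholders rather than anything from this commit, and exact import locations may differ across `ray_lightning` versions.

```python
# Sketch of distributed hyperparameter tuning with Ray Tune + ray_lightning.
# Assumptions: import paths from `ray_lightning.tune`, the `get_tune_resources`
# helper, and the ToyModel/data are illustrative, not taken from this commit.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from ray import tune
from ray_lightning import RayPlugin
from ray_lightning.tune import TuneReportCallback, get_tune_resources


class ToyModel(pl.LightningModule):
    """Tiny stand-in for a real LightningModule; replace with your own model."""

    def __init__(self, lr: float):
        super().__init__()
        self.lr = lr
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("loss", loss)  # the logged metric is what the Tune callback reports
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)


def train_fn(config):
    # Step 1: move the training code into a function that Tune calls once per trial.
    model = ToyModel(lr=config["lr"])
    data = DataLoader(
        TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=8
    )

    # Step 2: each trial is itself distributed across Ray workers via the plugin.
    plugin = RayPlugin(num_workers=2, use_gpu=False)

    # Step 3: add the Tune callback so the logged "loss" is reported back to Tune.
    trainer = pl.Trainer(
        max_epochs=1,
        callbacks=[TuneReportCallback({"loss": "loss"}, on="train_end")],
        plugins=[plugin],
    )
    trainer.fit(model, data)


# Step 4: pass the training function to tune.run; Tune samples the hyperparameters
# and runs the trials in parallel on the Ray cluster.
analysis = tune.run(
    train_fn,
    metric="loss",
    mode="min",
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=4,
    # Reserve enough cluster resources for the Ray workers each trial spawns
    # (assumed helper; adjust to your installed ray_lightning version).
    resources_per_trial=get_tune_resources(num_workers=2),
)
print("Best config:", analysis.best_config)
```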
