This repository was archived by the owner on Nov 3, 2023. It is now read-only.

Commit 10a0854: Fix docs formatting (#188)

Parent: 299a776

File tree: 2 files changed (+9, −6 lines)


README.md (6 additions, 6 deletions)
@@ -36,11 +36,11 @@ Or to install master:
 Here are the supported PyTorch Lightning versions:
 
 | Ray Lightning | PyTorch Lightning |
-|---|---|
-| 0.1 | 1.4 |
-| 0.2 | 1.5 |
-| 0.3 | 1.6 |
-| master | 1.6 |
+|---------------|-------------------|
+| 0.1           | 1.4               |
+| 0.2           | 1.5               |
+| 0.3           | 1.6               |
+| master        | 1.6               |
 
 
 ## PyTorch Distributed Data Parallel Strategy on Ray
@@ -194,7 +194,7 @@ As discussed [here](https://github.com/pytorch/pytorch/issues/51688#issuecomment
 Neither of these should be an issue with the `RayStrategy` due to Ray's serialization mechanisms. The only thing to keep in mind is that when using this strategy, your model does have to be serializable/pickleable.
 
 > Horovod installation issue
-please see [details](./docs/horovod_faq.md)
+please see [details](https://github.com/ray-project/ray_lightning/blob/main/docs/horovod_faq.md)
 
 <!--$UNCOMMENT## API Reference
 
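The hunk above stresses that any model used with `RayStrategy` must be serializable/pickleable. A minimal sketch of what that means in practice; `LitClassifier` and the pickle round trip are illustrative assumptions, not code from this commit:

```python
import pickle

import pytorch_lightning as pl
import torch
from torch import nn


class LitClassifier(pl.LightningModule):
    """A plain LightningModule with no unpicklable state (no open file
    handles, locks, or lambdas), so Ray can ship it to worker actors."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Sanity check: the model must survive a pickle round trip, which is
# what RayStrategy relies on when sending it to the Ray workers.
model = LitClassifier()
pickle.loads(pickle.dumps(model))
```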

ray_lightning/ray_ddp.py (3 additions, 0 deletions)
@@ -23,6 +23,7 @@
 @PublicAPI(stability="beta")
 class RayStrategy(DDPSpawnStrategy):
     """Pytorch Lightning strategy for DDP training on a Ray cluster.
+
     This strategy is used to manage distributed training using DDP and
     Ray for process launching. Internally, the specified number of
     Ray actors are launched in the cluster and are registered as part of a
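The docstring above says the strategy launches the requested number of Ray actors and registers them as part of a process group. As a toy illustration of the underlying Ray actor pattern only (the `TrainingWorker` class below is hypothetical; the real workers live elsewhere in this package and join a `torch.distributed` process group):

```python
import ray


@ray.remote(num_cpus=1)
class TrainingWorker:
    """Hypothetical stand-in for the actors RayStrategy launches; the
    real ones run the DDP training loop on their shard of the data."""

    def __init__(self, rank: int):
        self.rank = rank

    def ping(self) -> int:
        return self.rank


ray.init(ignore_reinit_error=True)

# Launch four actors, analogous to the strategy's num_workers parameter.
workers = [TrainingWorker.remote(rank) for rank in range(4)]
print(ray.get([w.ping.remote() for w in workers]))  # [0, 1, 2, 3]
```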
@@ -35,6 +36,7 @@ class RayStrategy(DDPSpawnStrategy):
     script: ``python train.py``, and only on the head node if running in a
     distributed Ray cluster. There is no need to run this script on every
     single node.
+
     Args:
         num_workers (int): Number of training workers to use.
         num_cpus_per_worker (int): Number of CPUs per worker.
@@ -51,6 +53,7 @@
             ``DistributedDataParallel`` initialization
     Example:
         .. code-block:: python
+
             import pytorch_lightning as ptl
             from ray_lightning import RayAccelerator
             ptl_model = MNISTClassifier(...)
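The docstring's example cuts off after constructing the model. For completeness, a hedged sketch of how such a script typically continues with this strategy, reusing the `LitClassifier` sketched earlier in place of the docstring's `MNISTClassifier` placeholder; the exact `Trainer` arguments are assumptions, not code from this commit:

```python
import pytorch_lightning as pl
import torch
from ray_lightning import RayStrategy
from torch.utils.data import DataLoader, TensorDataset

# Picklable model from the earlier sketch.
model = LitClassifier()

# A tiny random dataset so the example is self-contained.
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))),
    batch_size=16,
)

# Two Ray actors with one CPU each; these map to the num_workers and
# num_cpus_per_worker parameters documented in the Args section above.
strategy = RayStrategy(num_workers=2, num_cpus_per_worker=1, use_gpu=False)

# Run this script only on the head node; the strategy fans training
# out to the Ray workers.
trainer = pl.Trainer(strategy=strategy, max_epochs=1)
trainer.fit(model, train_loader)
```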
