1 parent 7490332 commit df285cf
beginner_source/dist_overview.rst
@@ -70,7 +70,7 @@ When deciding what parallelism techniques to choose for your model, use these co
#. Use `DistributedDataParallel (DDP) <https://pytorch.org/docs/stable/notes/ddp.html>`__,
if your model fits in a single GPU but you want to easily scale up training using multiple GPUs.
- * Use `torchrun <https://pytorch.org/docs/stable/elastic/run.html>`__, to launch multiple pytorch processes if you are you using more than one node.
+ * Use `torchrun <https://pytorch.org/docs/stable/elastic/run.html>`__, to launch multiple pytorch processes if you are using more than one node.
* See also: `Getting Started with Distributed Data Parallel <../intermediate/ddp_tutorial.html>`__
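
The changed lines describe scaling a single-GPU model with `DistributedDataParallel (DDP)` and launching the processes with `torchrun` when more than one node is used. Below is a minimal sketch of that workflow; the script name, model, and synthetic data are illustrative assumptions, not part of the tutorial file being edited.

.. code-block:: python

    # minimal_ddp.py -- illustrative sketch of the DDP + torchrun workflow
    import os

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP


    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # any model that fits on a single GPU; a small linear layer here
        model = nn.Linear(10, 1).to(local_rank)
        ddp_model = DDP(model, device_ids=[local_rank])

        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        for _ in range(10):
            optimizer.zero_grad()
            inputs = torch.randn(32, 10, device=local_rank)
            targets = torch.randn(32, 1, device=local_rank)
            loss = loss_fn(ddp_model(inputs), targets)
            loss.backward()  # gradients are all-reduced across processes
            optimizer.step()

        dist.destroy_process_group()


    if __name__ == "__main__":
        main()

On each node this would be launched with something like ``torchrun --nnodes=2 --nproc_per_node=4 minimal_ddp.py``; a single-node run only needs ``--nproc_per_node``.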