14 changes: 10 additions & 4 deletions intermediate_source/dist_tuto.rst
@@ -47,6 +47,7 @@ the following template.
"""run.py:"""
#!/usr/bin/env python
import os
import sys
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
@@ -66,9 +67,13 @@ the following template.
 if __name__ == "__main__":
     world_size = 2
     processes = []
-    mp.set_start_method("spawn")
-    for rank in range(world_size):
-        p = mp.Process(target=init_process, args=(rank, world_size, run))
+    if "google.colab" in sys.modules:
+        print("Running in Google Colab")
+        mp.get_context("spawn")
+    else:
+        mp.set_start_method("spawn")
+    for rank in range(world_size):
+        p = mp.Process(target=init_process, args=(rank, world_size, run))
         p.start()
         processes.append(p)

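Note that ``mp.get_context("spawn")`` on the Colab branch returns a context object without changing the global start method, so as written the call has no effect. A minimal sketch of the pattern that branch appears to aim for (hypothetical, not part of this diff):

    import torch.multiprocessing as mp

    def worker(rank, world_size):
        # Stand-in for init_process; a real script would join the
        # process group here.
        print(f"rank {rank} of {world_size}")

    if __name__ == "__main__":
        world_size = 2
        # get_context returns a context whose Process uses the "spawn"
        # start method without touching the global default; it is
        # therefore safe to call repeatedly (set_start_method raises a
        # RuntimeError if the start method was already set, e.g. on a
        # notebook re-run).
        ctx = mp.get_context("spawn")
        processes = []
        for rank in range(world_size):
            p = ctx.Process(target=worker, args=(rank, world_size))
            p.start()
            processes.append(p)
        for p in processes:
            p.join()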
@@ -156,7 +161,8 @@ we should not modify the sent tensor nor access the received tensor before ``req.wait()`` has completed.
 In other words,
 
 - writing to ``tensor`` after ``dist.isend()`` will result in undefined behaviour.
-- reading from ``tensor`` after ``dist.irecv()`` will result in undefined behaviour.
+- reading from ``tensor`` after ``dist.irecv()`` will result in undefined
+  behaviour until ``req.wait()`` has been executed.
 
 However, after ``req.wait()``
 has been executed we are guaranteed that the communication took place,
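For reference, the guarantee described above corresponds to the following non-blocking pattern, a minimal sketch assuming the process group has already been initialized (for example by the ``init_process`` helper from the template):

    import torch
    import torch.distributed as dist

    def run(rank, world_size):
        # Non-blocking point-to-point communication between ranks 0 and 1.
        tensor = torch.zeros(1)
        if rank == 0:
            tensor += 1
            req = dist.isend(tensor=tensor, dst=1)
            # Unsafe to write to `tensor` here: the send may still be in flight.
        else:
            req = dist.irecv(tensor=tensor, src=0)
            # Unsafe to read `tensor` here: the receive may not have completed.
        req.wait()
        # After wait() returns, the transfer is complete and rank 1 sees
        # tensor[0] == 1.
        print(f"Rank {rank} has data {tensor[0]}")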