Fix typo in pytorch-ddp-accelerate-transformers.md #1436

Merged: 1 commit, Aug 31, 2023

2 changes: 1 addition & 1 deletion pytorch-ddp-accelerate-transformers.md
@@ -173,7 +173,7 @@ The optimizer needs to be declared based on the model *on the specific device* (
Lastly, to run the script PyTorch has a convenient `torchrun` command line module that can help. Just pass in the number of nodes it should use as well as the script to run and you are set:

```diff
-torchrun --nproc_per_nodes=2 --nnodes=1 example_script.py
+torchrun --nproc_per_node=2 --nnodes=1 example_script.py
```

The above runs the training script on two GPUs that live on a single machine; this is the bare-bones setup for performing plain distributed training with PyTorch.
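
For context on what `example_script.py` might contain, here is a minimal sketch of a DDP training script of the kind the post describes. The model, data, and hyperparameters are hypothetical placeholders, not the post's actual code:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets LOCAL_RANK (as well as RANK, WORLD_SIZE,
    # MASTER_ADDR, and MASTER_PORT) for each spawned process
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # Hypothetical tiny model; the optimizer is declared from the
    # model already moved onto its specific device
    model = torch.nn.Linear(10, 1).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    # Dummy tensors stand in for a real DataLoader
    # with a DistributedSampler
    inputs = torch.randn(32, 10).to(local_rank)
    targets = torch.randn(32, 1).to(local_rank)

    for _ in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # gradients are all-reduced across processes here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Saved as `example_script.py`, this would be launched with the corrected command above: `torchrun --nproc_per_node=2 --nnodes=1 example_script.py`.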