accelerator
1 parent d87fe95 commit 2afb2e0
docs/source/en/tutorials/basic_training.md
@@ -340,8 +340,8 @@ Now you can wrap all these components together in a training loop with 🤗 Acce
 ...                 loss = F.mse_loss(noise_pred, noise)
 ...                 accelerator.backward(loss)
 
-...                 if (step + 1) % config.gradient_accumulation_steps == 0:
-...                     accelerator.clip_grad_norm_(model.parameters(), 1.0)
+...                 if accelerator.sync_gradients:
+...                     accelerator.clip_grad_norm_(model.parameters(), 1.0)
 ...                 optimizer.step()
 ...                 lr_scheduler.step()
 ...                 optimizer.zero_grad()
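For context, the guard matters because the tutorial's loop runs under `accelerator.accumulate(model)`. Below is a minimal, self-contained sketch (not the tutorial's loop; the tiny linear model, random data, and hyperparameters are placeholders) of why `accelerator.sync_gradients` is the more robust condition: it is `True` exactly on the steps where gradients are synchronized and the optimizer update takes effect, including the final partial accumulation window at the end of a prepared dataloader, which the `(step + 1) % config.gradient_accumulation_steps` check misses.

```python
# Minimal sketch, assuming only PyTorch and 🤗 Accelerate. The model and
# data here are placeholders, not the tutorial's UNet pipeline.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# 5 batches with an accumulation window of 4: the last update happens on
# a partial window, which a modulo check on the step index would skip.
dataset = TensorDataset(torch.randn(10, 8), torch.randn(10, 8))
train_dataloader = DataLoader(dataset, batch_size=2)

model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

for step, (inputs, targets) in enumerate(train_dataloader):
    with accelerator.accumulate(model):
        loss = F.mse_loss(model(inputs), targets)
        accelerator.backward(loss)

        # True only on steps where gradients are synchronized and the
        # optimizer update will actually run, so clipping is applied
        # exactly once per effective optimizer step.
        if accelerator.sync_gradients:
            accelerator.clip_grad_norm_(model.parameters(), 1.0)

        optimizer.step()
        optimizer.zero_grad()
```

Note that the loop still calls `optimizer.step()` and `optimizer.zero_grad()` on every iteration; inside `accelerator.accumulate(...)` these are no-ops on non-sync steps, so only the gradient clipping needed the explicit guard.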