
Commit 2afb2e0

Added accelerator-based gradient accumulation for basic_example (#8966)

Co-authored-by: Sayak Paul <[email protected]>
1 parent d87fe95 commit 2afb2e0

File tree

1 file changed

docs/source/en/tutorials/basic_training.md

Lines changed: 2 additions & 2 deletions
@@ -340,8 +340,8 @@ Now you can wrap all these components together in a training loop with 🤗 Accelerate
 ...             loss = F.mse_loss(noise_pred, noise)
 ...             accelerator.backward(loss)
 
-...             if (step + 1) % config.gradient_accumulation_steps == 0:
-...                 accelerator.clip_grad_norm_(model.parameters(), 1.0)
+...             if accelerator.sync_gradients:
+...                 accelerator.clip_grad_norm_(model.parameters(), 1.0)
 ...             optimizer.step()
 ...             lr_scheduler.step()
 ...             optimizer.zero_grad()
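The change replaces a hand-rolled step counter with Accelerate's `sync_gradients` flag, which is `True` only on the steps where gradients are actually synchronized, so gradient clipping runs exactly once per effective optimizer step and stays correct if the accumulation schedule changes. Below is a minimal sketch of the pattern, assuming a toy linear model, random data, and `gradient_accumulation_steps=4` in place of the tutorial's UNet2DModel training loop:

```python
# Minimal sketch of Accelerate-managed gradient accumulation. The linear
# model and random tensors are placeholders for the tutorial's UNet2DModel
# and noise-prediction loss.
import torch
import torch.nn.functional as F
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Accelerate tracks the accumulation boundary internally.
accelerator = Accelerator(gradient_accumulation_steps=4)

model = nn.Linear(8, 8)  # placeholder for the tutorial's UNet2DModel
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 8))
dataloader = DataLoader(dataset, batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # accumulate() suppresses gradient synchronization (and, in a
    # distributed setting, the all-reduce) on non-boundary steps.
    with accelerator.accumulate(model):
        loss = F.mse_loss(model(inputs), targets)
        accelerator.backward(loss)

        # sync_gradients is True only when gradients are about to be
        # synchronized, so clipping fires once per effective optimizer
        # step rather than relying on a (step + 1) % N counter.
        if accelerator.sync_gradients:
            accelerator.clip_grad_norm_(model.parameters(), 1.0)

        optimizer.step()
        optimizer.zero_grad()
```

Because the optimizer is prepared by Accelerate, its `step()` inside `accumulate()` is skipped on non-boundary steps, so the loop body reads the same whether or not accumulation is enabled.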
