
Commit d024a7c

Merge pull request #76 from andreped/andreped-patch-2

README technique order update [no ci]

2 parents 94e3569 + 5330dfe

File tree: 1 file changed (+1, -1)


README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -74,8 +74,8 @@ Our implementations enable theoretically **infinitely large batch size**, with *
 
 | Technique | Usage |
 | - | - |
-| `Adaptive Gradient Clipping` | `model = GradientAccumulateModel(accum_steps=4, agc=True, inputs=model.input, outputs=model.output)` |
 | `Batch Normalization` | `layer = AccumBatchNormalization(accum_steps=4)` |
+| `Adaptive Gradient Clipping` | `model = GradientAccumulateModel(accum_steps=4, agc=True, inputs=model.input, outputs=model.output)` |
 | `Mixed precision` | `model = GradientAccumulateModel(accum_steps=4, mixed_precision=True, inputs=model.input, outputs=model.output)` |
 
 * As batch normalization (BN) is not natively compatible with GA, we have implemented a custom BN layer which can be used as a drop-in replacement.
```
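For context on what gradient accumulation (GA) computes: when the loss averages over samples and the micro-batches are equally sized, averaging per-micro-batch gradients over `accum_steps` steps reproduces the full-batch gradient exactly. A minimal plain-Python sketch of that identity (no TensorFlow and no library code involved; the model, data, and `grad_mse` helper below are purely illustrative, with `accum_steps=4` chosen to mirror the table above):

```python
# Illustrative sketch: gradient accumulation for a 1-D linear model y = w * x
# with mean-squared-error loss. Averaging per-micro-batch gradients over
# accum_steps reproduces the full-batch gradient; the library wraps the same
# idea inside Keras models/optimizers.

def grad_mse(w, xs, ys):
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.5
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]

accum_steps = 4
micro = len(xs) // accum_steps  # micro-batch size: 2

# Accumulate gradients over accum_steps micro-batches, then average.
acc = 0.0
for i in range(accum_steps):
    lo, hi = i * micro, (i + 1) * micro
    acc += grad_mse(w, xs[lo:hi], ys[lo:hi])
accumulated = acc / accum_steps

full = grad_mse(w, xs, ys)
print(abs(accumulated - full) < 1e-12)  # prints True: the gradients match
```

This identity is why GA gives the same weight update as one large batch while only ever holding a micro-batch in memory; BN is the exception, since its batch statistics are computed per micro-batch, which is what the custom `AccumBatchNormalization` layer addresses.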
