Commit 56204d8

bhimrazy and Borda authored
docs: Add float16 precision warning for optimizer eps and small values (#21295)
* docs: add warnings about Float16 limitations and introduce BFloat16 for better stability
* Empty-Commit

Co-authored-by: jirka <[email protected]>
1 parent 29ddf82 commit 56204d8

File tree

1 file changed: +8 -0 lines changed


docs/source-pytorch/common/precision_basic.rst

Lines changed: 8 additions & 0 deletions
@@ -39,6 +39,14 @@ However, this setting can sometimes lead to unstable training.
 
     Trainer(precision="16-true")
 
+.. warning::
+
+    Float16 cannot represent values smaller than ~6e-5. Values like Adam's default ``eps=1e-8`` become zero, which can cause
+    NaN during training. Increase ``eps`` to 1e-4 or higher, and avoid extremely small values in your model weights and data.
+
+.. note::
+
+    BFloat16 (``"bf16-mixed"`` or ``"bf16-true"``) has better numerical stability with a wider dynamic range.
 
 ----
 
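
For reference, a minimal sketch of the behaviour the new warning describes, assuming only a standard PyTorch install; the `Linear(4, 2)` model and `lr=1e-3` below are illustrative placeholders, not part of the commit:

import torch

# The smallest positive normal float16 value is ~6.1e-5, so Adam's default
# eps=1e-8 underflows to exactly zero when stored in 16-bit.
print(torch.finfo(torch.float16).tiny)                 # ~6.1035e-05
print(torch.tensor(1e-8, dtype=torch.float16).item())  # 0.0

# Mitigation suggested by the docs change: pick an eps float16 can represent.
model = torch.nn.Linear(4, 2)  # placeholder model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-4)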

0 commit comments
