FP8 format #354
Unanswered · stella-jxu asked this question in Q&A
I am trying the PyTorch example below:
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html?highlight=e4m3#pytorch
It works fine with the E4M3 data format. However, when I try E5M2 I get the following error (my script is reproduced after the traceback):
Traceback (most recent call last):
  File "transformer/transformer.py", line 16, in <module>
    fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.E5M2)
  File "pydantic/dataclasses.py", line 286, in pydantic.dataclasses._add_pydantic_validation_attributes.handle_extra_init
    f'default={self.default!r},'
  File "<string>", line 11, in __init__
  File "pydantic/dataclasses.py", line 305, in pydantic.dataclasses._add_pydantic_validation_attributes.new_post_init
    def __set_name__(self, owner, name):
  File "/usr/local/lib/python3.10/dist-packages/transformer_engine/common/recipe.py", line 135, in __post_init__
    assert self.fp8_format != Format.E5M2, "Pure E5M2 training is not supported."
AssertionError: Pure E5M2 training is not supported.
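For reference, here is roughly the script that produces this. It is a minimal reconstruction of the docs example linked above; the only change from the docs is the fp8_format argument on line 16:

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Set dimensions (taken from the docs example).
in_features = 768
out_features = 3072
hidden_size = 2048

# Initialize model and inputs.
model = te.Linear(in_features, out_features, bias=True)
inp = torch.randn(hidden_size, in_features, device="cuda")

# Swapping Format.E4M3 for Format.E5M2 here raises the assertion above,
# so execution never reaches the autocast block.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.E5M2)

# Enable FP8 autocasting for the forward pass.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()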
Just wondering how to enable the E5M2 format in this case.
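For what it's worth, my reading of the assertion message (an assumption based on the recipe documentation, not something I have confirmed) is that E5M2 is only intended for gradients, via Format.HYBRID, which uses E4M3 in the forward pass and E5M2 in the backward pass. Something like this constructs without error:

from transformer_engine.common import recipe

# Assumption: HYBRID = E4M3 for forward-pass tensors, E5M2 for gradients
# in the backward pass; pure E5M2 is rejected by the recipe check above.
fp8_recipe = recipe.DelayedScaling(
    margin=0,
    interval=1,
    fp8_format=recipe.Format.HYBRID,  # unlike Format.E5M2, this passes __post_init__
)

But I am not sure whether this is the intended way to get E5M2 into training, or whether pure E5M2 is possible at all. Thanks!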