Check numerical equivalence / closeness between different kernel preferences #2651
Conversation
```
for i in range(1, len(kp_and_res)):
    kp, res = kp_and_res[i]
    self.assertTrue(
        compute_error(res, kp_and_res[0][1]) > 28,
```
cc @vkuzo: we don't have equivalence yet due to some differences in implementation. Do you think we should make the torchao quant primitives (choose_scale_float8 + quantize_float8) match the triton ones?
Do we know what the differences are?
IMO we should also choose either TORCH or FBGEMM (but not AUTO) as the reference, and match the others to it.
Yeah, see the PR summary for the differences.
I can update it to use TORCH as the reference.
```
# comparing numerics between different kernel preferences, using TORCH as the standard
kp_and_res = list(quantized_outputs.items())
for i in range(1, len(kp_and_res)):
```
Can you explicitly peel off the first iteration, so it's very obvious that it's the reference, and then just iterate over the rest of the keys?
OK, updated to run the reference separately.
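For illustration, a rough sketch of the restructuring being discussed (a sketch only, not the exact code in the PR; `quantized_outputs` and `self` come from the enclosing test, and the import paths for `KernelPreference` and the SQNR helper `compute_error` are assumptions):

```
# Sketch only: peel off the TORCH result as the explicit reference and compare
# every other kernel preference against it with an SQNR threshold.
from torchao.quantization import KernelPreference  # assumed import path
from torchao.quantization.utils import compute_error  # SQNR helper, higher is better

ref = quantized_outputs[KernelPreference.TORCH]
for kp, res in quantized_outputs.items():
    if kp == KernelPreference.TORCH:
        continue
    self.assertTrue(
        compute_error(res, ref) > 28,
        f"{kp} deviates too much from the TORCH reference",
    )
```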
Stacked PRs:
- `optional_tensor_names` in TorchAOBaseTensor (#2710)
- Check numerical equivalence / closeness between different kernel preferences (#2651, this PR)
Summary:
This PR checks that the different kernel preferences for Float8Tensor (AUTO, TORCH and FBGEMM) produce similar numerics.
The triton implementation and the torchao implementation are actually a bit different right now; we need to decide whether to fix that.

1. difference in quantize op
The main difference seems to be that the triton implementation is using:

```
a_scale = MAX_FP8 / max_abs
then do
a_scale = 1.0 / a_scale
a_fp8 = a * a_scale
```
while torch is doing:

```
a_scale = max_abs / MAX_FP8
a_fp8 = a / a_scale
```
The hp_value_lb and hp_value_ub settings are also slightly different.
triton choose scale and quantize code: https://github.com/pytorch/FBGEMM/blob/a4286c01ef01dad435b2ec8798605127d3032cd8/fbgemm_gpu/experimental/gemm/triton_gemm/fp8_gemm.py#L2382-L2392
torchao choose scale and quantize code:
https://github.com/pytorch/ao/blob/3c466f844684af0fb80014094f2ca8663881eb33/torchao/quantization/quant_primitives.py#L2183
https://github.com/pytorch/ao/blob/3c466f844684af0fb80014094f2ca8663881eb33/torchao/quantization/quant_primitives.py#L2283
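To make the effect of the two conventions concrete, here is a minimal sketch (not the actual triton or torchao kernels) showing that computing the scale as max_abs / MAX_FP8 and dividing, versus computing MAX_FP8 / max_abs and multiplying, can round a few elements differently:

```
import torch

# Minimal sketch, not the actual kernels: the two scale conventions round
# differently, so a handful of elements can land on adjacent fp8 values.
MAX_FP8 = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

a = torch.randn(4096, dtype=torch.float32)
max_abs = a.abs().amax()

# torch-style: scale = max_abs / MAX_FP8, quantize by dividing
scale = max_abs / MAX_FP8
a_fp8_div = (a / scale).clamp(-MAX_FP8, MAX_FP8).to(torch.float8_e4m3fn)

# triton-style (per the summary above): compute MAX_FP8 / max_abs, keep its
# reciprocal as the stored scale, and quantize by multiplying
mult = MAX_FP8 / max_abs
stored_scale = 1.0 / mult  # can differ from `scale` in the last bit
a_fp8_mul = (a * mult).clamp(-MAX_FP8, MAX_FP8).to(torch.float8_e4m3fn)

num_diff = (a_fp8_div.float() != a_fp8_mul.float()).sum().item()
print(f"elements that differ after fp8 rounding: {num_diff}")
```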
2. (potentially) difference in matrix multiplication ops

TORCH and AUTO/FBGEMM are using different quantized mm ops.
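For reference, a hedged sketch of roughly what the TORCH preference does for per-tensor scales on recent PyTorch, using torch._scaled_mm; the FBGEMM/AUTO preferences dispatch to fbgemm triton kernels instead, which is where additional differences can come from. Shapes and names are illustrative, and an fp8-capable GPU is assumed:

```
import torch

# Illustrative per-tensor fp8 matmul via torch._scaled_mm (roughly the TORCH path);
# requires an fp8-capable GPU and a recent PyTorch.
MAX_FP8 = torch.finfo(torch.float8_e4m3fn).max
device = "cuda"

x = torch.randn(32, 128, device=device, dtype=torch.bfloat16)  # activation
w = torch.randn(64, 128, device=device, dtype=torch.bfloat16)  # weight (out_features, in_features)

x_scale = (x.abs().amax() / MAX_FP8).float()
w_scale = (w.abs().amax() / MAX_FP8).float()
x_fp8 = (x / x_scale).clamp(-MAX_FP8, MAX_FP8).to(torch.float8_e4m3fn)
w_fp8 = (w / w_scale).clamp(-MAX_FP8, MAX_FP8).to(torch.float8_e4m3fn)

# _scaled_mm expects the second operand in column-major layout, hence the transpose
out = torch._scaled_mm(
    x_fp8,
    w_fp8.t(),
    scale_a=x_scale,
    scale_b=w_scale,
    out_dtype=torch.bfloat16,
)
print(out.shape)  # torch.Size([32, 64])
```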
Added a reverse option to bring sqnr closer:

```
granularity: PerTensor() sizes: ((128,), 256, 128) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((128,), 256, 128) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((32, 128), 64, 256) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((32, 128), 64, 256) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((128,), 256, 128) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((128,), 256, 128) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((32, 128), 64, 256) kp: KernelPreference.AUTO tensor(64.5000, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((32, 128), 64, 256) kp: KernelPreference.FBGEMM tensor(68., device='cuda:0', dtype=torch.bfloat16)
```
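For context on the inf values above and the `> 28` threshold in the test: compute_error reports SQNR in dB, so inf means the two results are bit-identical. A minimal stand-in, assuming the standard definition (the torchao helper may differ in details):

```
import torch

# Minimal SQNR (signal-to-quantization-noise ratio) in dB between a reference
# tensor and a test tensor; higher is better, and identical tensors give inf.
def sqnr(ref: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
    ref32, test32 = ref.float(), test.float()
    return 20 * torch.log10(torch.linalg.norm(ref32) / torch.linalg.norm(ref32 - test32))
```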
Test Plan:
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_kernel_preference_numerical_equivalence