bpf: improve the general precision of tnum_mul #5758
Conversation
Upstream branch: dc0fe95
force-pushed from 91606c1 to 9e5d665
Upstream branch: c80d797
force-pushed from efd6713 to e7d6e2c
force-pushed from 9e5d665 to f86fd37
Upstream branch: abdaf49
force-pushed from e7d6e2c to 980667b
force-pushed from f86fd37 to 2842450
Upstream branch: 3ec8560
force-pushed from 980667b to e15bad2
force-pushed from 2842450 to 53b8665
Upstream branch: 1274163
force-pushed from e15bad2 to 685377b
force-pushed from 53b8665 to 26f4a08
This commit addresses a challenge explained in an open question ("How can we incorporate correlation in unknown bits across partial products?") left by Harishankar et al. in their paper: https://arxiv.org/abs/2105.05398

When LSB(a) is uncertain, we know for sure that it is either 0 or 1. We can therefore compute the partial product for each of the two cases and take the union of the results. Experiments show that applying this technique in long multiplication improves precision in a significant number of cases, at the cost of losing precision in a comparatively small number of cases.

This commit also removes the value-mask decomposition technique employed by Harishankar et al., as its direct incorporation did not result in any improvements for the new algorithm.

Signed-off-by: Nandakumar Edamana <[email protected]>
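The idea in the commit message can be sketched as follows. This is a simplified, self-contained illustration of the technique, not the exact kernel patch: the `tnum` struct and the `tnum_add`/`tnum_union` helpers mirror the kernel's tristate-number representation (`mask` bits unknown, `value` holds known bits), and `tnum_mul` walks `a` bit by bit, taking the union of the two possible partial sums whenever the current low bit is unknown.

```c
#include <stdint.h>
#include <assert.h>

/* Minimal tristate number: a bit set in `mask` is unknown;
 * for known bits, `value` gives the bit (value & mask == 0). */
struct tnum { uint64_t value; uint64_t mask; };

static struct tnum tnum_mk(uint64_t value, uint64_t mask)
{
	return (struct tnum){ value & ~mask, mask };
}

/* Addition with carry uncertainty propagated into the mask
 * (same scheme as the kernel's tnum_add). */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
	uint64_t sm = a.mask + b.mask;
	uint64_t sv = a.value + b.value;
	uint64_t sigma = sm + sv;
	uint64_t chi = sigma ^ sv;
	uint64_t mu = chi | a.mask | b.mask;

	return tnum_mk(sv, mu);
}

/* Smallest tnum containing every concrete value of a and of b:
 * a bit becomes unknown if it is unknown in either input or if
 * the two known values disagree on it. */
static struct tnum tnum_union(struct tnum a, struct tnum b)
{
	uint64_t mu = a.mask | b.mask | (a.value ^ b.value);

	return tnum_mk(a.value, mu);
}

/* Long multiplication. When the current LSB of `a` is unknown,
 * it is either 0 (accumulator unchanged) or 1 (accumulator + b),
 * so take the union of those two partial results. */
static struct tnum tnum_mul(struct tnum a, struct tnum b)
{
	struct tnum acc = tnum_mk(0, 0);

	while (a.value || a.mask) {
		if (a.value & 1)          /* LSB known to be 1 */
			acc = tnum_add(acc, b);
		else if (a.mask & 1)      /* LSB unknown: 0 or 1 */
			acc = tnum_union(acc, tnum_add(acc, b));
		a = tnum_mk(a.value >> 1, a.mask >> 1);
		b = tnum_mk(b.value << 1, b.mask << 1);
	}
	return acc;
}
```

For example, multiplying the tnum {0 or 1} (value 0, mask 1) by the constant 2 yields value 0, mask 2, i.e. exactly {0, 2}; a naive scheme that treats partial products independently can lose this correlation.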
Upstream branch: d87fdb1
force-pushed from 685377b to 6970f56
At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=991951 expired. Closing PR.
Pull request for series with
subject: bpf: improve the general precision of tnum_mul
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=991951