exact BigFloat to IEEE FP conversion in pure Julia (#50691)
There's a lot of code, but most of it should be useful in general. For
example, I think I'll use the changes in float.jl and rounding.jl to
improve the #49749 PR. The float.jl changes could also be used to
refactor float.jl itself, replacing many of its magic constants.
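A hedged sketch of what that refactoring could look like, assuming the masks get derived from each type's bit layout instead of being spelled out as hex literals. The `uinttype_of` and `exponent_mask_of` helpers are hypothetical (not from this PR); `Base.exponent_bits` and `Base.significand_bits` are existing internal Base functions:
```julia
# Hypothetical helpers: derive float.jl's hard-coded bit masks from the
# type's layout instead of writing them as magic hex constants.
uinttype_of(::Type{Float16}) = UInt16
uinttype_of(::Type{Float32}) = UInt32
uinttype_of(::Type{Float64}) = UInt64

function exponent_mask_of(::Type{T}) where {T<:Base.IEEEFloat}
    U = uinttype_of(T)
    ((one(U) << Base.exponent_bits(T)) - one(U)) << Base.significand_bits(T)
end

exponent_mask_of(Float64) == 0x7ff0000000000000  # true
exponent_mask_of(Float16) == 0x7c00              # true
```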
Benchmarking script:
```julia
using BenchmarkTools
f(::Type{T} = BigFloat, n::Int = 2000) where {T} = rand(T, n)  # n random values of type T
g!(u, v) = map!(eltype(u), u, v)  # convert each element of v to eltype(u), in place
@btime g!(u, v) setup=(u = f(Float16); v = f();)
@btime g!(u, v) setup=(u = f(Float32); v = f();)
@btime g!(u, v) setup=(u = f(Float64); v = f();)
```
On master (dc06468):
```
46.116 μs (0 allocations: 0 bytes)
38.842 μs (0 allocations: 0 bytes)
37.039 μs (0 allocations: 0 bytes)
```
With both this commit and #50674 applied:
```
42.310 μs (0 allocations: 0 bytes)
42.661 μs (0 allocations: 0 bytes)
41.608 μs (0 allocations: 0 bytes)
```
So, with this benchmark at least (on an AMD Zen 2 laptop), conversion to
`Float16` gets faster, while `Float32` and `Float64` see a slowdown.
Fixes #50642 (exact conversion to `Float16`)
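For context, the bug fixed here is a double-rounding one: converting through a wider intermediate format rounds twice, which can disagree with rounding the `BigFloat` directly. A minimal sketch of that failure mode; the explicit `Float16(Float64(x))` below just demonstrates double rounding in general, not necessarily the exact path the old code took:
```julia
# 1 + 2^-11 is exactly halfway between the adjacent Float16 values
# 1.0 and 1.0 + 2^-10; the tiny 2^-60 term should tip the correctly
# rounded result up to 1.0 + 2^-10.
x = BigFloat(1) + BigFloat(2)^-11 + BigFloat(2)^-60

Float16(x)           # exact conversion: Float16(1.001), i.e. 1 + 2^-10
Float16(Float64(x))  # the 2^-60 term is lost rounding to Float64, and the
                     # resulting exact tie then rounds to even: Float16(1.0)
```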
Co-authored-by: Oscar Smith <[email protected]>