
fix accuracy of logcosh(::Union{Float32, Float64}) #101

Open
nsajko wants to merge 10 commits into master

Conversation

@nsajko (Contributor) commented Aug 2, 2025

Fixes #100

@nsajko (Contributor, Author) commented Aug 2, 2025

Plots of the error in ULPs after this change

logcosh(::Float32):

[plot: ULP error of logcosh(::Float32)]

logcosh(::Float64):

[plot: ULP error of logcosh(::Float64)]

Before plotting, the data is first smoothed with a sliding-window maximum and then downsampled by taking the maximum of each block. The smoothing and downsampling therefore only remove the downward spikes of the noise, preserving the maximum error in the region around each point; a rough sketch of the pipeline is given below.
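For illustration, here is a minimal Julia sketch of such a pipeline. The helper names, window sizes, and the ULP-error helper are mine, not taken from the PR (the PR's own `ULPError` code lives in the JuliaLang/julia PR referenced further down); the usage example assumes LogExpFunctions is loaded for `logcosh`.

```julia
# Sketch only; names (ulp_error, sliding_max, downsample_max) and the window
# sizes are illustrative, not the PR's actual plotting code.

# Error of `f` at `x` in units of the last place, measured against a
# high-precision reference `f_ref` evaluated in BigFloat.
function ulp_error(f, f_ref, x::Float64)
    y    = f(x)
    yref = f_ref(big(x))
    abs(big(y) - yref) / eps(abs(y))
end

# Smoothing: sliding-window maximum, so downward noise spikes are discarded.
sliding_max(v, window) =
    [maximum(@view v[i:min(i + window - 1, lastindex(v))]) for i in eachindex(v)]

# Downsampling: keep the maximum of each block, preserving the worst error
# in the region around each plotted point.
downsample_max(v, factor) =
    [maximum(@view v[i:min(i + factor - 1, lastindex(v))]) for i in firstindex(v):factor:lastindex(v)]

# Example usage (assumes `using LogExpFunctions` for `logcosh`):
# xs   = range(1e-8, 1.0; length = 2^20)
# errs = Float64[ulp_error(logcosh, x -> log(cosh(x)), x) for x in xs]
# ys   = downsample_max(sliding_max(errs, 64), 256)
```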

@tpapp (Collaborator) left a comment

LGTM, thanks!

nsajko added 2 commits August 6, 2025 16:04

oscardssmith has been reviewing the `ULPError` code on JuliaLang/julia#59087. So sync the code with the changes there.

@tpapp requested a review from devmotion August 8, 2025 06:50

The kernel of `logcosh`.

The polynomial coefficients were found using Sollya:
A Member commented:

Why Sollya and not Remez.jl, as for the kernel of log1pmx? Would Remez.jl give a different result?

@nsajko (Contributor, Author) replied:

Last time I checked out Remez.jl, it only implemented the Remez algorithm. That is, Remez.jl returns the coefficients in multiple precision, while Sollya's fpminimax further takes care of rounding the coefficients to machine precision without unnecessary loss of accuracy. See https://hal.science/inria-00119513 (quoted below; a small sketch of the difference follows the quote):

> Therefore, we see that the general situation for L∞ approximation by real polynomials can be considered quite satisfying. The problem for the scientist that implements in software or hardware such approximations is that he uses finite-precision arithmetic and unfortunately, most of the time, the minimax approximation given by Chebyshev’s theorem and computed by Remez’ algorithm has coefficients which are transcendental (or at least irrational) numbers, hence not exactly representable with a finite number of bits.
>
> Thus, the coefficients of the approximation usually need to be rounded according to the requirements of the application targeted (for example, in current software implementations, one often uses FP numbers in IEEE single or double precision for storing the coefficients of the polynomial approximation). But this rounding, if carelessly done, may lead to an important loss of accuracy. For instance, if we choose to round to the nearest each coefficient of the minimax approximation to the required format (this yields a polynomial that we will call rounded minimax in the sequel of the paper), the quality of the approximation we get can be very poor.
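To make the quoted concern concrete, below is a small sketch (not the code behind this PR). It assumes Remez.jl exposes `ratfn_minimax(f, interval, numerator_degree, denominator_degree)` returning BigFloat coefficients in ascending order, and it uses an arbitrary target function and interval; it measures how much accuracy is lost by naively rounding the coefficients to Float64, which is the step that Sollya's fpminimax handles for you.

```julia
# Sketch only; assumes Remez.jl's `ratfn_minimax` API and uses an arbitrary
# target function and interval, not the kernel from this PR.
using Remez

setprecision(BigFloat, 256)

f(x) = log(cosh(x))                          # example target on [1/4, 1]
# Assumed return values: numerator coeffs, denominator coeffs, error, extrema.
N, D, E, X = ratfn_minimax(f, (big"0.25", big"1.0"), 8, 0)

# Horner evaluation; `coeffs` are in ascending order.
horner(x, coeffs) = foldr((c, acc) -> muladd(x, acc, c), coeffs)

# "Rounded minimax": round each BigFloat coefficient to the nearest Float64,
# then evaluate in BigFloat again so only the coefficient rounding is measured.
N64 = big.(Float64.(N))

xs = range(big"0.25", big"1.0"; length = 10_001)
err_minimax = maximum(abs(horner(x, N)   - f(x)) for x in xs)
err_rounded = maximum(abs(horner(x, N64) - f(x)) for x in xs)

println("minimax error (BigFloat coefficients): ", Float64(err_minimax))
println("error after naive rounding to Float64: ", Float64(err_rounded))
# Sollya's fpminimax searches for machine-precision coefficients directly,
# so it typically avoids most of the extra error seen in the second line.
```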

Development

Successfully merging this pull request may close these issues.

logcosh(x) accuracy vanishes around the zero, at x = 0
3 participants