
Conversation

nsajko
Contributor

@nsajko nsajko commented Jul 24, 2025

Accomplished by:

  • Using minimax polynomials instead of Taylor polynomials; at the same degree, a minimax polynomial attains a smaller maximum error. The minimax polynomials are found using Sollya.

  • Using multiple polynomials for different subintervals of the domain, a technique known as "domain splitting". This enables good accuracy over a wider region around the origin than is possible with a single polynomial.

The polynomial degrees are kept as-is.
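The domain-splitting idea can be sketched as follows. This is an illustrative Python sketch, not the Julia implementation from this PR: the threshold and the polynomial coefficients (truncated Taylor series of cosc) are stand-ins for the tuned subinterval bounds and Sollya-generated minimax coefficients.

```python
import math

def cosc_direct(x):
    # Direct formula: cosc(x) = cos(pi*x)/x - sin(pi*x)/(pi*x^2).
    return math.cos(math.pi * x) / x - math.sin(math.pi * x) / (math.pi * x * x)

def cosc_split(x):
    # Domain splitting: near the origin the direct formula loses accuracy
    # to cancellation, so a polynomial is used there instead; farther out,
    # the direct formula is accurate enough. The 0.1 cutoff is illustrative.
    if abs(x) < 0.1:
        x2 = x * x
        # Horner evaluation in x^2 of the odd polynomial
        # -(pi^2/3)*x + (pi^4/30)*x^3 - (pi^6/840)*x^5.
        c1, c3, c5 = -math.pi**2 / 3, math.pi**4 / 30, -math.pi**6 / 840
        return x * (c1 + x2 * (c3 + x2 * c5))
    return cosc_direct(x)
```

On the polynomial branch the two expressions agree closely (for x = 0.05 the truncation error of the three-term series is on the order of 1e-10), while outside the cutoff the direct formula is used unchanged.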

Fixes #59065

@nsajko nsajko added the maths Mathematical functions label Jul 24, 2025
@nsajko
Contributor Author

nsajko commented Jul 24, 2025

cosc(::Float64)

Before: [plot: ULP error, before]

After: [plot: ULP error, after]

cosc(::Float32)

Before: [plot: ULP error, before]

After: [plot: ULP error, after]

@nsajko nsajko marked this pull request as ready for review July 24, 2025 17:07
@nsajko nsajko requested a review from oscardssmith as a code owner July 24, 2025 17:07
@nsajko
Contributor Author

nsajko commented Jul 24, 2025

To clarify: the above plots show the error in ULPs. In each case, the data is downsampled by taking the maximum before plotting. Also see the linked issue.

@nsajko
Contributor Author

nsajko commented Jul 24, 2025

The failing test is just flaky CI, issue #59073

@nsajko
Contributor Author

nsajko commented Aug 5, 2025

The test failure seems clearly unrelated.

@nsajko nsajko force-pushed the reimplement_cosc_for_Float32_Float64 branch from 7d11d2c to bfb32aa Compare August 6, 2025 12:18
@oscardssmith
Member

I've read the PR three times by now, and I'm still unsure whether the struct it introduces is worth it. I feel like I can better read the more function-based approach in exp.jl, but some portion of that is because I'm the author of those.

@nsajko nsajko force-pushed the reimplement_cosc_for_Float32_Float64 branch 2 times, most recently from 5995cf9 to 8f5a476 Compare August 11, 2025 04:06
@nsajko
Contributor Author

nsajko commented Aug 11, 2025

still unsure whether the struct it introduces is worth it

The latest commit ("improve style") improves the style a bit. I wonder if it addresses your concerns somewhat?

@stevengj
Member

stevengj commented Aug 13, 2025

To be clear, is this the minimax polynomial that minimizes the maximum absolute error over the interval, or which minimizes the maximum relative error? Presumably the latter, I hope?

I'm a bit concerned that your plot seems to show 1ulp error at x == 0. What does that mean? A nonzero error at the root would correspond to an infinite relative error. Since the x == 0 root is known exactly, and is exactly representable, we should strive to have small relative error as we approach it. (Ensuring this is a nice property of using a Taylor series around a zero.)

Or am I deceived and your plot doesn't actually have the x == 0 data point, so it is just showing a small relative error as you approach the root? Also, you might want to use a semilogx scale near x == 0 to make sure nothing bad is happening close to the zero.

(I'm also not entirely clear on what precisely you are plotting. I'm imagining a plot of relative error |approx–exact|/|exact|, divided by eps(T) to get something ulp-like?)

@stevengj
Member

stevengj commented Aug 13, 2025

Also, you might want to check that cosc(+0.0) === -0.0 and cosc(-0.0) === +0.0; I don't remember if we have a test for this.
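The sign behavior being asked about follows from the leading term of the Taylor expansion. A minimal Python sketch of the idea (hypothetical helper name; not the Julia implementation, which handles the full domain):

```python
import math

def cosc_near_zero(x):
    # Leading Taylor term of cosc: cosc(x) ~ -(pi^2/3)*x near the origin.
    # Evaluated at a signed zero, IEEE-754 multiplication propagates and
    # flips the sign, which is what gives the desired identities
    # cosc(+0.0) === -0.0 and cosc(-0.0) === +0.0.
    return -(math.pi ** 2 / 3.0) * x
```

A quick check: `math.copysign(1.0, cosc_near_zero(0.0))` is -1.0, i.e. the result of `cosc_near_zero(+0.0)` is -0.0, and conversely for -0.0.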

nsajko added a commit to nsajko/julia that referenced this pull request Aug 13, 2025
The idea for this addition to the test suite originates here, where
stevengj suggested to do this test for `cosc` specifically:

* JuliaLang#59087 (comment)

This PR instead tests all applicable functions I could find.
@nsajko
Contributor Author

nsajko commented Aug 14, 2025

is this the minimax polynomial that minimizes the maximum absolute error over the interval, or which minimizes the maximum relative error?

It minimizes relative error. The fpminimax commands that appear in the included Sollya script allow choosing either absolute or relative error to optimize for, but default to the relative option.

In any case, had I chosen to minimize the absolute error instead, the ULP error plots would look quite different.

I'm a bit concerned that your plot seems to show 1ulp error at x == 0. What does that mean? A nonzero error at the root would correspond to an infinite relative error.

At x == 0, the result is exact (easy to check exhaustively, see a relevant separate PR inspired by your comments: #59265). Otherwise the relative error would not be finite.

Each point on the plot is the result of many evaluations in the same neighborhood, aggregated by taking the maximum value. Trying to plot every evaluation would result in an unreadably noisy plot, so the idea is to get rid of the downward spikes of the noise. Thus the plot visualizes the maximum error in the neighborhood of each point, but not the minimum error. The justification is that the worst case is exactly what is interesting.

As an additional explanation, I'll copy over a paragraph from my message on this PR:

The plotting happens after downsampling (which just takes the maximum value), which itself happens after smoothing (which also just takes the maximum, i.e., a sliding window maximum). Thus the downsampling and smoothing basically just remove the downward spikes of the noise, preserving the maximum error at the region around each point.
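The two stages described in that paragraph can be sketched like this. An illustrative Python version (the actual code is implemented as Julia iterators); both stages are running maxima, so downward noise spikes are discarded while the local maximum error survives:

```python
def sliding_window_max(xs, w):
    # Smoothing stage: each output is the maximum over a window of w
    # consecutive inputs (a simple O(n*w) sliding-window maximum).
    return [max(xs[i:i + w]) for i in range(len(xs) - w + 1)]

def downsample_max(xs, block):
    # Downsampling stage: keep only the maximum of each block.
    return [max(xs[i:i + block]) for i in range(0, len(xs), block)]

# Example: smooth a noisy error series with a width-2 window,
# then downsample by a factor of 3.
errors = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.4, 0.7]
smoothed = sliding_window_max(errors, 2)
plotted = downsample_max(smoothed, 3)
```

Here `smoothed` is `[0.9, 0.9, 0.3, 0.8, 0.8, 0.4, 0.7]` and `plotted` is `[0.9, 0.8, 0.7]`: each plotted point bounds the error of its whole neighborhood from above.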

If you want I can also share the code I use for plotting. The ULP error itself is calculated using the same code included in this PR; see the module ULPError. The downsampling is implemented using some iterators, at least some of which I want to eventually include in IterTools.jl. The initial PR (implementing the sliding window maximum, used for smoothing, which happens before downsampling proper) is:

Since the x == 0 root is known exactly, and is exactly representable, we should strive to have small relative error as we approach it.

IMO it is fine to minimize the maximum relative error over the entire interval; there's no need to treat the immediate neighborhood of the root specially. (In fact, Sollya is not able to do that in the first place. My own package FindMinimaxPolynomial is able to do it, but it lacks Sollya's functionality for optimizing the coefficients of the polynomial in machine precision.)

Ensuring this is a nice property of using a Taylor series around a zero.

An exhaustive comparison of the maximum ULP errors between zero and nextfloat(zero(T), 100_000_000) before and after this PR shows that the maximum error happens to stay the same in this immediate neighborhood of zero. They are 0.58824617f0 for Float32 and 0.49999997f0 for Float64.
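This kind of exhaustive neighborhood scan can be sketched in Python (the real comparison runs against the Julia implementations; here hypothetical two- and three-term Taylor models of cosc stand in for the function under test and the reference):

```python
import math

def max_ulp_error(f, reference, start, n):
    # Walk n consecutive floats upward from `start` (math.nextafter plays
    # the role of Julia's nextfloat) and track the worst ULP error seen.
    x, worst = start, 0.0
    for _ in range(n):
        x = math.nextafter(x, math.inf)
        approx, acc = f(x), reference(x)
        worst = max(worst, abs((approx - acc) / math.ulp(approx)))
    return worst

# Hypothetical stand-ins: truncated Taylor models of cosc near zero.
poly2 = lambda x: x * (-math.pi**2 / 3 + x * x * (math.pi**4 / 30))
poly3 = lambda x: x * (-math.pi**2 / 3 + x * x * (math.pi**4 / 30 - x * x * math.pi**6 / 840))
```

Scanning a function against itself reports zero error, while scanning `poly2` against `poly3` reports the worst truncation error of the shorter model over the scanned range.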

I'm also not entirely clear on what precisely you are plotting. I'm imagining a plot of relative error |approx–exact|/|exact|, divided by eps(T) to get something ulp-like?

See the module ULPError in test/testhelpers/ULPError.jl. In particular, the generic formula is:

```julia
abs((approximate - acc) / eps(approximate))
```

I believe this corresponds exactly to one of the definitions of ULP error, so it's not merely "ULP-like"?
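For reference, that formula translates to the following Python sketch, where `math.ulp` plays the role of Julia's `eps(x)` (the two are close but not necessarily identical at binade boundaries), and `accurate` is the correctly rounded reference value:

```python
import math

def ulp_error(approx, accurate):
    # |approx - accurate| measured in units of the float spacing at approx,
    # mirroring the expression abs((approximate - acc) / eps(approximate)).
    return abs((approx - accurate) / math.ulp(approx))
```

For example, if `approx` is the float immediately above 1.0 and the reference value is exactly 1.0, the reported error is exactly 1 ULP.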

JeffBezanson pushed a commit that referenced this pull request Aug 15, 2025
…59265)

The idea for this addition to the test suite originates here, where
stevengj suggested to do this test for `cosc` specifically:

* #59087 (comment)

This PR instead tests all applicable functions I could find.

Also fix the sign of `cosc(::BigFloat)` for zero input, thanks to
stevengj.

---------

Co-authored-by: Steven G. Johnson <[email protected]>
nsajko added 4 commits August 16, 2025 04:26
Improve accuracy for `cosc(::Float32)` and `cosc(::Float64)`. This is
accomplished by:

* Using minimax instead of Taylor polynomials. They're more efficient.
  The minimax polynomials are found using Sollya.

* Using multiple polynomials for different subintervals of the domain.
  Known as "domain splitting". Enables good accuracy for a wider region
  around the origin than possible with only a single polynomial.

The polynomial degrees are kept as-is.

Fixes JuliaLang#59065
@nsajko nsajko force-pushed the reimplement_cosc_for_Float32_Float64 branch from dc3d649 to e933115 Compare August 16, 2025 02:34
@nsajko
Contributor Author

nsajko commented Aug 17, 2025

bump

@StefanKarpinski
Member

@stevengj, if that addresses your concerns and @oscardssmith is happy, can we merge this?

@oscardssmith
Member

I don't think sinc/cosc should exist in Base, but while they do, they might as well be good. Approved.

@StefanKarpinski StefanKarpinski merged commit 4c723db into JuliaLang:master Aug 29, 2025
7 checks passed
@nsajko nsajko deleted the reimplement_cosc_for_Float32_Float64 branch August 29, 2025 14:39
Linked issue: cosc: inaccuracy on the transition between two different implementations