Conversation
…solve_ivp and passing init func args
benchmarks/solve_ivp_bench.py
Outdated
```python
timing_dfs = []
for problem_key, problem_item in problem_dictionary.items():
    for algo, algo_params in algorithm_dict.items():
        for algo_iter in list(itertools.product(*algo_params["args"])):
```
@fruzsinaagocs does it make sense here to do all combinations of the tolerances? Or do we want to be more careful with the sets of tolerances we test over?
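For reference, a minimal sketch of what the `itertools.product` call in the loop above does with the tolerance sets (variable names taken from the script, values illustrative):

```python
import itertools

# Tolerance sets as in the benchmark script.
epss = [1e-12, 1e-6]   # solver accuracy targets
atol = [1e-14, 1e-6]   # absolute tolerances

# product() crosses the sets, yielding every (eps, atol) pair --
# including mismatched combinations like (1e-12, 1e-6).
combos = list(itertools.product(epss, atol))
print(combos)
# [(1e-12, 1e-14), (1e-12, 1e-06), (1e-06, 1e-14), (1e-06, 1e-06)]
```

With two sets of two values each, this benchmarks 4 combinations per solver, which is the behavior the comment above is questioning.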
|
@fruzsinaagocs This is ready now! I still need to add the burst example, but feel free to add the schrodinger and flame prop benchmarks. |
…pp into feature/benchmarks
benchmarks/solve_ivp_bench.py
Outdated
```python
epss, epshs, ns = [1e-12, 1e-6], [1e-13, 1e-9], [35, 20]
atol = [1e-14, 1e-6]
# rtol = [1e-3, 1e-6]
algorithm_dict = {
    # Algo.LSODA: {"args": [epss, atol], "iters": 1},
    Algo.BDF: {"args": [epss, atol], "iters": 1},
    Algo.Radau: {"args": [epss, atol], "iters": 1},
    Algo.RK45: {"args": [epss, atol], "iters": 1},
    Algo.DOP853: {"args": [epss, atol], "iters": 1},
    Algo.PYRICCATICPP: {"args": [epss, epshs, ns], "iters": 1000},
}
```
- Set `epsh` to `0.1 * epss`.
- `atol` can be the same as `epss`.
- Zip with `ns`.
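A sketch of the suggested change: derive `epsh` from `epss`, reuse `epss` as `atol`, and pair positionally with `ns` via `zip` instead of taking the full Cartesian product (structure hypothetical, not the final script):

```python
epss = [1e-12, 1e-6]
ns = [35, 20]

# Tie the tolerances together and zip with ns, so each accuracy
# target yields exactly one parameter set instead of a full cross.
settings = [
    {"eps": eps, "epsh": 0.1 * eps, "atol": eps, "n": n}
    for eps, n in zip(epss, ns)
]
# Two settings, rather than 2 * 2 * 2 = 8 product combinations.
```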
benchmarks/solve_ivp_bench.py
Outdated
```python
# Not sure what to write here
# stiff_data = pl.DataFrame(
#     {"start": [0.0], "end": [200.0], "y1": [0.0], "relative_error": [1e-12]}
# )

problem_dictionary = {Problem.STIFF: {"class": Stiff}}  # , "data": bremer_ref}  # Do we need this?
```
Each row of the polars data frame is unpacked when calling gen_problem and it instantiates an instance of the problem. You can think of it like each row of the polars data is the inputs to the constructor for the class. So you are essentially calling Stiff(start, end, y1, relative_error)
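A self-contained sketch of that pattern (the `rows` list stands in for `pl.DataFrame.iter_rows(named=True)`, which yields one dict per row of the polars frame; the `Stiff` signature is assumed from the comment above):

```python
class Stiff:
    """Stand-in for the benchmark's Stiff problem class."""

    def __init__(self, start, end, y1, relative_error):
        self.start = start
        self.end = end
        self.y1 = y1
        self.relative_error = relative_error

# One dict per data-frame row, as iter_rows(named=True) would produce.
rows = [{"start": 0.0, "end": 200.0, "y1": 0.0, "relative_error": 1e-12}]

# Each row is unpacked as keyword arguments of the constructor,
# i.e. Stiff(start=0.0, end=200.0, y1=0.0, relative_error=1e-12).
problems = [Stiff(**row) for row in rows]
```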
The table below shows `wall time method / wall time pyriccaticpp` for each $\lambda$ value of the Bremer equation benchmarked. The highlighted values in each row show the relative runtime of the second fastest method relative to pyriccaticpp. Even in the most favorable scenario for the classical methods (the smallest $\lambda = 10$), DOP853 is still 3.9$\times$ slower than pyriccaticpp; for large $\lambda$, the classical solvers are orders of magnitude slower.
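The ratios in such a table come from normalizing each method's wall time by the pyriccaticpp wall time; a minimal sketch (timings are illustrative placeholders, not measured values):

```python
# Hypothetical wall times (seconds) for one lambda value of the
# Bremer equation; the numbers are made up for illustration.
wall_times = {"pyriccaticpp": 0.002, "DOP853": 0.0078, "BDF": 0.05}

baseline = wall_times["pyriccaticpp"]
ratios = {method: t / baseline for method, t in wall_times.items()}
# DOP853 here comes out at roughly 3.9x the pyriccaticpp runtime.
```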
** TODO for @fruzsinaagocs : add note about accuracy of BDF -- it looks deceptively fast here, but it produces garbage **
I'm rerunning the tests now to also return failures and will include that info here
benchmarks/readme.md
Outdated
The figure below (`ivp_bench_errs.png`) plots the relative error of the solution at the end of the solution interval for each method against an analytic reference value (which is available for all examples). We highlight the following:

- **Bremer**: With smaller $\lambda$, all methods achieve a global accuracy consistent with the user-defined local tolerance. At large $\lambda$, we note that the large condition number of the problem does not allow for a global accuracy of $10^{-12}$ to be achieved (see section 4 of @agocs2024adaptive). `pyriccaticpp` produces the smallest error that one could realistically expect for a problem with this condition number (and working in double precision arithmetic). Classical methods are forced to take more and smaller steps, which accumulates local error, resulting in a global error that is a few orders of magnitude larger than the tolerance setting, except for BDF, which fails catastrophically and produces $\mathcal{O}(1)$ error.
** TODO: may need to change this after Steve's check. **
When a solution fails across all the runs, should I have its timing results be NA?
benchmarks/readme.md
Outdated
- **Bremer**: With smaller $\lambda$, all methods achieve a global accuracy consistent with the user-defined local tolerance. At large $\lambda$, we note that the large condition number of the problem does not allow for a global accuracy of $10^{-12}$ to be achieved (see section 4 of @agocs2024adaptive). `pyriccaticpp` produces the smallest error that one could realistically expect for a problem with this condition number (and working in double precision arithmetic). Classical methods are forced to take more and smaller steps, which accumulates local error, resulting in a global error that is a few orders of magnitude larger than the tolerance setting, except for BDF, which fails catastrophically and produces $\mathcal{O}(1)$ error.
- **Airy**: Apart from the high-order `DOP853` method and `pyriccaticpp`, none produced an error within one order of magnitude of the tolerance setting, with the latter being a factor of 10 more accurate than the former.
- **Stiff**: The analytic solution at the end of the solution interval is zero in this example (to more than 100 digits). Therefore, computing a relative accuracy is meaningless; we instead plot the _absolute_ error, computed simply as the output of the given method at the end of the solution interval. Due to a strong damping term in the equation, all solvers achieve low errors, but over extremely different runtimes.
** TODO Actually, computing the relative error in this case makes no sense, since it just gives a 0 / 0 expression. The absolute error is basically the answer that each solver produces, which is what we should plot. **
Oh I need to update the doc here, we do actually do absolute error for the stiff equation
```python
if problem_info.y1 == 0:
    yerr = np.abs(ys[-1])
else:
    yerr = np.abs((problem_info.y1 - ys[-1]) / problem_info.y1)
```
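A self-contained version of that branch logic, pulled out as a function for illustration (the `end_error` name and stand-in arguments are hypothetical, not from the script):

```python
import numpy as np

def end_error(y1_ref, ys):
    """Error at the end of the solution interval.

    When the reference value is zero, the relative error would be
    0 / 0, so fall back to the absolute error; otherwise use the
    usual relative error against the analytic reference y1_ref.
    """
    if y1_ref == 0:
        return np.abs(ys[-1])
    return np.abs((y1_ref - ys[-1]) / y1_ref)

# Stiff problem: zero reference value -> absolute error.
print(end_error(0.0, np.array([1.0, 1e-14])))  # 1e-14
# Nonzero reference -> relative error, roughly 0.05 here.
print(end_error(2.0, np.array([1.0, 1.9])))
```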
This modifies the benchmarking script to run several problems over the solvers available in `scipy.integrate.solve_ivp`. Several more things need to be done before this is merged, including making sure the `solve_ivp` methods have the correct args to benchmark against.

For 3, should we write the Jacobian functions for the BDF, Radau, etc. solvers that need them? That would make them faster, but pyriccaticpp does not require setting up that Jacobian, so it feels like not defining it is the correct thing to benchmark against.
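For context on that trade-off, a minimal sketch of supplying an analytic Jacobian to `solve_ivp`'s implicit methods (a toy stiff linear system, not one of the benchmark problems):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff linear system y' = A @ y; its Jacobian is simply A.
A = np.array([[-1000.0, 0.0], [1.0, -1.0]])

def rhs(t, y):
    return A @ y

def jac(t, y):
    # Constant Jacobian for a linear system.
    return A

# BDF, Radau, and LSODA accept `jac` and use it instead of
# finite-difference estimates; omitting it (as the benchmark
# currently does) is also valid, just typically slower.
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], method="BDF", jac=jac)
```

Explicit methods such as RK45 and DOP853 ignore a Jacobian entirely, so the question only affects the implicit solvers in the benchmark.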
scipy.integrate.solve_ivp. Several more things need done before mergedsolve_ivpmethods have correct args to benchmark againstFor 3, should we write the jacobian functions for the BDF, Radau, etc. solvers that need them? That would make them faster, but pyriccaticpp does not require setting up that jacobian so it feels like no defining it is the correct thing to benchmark against.