
@cwittens (Member)

I wish I could have done something like

function rhs!(...)
   rhs_split_1!(...)
   rhs_split_2!(...)
end

but this does not seem practical.

see comments in:

function rhs!(dq, q, t, mesh, equations::KdVEquation1D, initial_condition,
              ::BoundaryConditionPeriodic, source_terms, solver, cache)
    # define locally bc to not use a global variable
    bc = boundary_condition_periodic      
    (; tmp1) = cache
    rhs_1!(dq, q, t, mesh, equations, initial_condition,
           bc, source_terms, solver, cache)

    deta, = dq.x
    tmp1 .= deta

    # rhs_split_2! needs to be defined to set "deta = ..."
    # and not just "deta += ..." in order to work with SplitODEProblem.
    # This means one either has to store the result in a temporary variable
    # or have a rhs! function that does not call rhs_split_1! and rhs_split_2!.
    # Currently it also does not work with ForwardDiff.
    rhs_split_2!(dq, q, t, mesh, equations, initial_condition,
           bc, source_terms, solver, cache)

    deta .+= tmp1
    return nothing
end
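For context, a SplitODEProblem treats the right-hand side as du = f1(u, p, t) + f2(u, p, t), and each in-place part is expected to overwrite its output buffer rather than accumulate into it, which is exactly why the temporary tmp1 is needed above. A minimal toy sketch of these semantics (hypothetical scalar split, not the actual DSW.jl rhs):

```julia
using OrdinaryDiffEq  # SplitODEProblem and KenCarp4 are re-exported from here

# Toy split u' = -a*u + b*u: the solver evaluates f1! and f2! into
# separate buffers, so each must *set* du, not add to it.
f_stiff!(du, u, p, t) = (du .= -p.a .* u; nothing)
f_nonstiff!(du, u, p, t) = (du .= p.b .* u; nothing)

prob = SplitODEProblem(f_stiff!, f_nonstiff!, [1.0], (0.0, 1.0),
                       (a = 100.0, b = 1.0))
sol = solve(prob, KenCarp4())  # implicit on f_stiff!, explicit on f_nonstiff!
```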

Here I am not 100% sure if this works, but I think it does (and the tests will tell, I hope).

function jacobian(semi::Semidiscretization;
                  t = 0.0,
                  q0 = compute_coefficients(semi.initial_condition, t, semi))
-    J = ForwardDiff.jacobian(similar(q0), q0) do dq, q
-         DispersiveShallowWater.rhs!(dq, q, semi, t)
+    @unpack tmp_partitioned = semi.cache
+    J = ForwardDiff.jacobian(tmp_partitioned, q0) do dq, q
+        DispersiveShallowWater.rhs!(dq, q, semi, t)
    end
    return J
end

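The ForwardDiff.jacobian(f!, y, x) form used above differentiates an in-place function; the difficulty in the PR is that the output buffer must be compatible with the ArrayPartition layout of q. A self-contained toy version with a plain vector and a made-up linear rhs:

```julia
using ForwardDiff, LinearAlgebra

A = [0.0 1.0; -1.0 0.0]
q0 = [1.0, 2.0]

# In-place two-argument form: the do-block writes dq in place.
J = ForwardDiff.jacobian(similar(q0), q0) do dq, q
    mul!(dq, A, q)
end
# Since this toy rhs is linear, J should recover A itself.
```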
@cwittens (Member Author)

Also, currently ForwardDiff / building of Jacobians is not working with the SplitODEProblem / ODEProblem with SplitFunction, but I think this is because of SciML/OrdinaryDiffEq.jl#2719.

@github-actions (Contributor)

github-actions bot commented May 23, 2025

Benchmark Results

| Benchmark | main | f1c24e9... | main / f1c24e9... |
|---|---|---|---|
| bbm_1d/bbm_1d_basic.jl | 13.7 ± 0.26 μs | 13.7 ± 0.31 μs | 0.997 ± 0.029 |
| bbm_1d/bbm_1d_fourier.jl | 0.215 ± 0.011 ms | 0.534 ± 0.0097 ms | 0.402 ± 0.021 |
| bbm_bbm_1d/bbm_bbm_1d_basic_reflecting.jl | 0.081 ± 0.00036 ms | 0.0807 ± 0.00045 ms | 1 ± 0.0072 |
| bbm_bbm_1d/bbm_bbm_1d_dg.jl | 0.034 ± 0.00049 ms | 0.0343 ± 0.00049 ms | 0.989 ± 0.02 |
| bbm_bbm_1d/bbm_bbm_1d_relaxation.jl | 27.1 ± 0.41 μs | 27.6 ± 0.48 μs | 0.982 ± 0.023 |
| bbm_bbm_1d/bbm_bbm_1d_upwind_relaxation.jl | 0.0481 ± 0.00059 ms | 0.0482 ± 0.00061 ms | 0.997 ± 0.018 |
| hyperbolic_serre_green_naghdi_1d/hyperbolic_serre_green_naghdi_dingemans.jl | 4.14 ± 0.014 μs | 4.16 ± 0.013 μs | 0.997 ± 0.0045 |
| serre_green_naghdi_1d/serre_green_naghdi_well_balanced.jl | 0.196 ± 0.0084 ms | 0.2 ± 0.0094 ms | 0.98 ± 0.062 |
| svaerd_kalisch_1d/svaerd_kalisch_1d_dingemans_relaxation.jl | 0.143 ± 0.0036 ms | 0.148 ± 0.0054 ms | 0.971 ± 0.043 |
| time_to_load | 2.02 ± 0.013 s | 2.01 ± 0.0055 s | 1.01 ± 0.007 |

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.

@codecov-commenter

codecov-commenter commented May 23, 2025

Codecov Report

Attention: Patch coverage is 98.14815% with 1 line in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/semidiscretization.jl | 96.00% | 1 Missing ⚠️ |


@cwittens (Member Author)

Should I add something like kdv_1d_IMEX_narrow_stencil.jl to make Codecov happy?

@JoshuaLampert (Member) left a comment

Just an initial review with a general question about the interface.

saveat = range(tspan..., length = 100)

# alg = Rodas5() # not working because of https://github.com/SciML/OrdinaryDiffEq.jl/issues/2719
# alg = KenCarp4() # would need to add OrdinaryDiffEqSDIRK - can do that, but I don't want to bloat DSW.jl with packages
Member

I would prefer to use a true IMEX scheme like KenCarp4 here. Something like Tsit5 will use explicit time stepping also for the stiff part, right? I think it's fine to just add OrdinaryDiffEqSDIRK.jl to the test suite. It doesn't need to be a dependency of DSW.jl.
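For illustration, pulling in just the sub-package in a test or example could look roughly like this (`semi` and `tspan` are assumed to come from the surrounding example setup; this is a sketch, not the merged test code):

```julia
using OrdinaryDiffEqSDIRK  # provides KenCarp4 etc. without all of OrdinaryDiffEq.jl
using DispersiveShallowWater

# `ode` would be the SplitFunction-based problem returned by semidiscretize;
# KenCarp4 then treats the stiff part implicitly and the rest explicitly.
ode = semidiscretize(semi, tspan)
sol = solve(ode, KenCarp4(), save_everystep = false)
```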

that can be passed to `solve` from the [SciML ecosystem](https://diffeq.sciml.ai/latest/).
"""
function semidiscretize(semi::Semidiscretization, tspan)
function semidiscretize(semi::Semidiscretization, tspan; no_splitform = true)
Member

I'm wondering what's the best interface for this. I initially thought it would be good to add a trait has_stiff_terms(equations), which can be used to determine whether we create a usual ODEFunction or a SplitFunction. This way, we wouldn't need a kwarg no_splitform or similar, but whether or not using the SplitFunction formulation will be determined from the equations. However, this would mean that we also use a split function in the case of a non-IMEX scheme, which could have some overhead (I think that's a similar approach as Trixi.jl does with periodic equations, right @ranocha?). Maybe you could first add one or two KdV examples to the benchmark suite as in the list of #202, such that we can try how much overhead we would have from the SplitFunction formulation for the explicit solvers?

Member

The advantage of the has_stiff_terms (or maybe better have_stiff_terms because equations are plural) would be:

  • the user would not need to set no_splitform = false in semidiscretize in order to use an IMEX scheme
  • we wouldn't need the rhs! definition anymore because we would always define a SplitFunction for equations which have_stiff_terms, reducing the maintenance overhead and avoiding duplication.

The disadvantage I see is

  • maybe a small overhead.

Any more aspects to consider?

Member

Hyperbolic approximations would be tricky since they can have quite stiff terms depending on the hyperbolic approximation parameter (lambda for the hyperbolic SGN equations). Still, we may also want to use them with explicit time integration methods as efficiently as possible.
Currently, "as efficiently as possible" is significantly restricted by SciML/RecursiveArrayTools.jl#400, so I am unsure of its importance. We should benchmark it.
To sum up, we could add a trait have_stiff_terms to the equations and introduce a keyword argument like split_ode = have_stiff_terms(equations). We should ensure that this results in a type-stable semidiscretize, i.e., the keyword argument can take True() and False() values (with different types).
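A minimal sketch of such a type-stable trait plus keyword (toy equation types; the names and dispatch pattern follow the discussion, not the merged code):

```julia
struct KdVToy end  # stand-ins for the real equation types
struct BBMToy end

# Trait in the type domain: Val(true) and Val(false) have different types,
# so the keyword default is resolved at compile time.
have_stiff_terms(equations) = Val(false)
have_stiff_terms(::KdVToy) = Val(true)

semidiscretize_toy(eq; split_ode = have_stiff_terms(eq)) =
    _semidiscretize(split_ode, eq)
_semidiscretize(::Val{true}, eq) = :split_function   # would build a SplitFunction
_semidiscretize(::Val{false}, eq) = :ode_function    # would build a plain ODEFunction

semidiscretize_toy(KdVToy())  # => :split_function
semidiscretize_toy(BBMToy())  # => :ode_function
```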

Member

Sounds good to me. So if I understand correctly, we want to support both the classical ODEFunction version and the SplitODEFunction version, i.e., we also need to keep both rhs! and rhs_split_stiff! and rhs_split_nonstiff!. Is there an elegant way to reduce code duplication and make maintenance easier? Because, as Collin wrote in the initial comment of the PR, we cannot simply call both versions after each other in rhs!.

Member Author

One could also pass an argument to rhs_split_nonstiff! which then decides whether it should be deta = ... or deta += .... If known at compile time, this would mean no overhead (and otherwise just an if statement), and it would make it possible to have

function rhs!(...)
   rhs_split_stiff!(...)
   rhs_split_nonstiff!(..., :add)
end

after all.
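Dispatching on a Val-typed mode would keep that branch compile-time. A toy sketch with placeholder physics (apply! and the scalar fluxes are made up for illustration, not the DSW.jl implementation):

```julia
# Val(:set) overwrites, Val(:add) accumulates, so the nonstiff rhs can serve
# both the SplitFunction path (set) and the combined rhs! (add).
apply!(deta, rhs, ::Val{:set}) = (deta .= rhs; nothing)
apply!(deta, rhs, ::Val{:add}) = (deta .+= rhs; nothing)

rhs_split_stiff!(deta, eta) = (deta .= -eta; nothing)  # placeholder stiff part

function rhs_split_nonstiff!(deta, eta, mode::Val = Val(:set))
    apply!(deta, -0.5 .* eta .^ 2, mode)  # placeholder nonstiff part
    return nothing
end

function rhs!(deta, eta)
    rhs_split_stiff!(deta, eta)                 # deta = stiff part
    rhs_split_nonstiff!(deta, eta, Val(:add))   # deta += nonstiff part
    return nothing
end
```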

Member

Let's do it like this: since we only support the KdV equation with implicit methods so far, we can just hard-code that. For the hyperbolic approximation of SGN, we keep the current structure and create an issue to let us reconsider this in the future.

@ranocha (Member)

ranocha commented May 23, 2025

Can we use https://docs.sciml.ai/SciMLOperators/stable/ somehow for the stiff part to tell OrdinaryDiffEq.jl and its underlying solver infrastructure that the stiff part is a linear operator that does not change in time? For periodic boundary conditions, SummationByPartsOperators.jl also provides relatively efficient solves, e.g., using an FFT for Fourier operators.

@cwittens (Member Author)

Can we use https://docs.sciml.ai/SciMLOperators/stable/ somehow for the stiff part to tell OrdinaryDiffEq.jl and its underlying solver infrastructure that the stiff part is a linear operator that does not change in time? For periodic boundary conditions, SummationByPartsOperators.jl also provides relatively efficient solves, e.g., using an FFT for Fourier operators.

I tried my hand a bit at SciMLOperators and I could not make it work - but this could also be my incompetence plus bad documentation, IMO.
I feel like they are not working with ArrayPartition, but I didn't produce an MRE to really show that yet.

@ranocha (Member)

ranocha commented May 23, 2025

Could you please try this a bit more - and maybe ask on Julia Slack/Discourse? We can also test using sparse (or Matrix for Fourier operators) versions if this helps to make it work first.

@cwittens (Member Author)

Could you please try this a bit more - and maybe ask on Julia Slack/Discourse? We can also test using sparse (or Matrix for Fourier operators) versions if this helps to make it work first.

Yes, will do.
Probably also needed, because currently IMEX is not faster (see some benchmarking results in examples/kdv_1d/kdv_1d_manufactured.jl, stored as a raw string at the end of the file).

@cwittens (Member Author)

I am a little bit proud of the current version of handling the rhs!, rhs_split_stiff!, rhs_split_nonstiff! problem.
There is (almost) no duplication, everything is still together and readable, it should be easier to make existing rhs!'s IMEX ready and it has no overhead.

Concerning the rest, this is how I understood

To sum up, we could add a trait have_stiff_terms to the equations and introduce a keyword argument like split_ode = have_stiff_terms(equations). We should ensure that this results in a type-stable semidiscretize, i.e., the keyword argument can take True() and False() values (with different types).

but I am open to suggestions/changes.

@JoshuaLampert (Member) left a comment

Thanks! Implementation-wise this looks good to me. I also like the way you combine the stiff and non-stiff parts in rhs!. Regarding performance of the IMEX method, this doesn't look good, but I don't know what's the issue. Is this related to the issue with RecursiveArrayTools.jl or do you have any idea, @ranocha?

@JoshuaLampert JoshuaLampert requested a review from Copilot May 25, 2025 10:12
Copilot AI left a comment

Pull Request Overview

This PR introduces IMEX (implicit–explicit) support for the KdV equation by splitting stiff and non-stiff right-hand side evaluations, updating semidiscretize to optionally dispatch a SplitFunction, and adding tests and examples to exercise the new paths.

  • Add rhs_split_stiff! and rhs_split_nonstiff! and dispatch logic in semidiscretization.jl
  • Introduce have_stiff_terms trait and default split_ode keyword on semidiscretize
  • Extend tests (@test_allocations_split_ode, error-throw test) and update KdV examples with explicit split_ode flags

Reviewed Changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.

| File | Description |
|---|---|
| src/equations/kdv_1d.jl | New split-RHS implementations and have_stiff_terms trait |
| src/semidiscretization.jl | split_ode keyword, helper dispatch, rhs_split_*! |
| src/equations/equations.jl | Default have_stiff_terms trait added |
| src/DispersiveShallowWater.jl | Export have_stiff_terms, import SplitFunction |
| test/test_util.jl | Macro @test_allocations_split_ode added |
| test/test_unit.jl | Error case for split_ode = Val{true}() added |
| test/test_kdv_1d.jl | IMEX example and allocation test added |
| test/Project.toml | Added OrdinaryDiffEqSDIRK dependency |
| examples/kdv_1d/*.jl | Explicit split_ode flags in incremental examples |
| examples/kdv_1d/kdv_1d_IMEX.jl | New standalone IMEX example |
Comments suppressed due to low confidence (3)

src/equations/kdv_1d.jl:136

  • An opening triple-quote is inserted without a matching closing """, causing the subsequent code to be treated as a string literal. Add a corresponding closing """ right before defining the next function to fix the docstring.

src/semidiscretization.jl:299

  • [nitpick] The name tmp_partitioned may not clearly convey its role as the preallocated Jacobian seed or workspace. Ensure that semi.cache actually defines this field, and consider renaming to something like jacobian_seed or workspace for clarity.
@unpack tmp_partitioned = semi.cache

test/test_unit.jl:86

  • You cover the error path when asking for splitting on non-stiff equations. Consider also adding a test to verify that semidiscretize with split_ode = Val{true}() on a truly stiff equation returns an ODEProblem with a SplitFunction as expected.
@test_throws ArgumentError semidiscretize(semi, (0.0, 1.0), split_ode = Val{true}())

return nothing
end
function rhs_split_stiff!(dq, q, t, mesh, equations::KdVEquation1D, initial_condition,
Copilot AI commented May 25, 2025

[nitpick] The implementations of rhs_split_stiff!, rhs_split_nonstiff!, and the main rhs! share large blocks of duplicated logic (e.g., operator application and temporary vectors). Consider extracting common operations into helper functions to reduce duplication and improve readability.

Member

That's basically a reminder to remove the commented code once everything is settled.

save_everystep = false, callback = callbacks, saveat = saveat)

"""
using alg = KenCarp4() I get the following Benchmarks:
Member

Could you please also check the performance for a physically relevant setup, i.e., not a manufactured solution? Manufactured setups can be unusual and behave differently from what we observe in physically relevant setups.

Member Author

Here are some benchmarks using initial_condition_convergence_test and alg = KenCarp4() and alg = Tsit5().

alg = KenCarp4()
IMEX:
# @benchmark, with callbacks = nothing: 
BenchmarkTools.Trial: 23 samples with 1 evaluation per sample.
 Range (min … max):  211.648 ms … 225.053 ms  ┊ GC (min … max): 0.00% … 3.14%
 Time  (median):     216.464 ms               ┊ GC (median):    1.19%
 Time  (mean ± σ):   217.090 ms ±   3.438 ms  ┊ GC (mean ± σ):  1.16% ± 1.06%

       ▃        ▃  ▃           █      ▃
  ▇▁▁▁▁█▁▇▁▁▁▇▁▁█▁▁█▁▇▇▇▁▁▁▁▁▇▇█▇▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▇▁▁▁▁▇ ▁
  212 ms           Histogram: frequency by time          225 ms <

 Memory estimate: 27.45 MiB, allocs estimate: 49627.
#  Callback (for ncalls etc:)
──────────────────────────────────────────────────────────────────────────────────────
           DispersiveSWE                     Time                    Allocations
                                    ───────────────────────   ────────────────────────
         Tot / % measured:               225ms /  12.6%           23.3MiB /   0.2%

Section                     ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────────────────────
rhs_split_stiff!             3.51k   22.1ms   77.6%  6.29μs      672B    1.4%    0.19B
  third-order derivatives    3.51k   21.5ms   75.7%  6.14μs     0.00B    0.0%    0.00B
  ~rhs_split_stiff!~         3.51k    528μs    1.9%   150ns      672B    1.4%    0.19B
analyze solution                 2   5.75ms   20.2%  2.87ms   43.7KiB   96.4%  21.9KiB
rhs_split_nonstiff!            771    619μs    2.2%   803ns      976B    2.1%    1.27B
  hyperbolic                   771    364μs    1.3%   473ns     0.00B    0.0%    0.00B
  ~rhs_split_nonstiff!~        771    238μs    0.8%   308ns      976B    2.1%    1.27B
  source terms                 771   16.9μs    0.1%  21.9ns     0.00B    0.0%    0.00B
──────────────────────────────────────────────────────────────────────────────────────

No IMEX:
# @benchmark, with callbacks = nothing: 
BenchmarkTools.Trial: 19 samples with 1 evaluation per sample.
 Range (min … max):  250.372 ms … 287.337 ms  ┊ GC (min … max): 0.00% … 2.31%
 Time  (median):     266.741 ms               ┊ GC (median):    1.01%
 Time  (mean ± σ):   265.551 ms ±   9.897 ms  ┊ GC (mean ± σ):  1.04% ± 0.96%

  █  ▁ ▁    ▁     ▁   ▁▁  ▁  ▁ █   ▁ ▁▁▁▁      ▁              ▁
  █▁▁█▁█▁▁▁▁█▁▁▁▁▁█▁▁▁██▁▁█▁▁█▁█▁▁▁█▁████▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  250 ms           Histogram: frequency by time          287 ms <

 Memory estimate: 29.41 MiB, allocs estimate: 48408.
# Callback (for ncalls etc:)
──────────────────────────────────────────────────────────────────────────────────────
           DispersiveSWE                     Time                    Allocations
                                    ───────────────────────   ────────────────────────
         Tot / % measured:               263ms /  24.3%           25.2MiB /   0.1%

Section                     ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────────────────────
rhs!                         3.33k   59.4ms   92.7%  17.8μs   1.25KiB    5.4%    0.38B
  third-order derivatives    3.33k   29.9ms   46.6%  8.96μs     0.00B    0.0%    0.00B
  hyperbolic                 3.33k   28.1ms   43.8%  8.42μs     0.00B    0.0%    0.00B
  ~rhs!~                     3.33k   1.39ms    2.2%   417ns   1.25KiB    5.4%    0.38B
  source terms               3.33k   83.6μs    0.1%  25.1ns     0.00B    0.0%    0.00B
analyze solution                 1   4.69ms    7.3%  4.69ms   21.8KiB   94.6%  21.8KiB
──────────────────────────────────────────────────────────────────────────────────────


alg = Tsit5() # (IMEX or No IMEX should make almost no difference)
IMEX:
# @benchmark, with callbacks = nothing: 
BenchmarkTools.Trial: 608 samples with 1 evaluation per sample.
 Range (min … max):  7.119 ms … 25.909 ms  ┊ GC (min … max): 0.00% … 68.83%
 Time  (median):     7.961 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   8.203 ms ±  1.102 ms  ┊ GC (mean ± σ):  0.68% ±  3.79%

       ▄▅█▅▂▂▁
  ▃▃▅▇▇███████▇▇▇▆▄▃▄▃▄▃▃▃▃▃▂▃▁▃▂▁▂▂▂▂▃▂▁▁▁▂▁▁▁▁▁▂▁▁▁▁▁▁▂▁▁▂ ▃
  7.12 ms        Histogram: frequency by time        12.5 ms <

 Memory estimate: 559.71 KiB, allocs estimate: 4637.

# Callback (for ncalls etc:)
──────────────────────────────────────────────────────────────────────────────────────
           DispersiveSWE                     Time                    Allocations
                                    ───────────────────────   ────────────────────────
         Tot / % measured:              22.9ms /  78.0%            624KiB /  21.3%

Section                     ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────────────────────
analyze solution                 6   13.4ms   74.8%  2.23ms    131KiB   98.8%  21.9KiB
rhs_split_nonstiff!          3.56k   2.54ms   14.2%   714ns      976B    0.7%    0.27B
  hyperbolic                 3.56k   1.51ms    8.5%   424ns     0.00B    0.0%    0.00B
  ~rhs_split_nonstiff!~      3.56k    944μs    5.3%   266ns      976B    0.7%    0.27B
  source terms               3.56k   87.2μs    0.5%  24.5ns     0.00B    0.0%    0.00B
rhs_split_stiff!             3.56k   1.95ms   10.9%   548ns      672B    0.5%    0.19B
  third-order derivatives    3.56k   1.47ms    8.2%   412ns     0.00B    0.0%    0.00B
  ~rhs_split_stiff!~         3.56k    482μs    2.7%   136ns      672B    0.5%    0.19B
──────────────────────────────────────────────────────────────────────────────────────

No IMEX:
# @benchmark, with callbacks = nothing: 
BenchmarkTools.Trial: 641 samples with 1 evaluation per sample.
 Range (min … max):  6.771 ms … 41.173 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     7.613 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   7.806 ms ±  1.679 ms  ┊ GC (mean ± σ):  0.69% ± 3.97%

        ▃▃▅█▅▆█▂
  ▃▃▅█▇█████████▇▇▇▅▆▅▅▅▃▄▂▃▂▃▂▁▁▂▂▂▂▁▁▂▂▁▁▂▂▂▂▂▁▂▂▁▁▁▁▁▁▁▁▂ ▃
  6.77 ms        Histogram: frequency by time        11.1 ms <

 Memory estimate: 554.96 KiB, allocs estimate: 4632.
# Callback (for ncalls etc:)
──────────────────────────────────────────────────────────────────────────────────────
           DispersiveSWE                     Time                    Allocations
                                    ───────────────────────   ────────────────────────
         Tot / % measured:              21.4ms /  79.6%            622KiB /  21.2%

Section                     ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────────────────────
analyze solution                 6   12.5ms   73.3%  2.08ms    130KiB   99.1%  21.7KiB
rhs!                         3.56k   4.55ms   26.7%  1.28μs   1.25KiB    0.9%    0.36B
  hyperbolic                 3.56k   1.55ms    9.1%   437ns     0.00B    0.0%    0.00B
  third-order derivatives    3.56k   1.48ms    8.7%   417ns     0.00B    0.0%    0.00B
  ~rhs!~                     3.56k   1.42ms    8.3%   399ns   1.25KiB    0.9%    0.36B
  source terms               3.56k   91.7μs    0.5%  25.8ns     0.00B    0.0%    0.00B
──────────────────────────────────────────────────────────────────────────────────────

@ranocha (Member)

ranocha commented May 25, 2025

Thanks! Implementation-wise this looks good to me. I also like the way you combine the stiff and non-stiff parts in rhs!. Regarding performance of the IMEX method, this doesn't look good, but I don't know what's the issue. Is this related to the issue with RecursiveArrayTools.jl or do you have any idea, @ranocha?

The results mentioned in one of the files suggest that there could be several reasons (since the number of time steps varies, too). Thus, I propose the following steps to investigate this issue further:

  • Switch to a physically relevant setup
  • Compare the runtime for fixed time steps without any adaptivity
  • Check a few other solvers for split ODE problems to see whether the issue persists or is special to KenCarp4

@cwittens (Member Author)

  • Compare the runtime for fixed time steps without any adaptivity

Results for fixed time steps are in the manufactured solution example below, and there IMEX is better than no IMEX.

@cwittens (Member Author)

  • Switch to a physically relevant setup
    done in Added IMEX for KdV eq. #207 (comment), and it looks like IMEX is better than no IMEX, but purely explicit solvers are still better - except in the case of the manufactured solution, where Tsit5() causes a maxiters exceeded error.
  • Check a few other solvers for split ODE problems to see whether the issue persists or is special to KenCarp4

I tried all solvers from https://docs.sciml.ai/DiffEqDocs/stable/solvers/split_ode_solve/ besides the KenCarp#, but I could not really compare them. Here is a list of the problems:

First, all Solvers:

  • SplitEuler: 1st order fully explicit method. Used for testing accuracy
    of splits.
  • IMEXEuler : 1st order explicit Euler mixed with implicit Euler. Fixed time
    step only.
  • CNAB2: Crank-Nicolson Adams Bashforth Order 2. Fixed time step only.
  • CNLF: Crank-Nicolson Leapfrog of Order 2. Fixed time step only.
  • SBDF2 : 2nd order IMEX BDF method. Fixed time step only.
  • SBDF3 : 3rd order IMEX BDF method. Fixed time step only. In development.
  • SBDF4 : 4th order IMEX BDF method. Fixed time step only. In development.
  • KenCarp3: An A-L stable stiffly-accurate 3rd order ESDIRK method.
  • KenCarp4: An A-L stable stiffly-accurate 4th order ESDIRK method.
  • KenCarp47 - An A-L stable stiffly-accurate 4th order seven-stage ESDIRK method with splitting
  • KenCarp5: An A-L stable stiffly-accurate 5th order ESDIRK method.
  • KenCarp58 - An A-L stable stiffly-accurate 5th order eight-stage ESDIRK method with splitting
  • LawsonEuler - First order exponential Euler scheme. Fixed timestepping only.
  • NorsettEuler - First order exponential-RK scheme. Fixed timestepping only. Alias: ETD1.
  • ETD2 - Second order Exponential Time Differencing method (in development). Fixed timestepping only. Doesn't support Krylov approximation.
  • ETDRK2 - 2nd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK3 - 3rd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK4 - 4th order exponential-RK scheme. Fixed timestepping only.
  • HochOst4 - 4th order exponential-RK scheme with stiff order 4. Fixed
    timestepping only.

Problems when not using IMEX:

(ERROR: type ODEFunction has no field f1)

  • SplitEuler: 1st order fully explicit method. Used for testing accuracy
    of splits.
  • IMEXEuler : 1st order explicit Euler mixed with implicit Euler. Fixed time
    step only.
  • CNAB2: Crank-Nicolson Adams Bashforth Order 2. Fixed time step only.
  • SBDF2 : 2nd order IMEX BDF method. Fixed time step only.
  • SBDF3 : 3rd order IMEX BDF method. Fixed time step only. In development.
  • SBDF4 : 4th order IMEX BDF method. Fixed time step only. In development.
  • LawsonEuler - First order exponential Euler scheme. Fixed timestepping only.
  • NorsettEuler - First order exponential-RK scheme. Fixed timestepping only. Alias: ETD1.
  • ETD2 - Second order Exponential Time Differencing method (in development). Fixed timestepping only. Doesn't support Krylov approximation.
  • ETDRK2 - 2nd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK3 - 3rd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK4 - 4th order exponential-RK scheme. Fixed timestepping only.
  • HochOst4 - 4th order exponential-RK scheme with stiff order 4. Fixed
    timestepping only.

Basically, all of these solvers are only defined on SplitODEProblems.

Not defined (ERROR: UndefVarError: CNLF not defined in Main)

  • CNLF: Crank-Nicolson Leapfrog of Order 2. Fixed time step only.

Problems when using IMEX:

ERROR: MethodError: no method matching size(::typeof(DispersiveShallowWater.rhs_split_stiff!))

The function size exists, but no method is defined for this combination of argument types.
(Maybe the problem here is how I implemented it, but why do they want the size of a function?)

  • LawsonEuler - First order exponential Euler scheme. Fixed timestepping only.
  • NorsettEuler - First order exponential-RK scheme. Fixed timestepping only. Alias: ETD1.
  • ETD2 - Second order Exponential Time Differencing method (in development). Fixed timestepping only. Doesn't support Krylov approximation.
  • ETDRK2 - 2nd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK3 - 3rd order exponential-RK scheme. Fixed timestepping only.
  • ETDRK4 - 4th order exponential-RK scheme. Fixed timestepping only.
  • HochOst4 - 4th order exponential-RK scheme with stiff order 4. Fixed
    timestepping only.

Algorithms which work besides KenCarp:

  • SplitEuler: 1st order fully explicit method. Used for testing accuracy
    of splits.
  • IMEXEuler : 1st order explicit Euler mixed with implicit Euler. Fixed time
    step only.
  • CNAB2: Crank-Nicolson Adams Bashforth Order 2. Fixed time step only.
  • CNLF: Crank-Nicolson Leapfrog of Order 2. Fixed time step only.
  • SBDF2 : 2nd order IMEX BDF method. Fixed time step only.
  • SBDF3 : 3rd order IMEX BDF method. Fixed time step only. In development.
  • SBDF4 : 4th order IMEX BDF method. Fixed time step only. In development.

But I am not sure a) how practical they are, because they only support fixed time steps, and b) I cannot really compare them (besides noting that ImplicitEuler() with no IMEX is slower than IMEXEuler() with IMEX).

@ranocha (Member)

ranocha commented May 26, 2025

The exponential integrators (LawsonEuler etc.) require a linear stiff term satisfying some linear interface - SciMLOperators.jl, I guess. That's why they did not work.

@ranocha (Member)

ranocha commented May 26, 2025

Thanks! So the only methods that worked are the KenCarp* algorithms? And they all share the same problematic behavior of worse performance with the IMEX formulation compared to a single RHS?

@cwittens (Member Author)

cwittens commented May 26, 2025

Thanks! So the only methods that worked are the KenCarp* algorithms? And they all share the same problematic behavior of worse performance with the IMEX formulation compared to a single RHS?

For the solitary wave initial condition, KenCarp4() was faster with IMEX than with a single rhs! (roughly 210 ms with IMEX compared to 250 ms without).

See: #207 (comment)

Should I also test the other KenCarps on the manufactured solution and on the solitary wave, or first focus on trying to use SciMLOperators, because this probably changes the results again if it works?

@ranocha (Member)

ranocha commented May 27, 2025

Thanks! SciMLOperators should have a significant influence, so working on that first would be nice.

@cwittens (Member Author)

cwittens commented Jun 1, 2025

Thanks! SciMLOperators should have a significant influence, so working on that first would be nice.

MatrixOperator

I tried using MatrixOperator and it significantly worsens performance / strange behaviour occurs.

For now I only experimented locally with my own implementation, first without using ArrayPartition (see below).

I define a MatrixOperator (e.g. to be able to use the exponential integrators)

opD3 = MatrixOperator(-1 / 6 * sqrt_gd * d^2 * D3)

and as comparisons

function rhs_stiff_op!(dη, η, p, t)
    (; opD3) = p
    mul!(dη, opD3, η)
    return nothing
end

function rhs_stiff!(dη, η, p, t)
    (; D3, g, d) = p
    sqrt_gd = sqrt(g * d)
    mul!(dη, D3, η)
    @. dη *= -1 / 6 * sqrt_gd * d^2
    return nothing
end

Now I can define

prob_split  = ODEProblem(SplitFunction(opD3,          rhs_nonstiff!), u0, tspan, p)
prob_split2 = ODEProblem(SplitFunction(rhs_stiff_op!, rhs_nonstiff!), u0, tspan, p)
prob_split3 = ODEProblem(SplitFunction(rhs_stiff!,    rhs_nonstiff!), u0, tspan, p)

And even though I can now use exponential integrators on prob_split (prob_split2/3 gives an error, as expected), they take on the order of tens of seconds (instead of ms) to run.

And when using something like KenCarp4, where one is able to compare the results, I get:

@btime sol_split  = solve(prob_split,  KenCarp4(), dt = 0.05, save_everystep=false); # => 1.353 s (7876 allocations: 19.22 MiB)
@btime sol_split2 = solve(prob_split2, KenCarp4(), dt = 0.05, save_everystep=false); # => 17.259 ms (251 allocations: 4.26 MiB)
@btime sol_split3 = solve(prob_split3, KenCarp4(), dt = 0.05, save_everystep=false); # => 16.673 ms (251 allocations: 4.26 MiB)

(where of course one still needs to keep in mind that mul!(Vector, MatrixOperator, Vector) is pretty much as fast as mul!(Vector, SparseMatrix, Vector), which is considerably slower than using SBP operators.)

ArrayPartition

When doing the same as above but using ArrayPartition, sol_split no longer works and gives the following error:

ERROR: MethodError: no method matching ArrayPartition{Float64, Tuple{Vector{Float64}}}(::UndefInitializer, ::Int64)
The type `ArrayPartition{Float64, Tuple{Vector{Float64}}}` exists, but no method is defined for this combination of argument types when trying to construct it.

Closest candidates are:
  (::Type{ArrayPartition{T, S}} where {T, S<:Tuple})(::Any)
   @ RecursiveArrayTools C:\Users\colli\.julia\packages\RecursiveArrayTools\iAeBW\src\array_partition.jl:27

Stacktrace:
  [1] Krylov.GmresSolver(m::Int64, n::Int64, memory::Int64, S::Type{ArrayPartition{Float64, Tuple{Vector{Float64}}}})
    @ Krylov C:\Users\colli\.julia\packages\Krylov\wgV9p\src\krylov_solvers.jl:2495
  [2] Krylov.GmresSolver(A::OrdinaryDiffEqDifferentiation.WOperator{…}, b::ArrayPartition{…}, memory::Int64)
    @ Krylov C:\Users\colli\.julia\packages\Krylov\wgV9p\src\krylov_solvers.jl:2513
  [3] init_cacheval(alg::KrylovJL{…}, A::OrdinaryDiffEqDifferentiation.WOperator{…}, b::ArrayPartition{…}, u::ArrayPartition{…}, Pl::LinearSolve.InvPreconditioner{…}, Pr::Diagonal{…}, maxiters::Int64, abstol::Float64, reltol::Float64, verbose::Bool, assumptions::OperatorAssumptions{…}; zeroinit::Bool)
    @ LinearSolve C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\iterative_wrappers.jl:223
  [4] solve!(cache::LinearSolve.LinearCache{…}, alg::KrylovJL{…}; kwargs::@Kwargs{})
    @ LinearSolve C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\iterative_wrappers.jl:255
  [5] macro expansion
    @ C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\default.jl:339 [inlined]
  [6] solve!(::LinearSolve.LinearCache{…}, ::LinearSolve.DefaultLinearSolver; assump::OperatorAssumptions{…}, kwargs::@Kwargs{})
    @ LinearSolve C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\default.jl:332
  [7] solve!
    @ C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\default.jl:332 [inlined]
  [8] #solve!#8
    @ C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\common.jl:307 [inlined]
  [9] solve!
    @ C:\Users\colli\.julia\packages\LinearSolve\KN05h\src\common.jl:306 [inlined]
 [10] #dolinsolve#2
    @ C:\Users\colli\.julia\packages\OrdinaryDiffEqDifferentiation\M5647\src\linsolve_utils.jl:29 [inlined]
 [11] dolinsolve
    @ C:\Users\colli\.julia\packages\OrdinaryDiffEqDifferentiation\M5647\src\linsolve_utils.jl:5 [inlined]
 [12] compute_step!(nlsolver::OrdinaryDiffEqNonlinearSolve.NLSolver{…}, integrator::OrdinaryDiffEqCore.ODEIntegrator{…}, γW::Float64)
    @ OrdinaryDiffEqNonlinearSolve C:\Users\colli\.julia\packages\OrdinaryDiffEqNonlinearSolve\e4hoO\src\newton.jl:224
 [13] nlsolve!(nlsolver::OrdinaryDiffEqNonlinearSolve.NLSolver{…}, integrator::OrdinaryDiffEqCore.ODEIntegrator{…}, cache::OrdinaryDiffEqSDIRK.KenCarp4Cache{…}, repeat_step::Bool)
    @ OrdinaryDiffEqNonlinearSolve C:\Users\colli\.julia\packages\OrdinaryDiffEqNonlinearSolve\e4hoO\src\nlsolve.jl:52
 [14] perform_step!(integrator::OrdinaryDiffEqCore.ODEIntegrator{…}, cache::OrdinaryDiffEqSDIRK.KenCarp4Cache{…}, repeat_step::Bool)
    @ OrdinaryDiffEqSDIRK C:\Users\colli\.julia\packages\OrdinaryDiffEqSDIRK\YAA9M\src\kencarp_kvaerno_perform_step.jl:1009
 [15] perform_step!
    @ C:\Users\colli\.julia\packages\OrdinaryDiffEqSDIRK\YAA9M\src\kencarp_kvaerno_perform_step.jl:957 [inlined]
 [16] solve!(integrator::OrdinaryDiffEqCore.ODEIntegrator{…})
    @ OrdinaryDiffEqCore C:\Users\colli\.julia\packages\OrdinaryDiffEqCore\bMOsj\src\solve.jl:620
 [17] __solve(::ODEProblem{…}, ::KenCarp4{…}; kwargs::@Kwargs{})
    @ OrdinaryDiffEqCore C:\Users\colli\.julia\packages\OrdinaryDiffEqCore\bMOsj\src\solve.jl:7
 [18] __solve
    @ C:\Users\colli\.julia\packages\OrdinaryDiffEqCore\bMOsj\src\solve.jl:1 [inlined]
 [19] #solve_call#35
    @ C:\Users\colli\.julia\packages\DiffEqBase\zYZst\src\solve.jl:635 [inlined]
 [20] solve_call
    @ C:\Users\colli\.julia\packages\DiffEqBase\zYZst\src\solve.jl:592 [inlined]
 [21] #solve_up#44
    @ C:\Users\colli\.julia\packages\DiffEqBase\zYZst\src\solve.jl:1142 [inlined]
 [22] solve_up
    @ C:\Users\colli\.julia\packages\DiffEqBase\zYZst\src\solve.jl:1120 [inlined]
 [23] #solve#42
    @ C:\Users\colli\.julia\packages\DiffEqBase\zYZst\src\solve.jl:1057 [inlined]
 [24] var"##core#371"()
    @ Main C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:598
 [25] var"##sample#372"(::Tuple{}, __params::BenchmarkTools.Parameters)
    @ Main C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:607
 [26] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; maxevals::Int64, kwargs::@Kwargs{})
    @ BenchmarkTools C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:186
 [27] _lineartrial(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters)
    @ BenchmarkTools C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:181
 [28] #invokelatest#2
    @ .\essentials.jl:1055 [inlined]
 [29] invokelatest
    @ .\essentials.jl:1052 [inlined]
 [30] #lineartrial#46
    @ C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:51 [inlined]
 [31] lineartrial
    @ C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:50 [inlined]
 [32] tune!(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, verbose::Bool, pad::String, kwargs::@Kwargs{})
    @ BenchmarkTools C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:299
 [33] tune!
    @ C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:288 [inlined]
 [34] tune!(b::BenchmarkTools.Benchmark)
    @ BenchmarkTools C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:288
 [35] top-level scope
    @ C:\Users\colli\.julia\packages\BenchmarkTools\1i1mY\src\execution.jl:728
Some type information was truncated. Use `show(err)` to see complete types.

Which to me looks like Krylov.GmresSolver just does not support ArrayPartitions.

Also interestingly (as already stated above by Hendrik), the other solvers also get quite a bit slower when using ArrayPartitions, even when "optimizing" the rhs! functions for ArrayPartitions, e.g. doing:

function rhs_stiff_op!(du::ArrayPartition{Float64, Tuple{Vector{Float64}}}, u::ArrayPartition{Float64, Tuple{Vector{Float64}}}, p, t)
    dη = du.x[1] # without unpacking it is even slower
    η = u.x[1]
    (; opD3) = p
    mul!(dη, opD3, η)
    return nothing
end

I get:

@btime sol_split  = solve(prob_split,  KenCarp4(), dt = 0.05, save_everystep=false); # => ERROR
@btime sol_split2 = solve(prob_split2, KenCarp4(), dt = 0.05, save_everystep=false); # => 352.997 ms (1263 allocations: 18.31 MiB)
@btime sol_split3 = solve(prob_split3, KenCarp4(), dt = 0.05, save_everystep=false); # => 406.871 ms (1263 allocations: 18.31 MiB)

Here I am not sure where all the allocations come from (the solver statistics in sol.stats are identical).

FunctionOperator

I still wasn't able to make FunctionOperator work, but in my defense, even following the getting_started tutorial 1:1 gives errors (I am going to write an issue).
So far I haven't asked on the Discourse/Slack yet, because even if I can make it work without ArrayPartitions, I still feel like using them would fail for the same reason that MatrixOperator does not work.

@cwittens
Copy link
Member Author

cwittens commented Jun 1, 2025

On the topic of SciMLOperators and IMEX:
It looks like SciMLOperators have not been fully integrated into OrdinaryDiffEq.jl for over two years now, see: SciML/SciMLOperators.jl#142

And it doesn't look like this will change soon. A fix was discussed as being 1-2 weeks away (as of May 2023) (SciML/SciMLOperators.jl#191), and there have basically been no updates since then (SciML/OrdinaryDiffEq.jl#2078 (comment)).

So with this in mind, I guess it won't make sense to try to make SciMLOperators work in DSW.jl (?)

@ranocha
Copy link
Member

ranocha commented Jun 1, 2025

Yes, it appears that this is indeed the case. @JoshuaLampert What's your take on this? Shall we just skip this part and focus on classical time integration methods and the remaining tasks?

@JoshuaLampert
Copy link
Member

It's a bit sad to see that the SciML ecosystem doesn't seem to be ready for IMEX methods for our use cases. I see roughly the following possibilities:

  1. Ask the SciML people for help/ideas.
  2. Finish this PR, set split_ode = Val{false}() as default, and merge it anyway, since it adds the technical possibility to use IMEX methods without breaking anything. The disadvantages would be that it contradicts the principle of only adding features that are a real benefit, that users wanting to use IMEX methods will wonder why they are slow (we could document that properly), and that we need to maintain the additional code (which should be fine, I think).
  3. Just keep this open and continue to work on other things.

So, if you still have some motivation to continue working on this, @cwittens, and would be willing to discuss this issue with the SciML folks, I would go with 1. But if not (which would be totally understandable), I tend to go with 3. for now and follow the status of SciMLOperators.jl and OrdinaryDiffEq.jl, so we can eventually come back to this once there is a solution. That would of course be a bit unfortunate because of the work you spent on this, @cwittens. Thanks a lot anyway! At least we learned something.

@ranocha
Copy link
Member

ranocha commented Jun 1, 2025

I agree. Given that there hasn't been any momentum in the SciML ecosystem, I would prefer focusing on other aspects for now that improve DispersiveShallowWater.jl directly. Thus, I tend to prefer option 3 for now, with the idea that we can come back to this later. We can also talk about this in person, @cwittens.

@github-actions
Copy link
Contributor

github-actions bot commented Jun 3, 2025

Benchmark Results (Julia v1.10)

Time benchmarks
main b61e48d... main / b61e48d...
bbm_1d/bbm_1d_basic.jl - rhs!: 14.4 ± 0.41 μs 13.8 ± 0.29 μs 1.05 ± 0.037
bbm_1d/bbm_1d_fourier.jl - rhs!: 0.528 ± 0.0098 ms 0.527 ± 0.077 ms 1 ± 0.15
bbm_bbm_1d/bbm_bbm_1d_basic_reflecting.jl - rhs!: 0.0806 ± 0.00037 ms 0.0802 ± 0.00031 ms 1 ± 0.006
bbm_bbm_1d/bbm_bbm_1d_dg.jl - rhs!: 0.0343 ± 0.00071 ms 0.0345 ± 0.00053 ms 0.995 ± 0.026
bbm_bbm_1d/bbm_bbm_1d_relaxation.jl - rhs!: 27.3 ± 0.69 μs 27.2 ± 0.39 μs 1 ± 0.029
bbm_bbm_1d/bbm_bbm_1d_upwind_relaxation.jl - rhs!: 0.0483 ± 0.00061 ms 0.0482 ± 0.00061 ms 1 ± 0.018
hyperbolic_serre_green_naghdi_1d/hyperbolic_serre_green_naghdi_dingemans.jl - rhs!: 4.36 ± 0.05 μs 4.24 ± 0.03 μs 1.03 ± 0.014
kdv_1d/kdv_1d_basic.jl - rhs!: 1.42 ± 0.02 μs 1.44 ± 0.021 μs 0.986 ± 0.02
kdv_1d/kdv_1d_implicit.jl - rhs!: 1.41 ± 0.021 μs 1.43 ± 0.011 μs 0.985 ± 0.016
serre_green_naghdi_1d/serre_green_naghdi_well_balanced.jl - rhs!: 0.196 ± 0.008 ms 0.199 ± 0.0082 ms 0.982 ± 0.057
svaerd_kalisch_1d/svaerd_kalisch_1d_dingemans_relaxation.jl - rhs!: 0.143 ± 0.0039 ms 0.145 ± 0.0036 ms 0.989 ± 0.037
time_to_load 2 ± 0.035 s 2 ± 0.0071 s 1 ± 0.018
Memory benchmarks
main b61e48d... main / b61e48d...
bbm_1d/bbm_1d_basic.jl - rhs!: 1 allocs: 4.12 kB 1 allocs: 4.12 kB 1
bbm_1d/bbm_1d_fourier.jl - rhs!: 1 allocs: 4.12 kB 1 allocs: 4.12 kB 1
bbm_bbm_1d/bbm_bbm_1d_basic_reflecting.jl - rhs!: 5 allocs: 1.17 kB 5 allocs: 1.17 kB 1
bbm_bbm_1d/bbm_bbm_1d_dg.jl - rhs!: 10 allocs: 8.62 kB 10 allocs: 8.62 kB 1
bbm_bbm_1d/bbm_bbm_1d_relaxation.jl - rhs!: 2 allocs: 8.25 kB 2 allocs: 8.25 kB 1
bbm_bbm_1d/bbm_bbm_1d_upwind_relaxation.jl - rhs!: 2 allocs: 8.25 kB 2 allocs: 8.25 kB 1
hyperbolic_serre_green_naghdi_1d/hyperbolic_serre_green_naghdi_dingemans.jl - rhs!: 0 allocs: 0 B 0 allocs: 0 B
kdv_1d/kdv_1d_basic.jl - rhs!: 0 allocs: 0 B 0 allocs: 0 B
kdv_1d/kdv_1d_implicit.jl - rhs!: 0 allocs: 0 B 0 allocs: 0 B
serre_green_naghdi_1d/serre_green_naghdi_well_balanced.jl - rhs!: 0.075 k allocs: 0.66 MB 0.075 k allocs: 0.66 MB 1
svaerd_kalisch_1d/svaerd_kalisch_1d_dingemans_relaxation.jl - rhs!: 0.042 k allocs: 0.315 MB 0.042 k allocs: 0.315 MB 1
time_to_load 0.153 k allocs: 14.5 kB 0.153 k allocs: 14.5 kB 1

@cwittens cwittens marked this pull request as draft June 17, 2025 08:46
@cwittens
Copy link
Member Author

I converted it to a draft now and will come back to it if the SciML ecosystem seems ready for IMEX methods at some point.

Labels

enhancement New feature or request
