
Conversation

@sbairos sbairos commented Sep 3, 2025

No description provided.

github-actions bot commented Sep 3, 2025

Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

Purpose and scope

  • The PR has a single goal that is clear from the PR title and/or description.
  • All code changes represent a single set of modifications that logically belong together.
  • No more than 500 lines of code are changed or there is no obvious way to split the PR into multiple PRs.

Code quality

  • The code can be understood easily.
  • Newly introduced names for variables etc. are self-descriptive and consistent with existing naming conventions.
  • There are no redundancies that can be removed by simple modularization/refactoring.
  • There are no leftover debug statements or commented code sections.
  • The code adheres to our conventions and style guide, and to the Julia guidelines.

Documentation

  • New functions and types are documented with a docstring or top-level comment.
  • Relevant publications are referenced in docstrings (see example for formatting).
  • Inline comments are used to document longer or unusual code sections.
  • Comments describe intent ("why?") and not just functionality ("what?").
  • If the PR introduces a significant change or new feature, it is documented in NEWS.md with its PR number.

Testing

  • The PR passes all tests.
  • New or modified lines of code are covered by tests.
  • New or modified tests run in less than 10 seconds.

Performance

  • There are no type instabilities or memory allocations in performance-critical parts.
  • If the PR intent is to improve performance, before/after time measurements are posted in the PR.

Verification

  • The correctness of the code was verified using appropriate tests.
  • If new equations/methods are added, a convergence test has been run and the results are posted in the PR.

Created with ❤️ by the Trixi.jl community.

@sbairos sbairos force-pushed the split_Jac_SCT branch 2 times, most recently from d529252 to f59c8dc on September 16, 2025 07:21
Comment on lines 84 to 109
### Compare sparsity pattern based on hyperbolic-parabolic ###
### problem with sparsity pattern of parabolic-only problem ###

# We construct a semidiscretization just as we did previously, but this time with advection_velocity=0
equations_hyperbolic_no_advection = LinearScalarAdvectionEquation1D(0)
equations_parabolic_no_advection = LaplaceDiffusion1D(diffusivity(), equations_hyperbolic_no_advection)

semi_jac_type_no_advection = SemidiscretizationHyperbolicParabolic(mesh,
                                                                   (equations_hyperbolic_no_advection,
                                                                    equations_parabolic_no_advection),
                                                                   initial_condition, solver,
                                                                   uEltype = jac_eltype)
ode_jac_type_no_advection = semidiscretize(semi_jac_type_no_advection, tspan)

# Do sparsity detection on our semidiscretization with advection turned off
rhs_parabolic_wrapped! = (du_ode, u0_ode) -> Trixi.rhs_parabolic!(du_ode, u0_ode, semi_jac_type_no_advection, tspan[1])

jac_prototype_parabolic_no_advection = jacobian_sparsity(rhs_parabolic_wrapped!, du_ode, u0_ode, jac_detector)

coloring_result_no_advection = coloring(jac_prototype_parabolic_no_advection, coloring_prob, coloring_alg)
coloring_vec_parabolic_no_advection = column_colors(coloring_result_no_advection)

# Since the stencils of parabolic solvers are always larger than those of hyperbolic solvers,
# the detected sparsity pattern never depends on the hyperbolic part of the problem
@assert(jac_prototype_parabolic == jac_prototype_parabolic_no_advection)
@assert(coloring_vec_parabolic == coloring_vec_parabolic_no_advection)

@sbairos (Author) commented:

This is my comparison of advection_velocity=0 with the normal advection velocity. I'm not sure how convincing it is, though, since we never actually do sparsity detection on the rhs!() function. Should I also include rhs!() in this comparison (I guess we would add it in the previous section and leave it out in this one)?

@DanielDoehring (Owner) commented:

Yes, please use the entire rhs! 👍
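
For reference, a minimal sketch of how the suggested comparison could look, reusing the names from the snippet above and assuming the with-advection semidiscretization from the previous section is called semi_jac_type. In Trixi.jl's split setting, Trixi.rhs! provides the hyperbolic part and Trixi.rhs_parabolic! the parabolic part, so the wrapper below (rhs_full_wrapped!, an illustrative name) adds the two contributions:

# Evaluate the full right-hand side (hyperbolic + parabolic) of the advective problem
function rhs_full_wrapped!(du_ode, u0_ode)
    Trixi.rhs!(du_ode, u0_ode, semi_jac_type, tspan[1])           # hyperbolic contribution
    du_hyperbolic = copy(du_ode)
    Trixi.rhs_parabolic!(du_ode, u0_ode, semi_jac_type, tspan[1]) # parabolic contribution
    du_ode .+= du_hyperbolic
    return nothing
end

jac_prototype_full = jacobian_sparsity(rhs_full_wrapped!, du_ode, u0_ode, jac_detector)

# The parabolic stencil contains the hyperbolic one, so the combined pattern of the
# advective problem should match the parabolic-only pattern of the advection-free problem
@assert(jac_prototype_full == jac_prototype_parabolic_no_advection)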

# Default `AutoForwardDiff()` is not yet working, see
# https://github.com/trixi-framework/Trixi.jl/issues/2369
SBDF2(; autodiff = AutoFiniteDiff());
abstol = time_int_tol, reltol = time_int_tol, dt = 0.0005,

@sbairos (Author) commented:

dt here has to be very small because for a larger dt such as 0.005, the simulation stops due to instability.

This means that this elixir is quite slow (it runs for ~180s even though the timespan it covers is only 0.05). I did try the other IMEX methods here and they were all approximately equally slow.
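
For context, a minimal sketch of how these arguments enter the time integration call, assuming the usual Trixi.jl elixir structure where ode comes from semidiscretize(semi, tspan); the value of time_int_tol and the save_everystep option are placeholders:

using OrdinaryDiffEq
using ADTypes: AutoFiniteDiff

time_int_tol = 1e-8  # placeholder; the elixir sets its own tolerance
sol = solve(ode,
            # Default `AutoForwardDiff()` is not yet working, see
            # https://github.com/trixi-framework/Trixi.jl/issues/2369
            SBDF2(; autodiff = AutoFiniteDiff());
            abstol = time_int_tol, reltol = time_int_tol, dt = 0.0005,
            save_everystep = false)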

@DanielDoehring (Owner) commented:

Hm okay, what about looser tolerances?

@sbairos (Author) commented:

Unfortunately, looser tolerances don't seem to help for larger timespans. I originally started with tspan=(0.0, 0.5), but with dt=0.005 or larger it fails at around the 30th timestep even for time_int_tol=1e-1.

With the larger timespan it only starts working at dt=0.0005, but then it takes ~2500s, so I also had to decrease tspan by an order of magnitude to reach a "reasonable" runtime of ~170s.

@DanielDoehring (Owner) commented:

Hm, could you try a different integrator? Maybe something different performs better.

@sbairos (Author) commented:

I tried CNAB2, CNLF2, SBDF2, and IMEXEuler. They all crash at the same spot unfortunately.
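
A hypothetical sketch of the swap that was tried, assuming all four IMEX methods from OrdinaryDiffEq.jl accept the same autodiff keyword as SBDF2; only the algorithm argument changes, everything else stays as in the elixir:

for alg in (IMEXEuler(; autodiff = AutoFiniteDiff()),
            CNAB2(; autodiff = AutoFiniteDiff()),
            CNLF2(; autodiff = AutoFiniteDiff()),
            SBDF2(; autodiff = AutoFiniteDiff()))
    # With dt = 0.005, every one of these hits the non-converging Newton iteration
    solve(ode, alg; abstol = time_int_tol, reltol = time_int_tol, dt = 0.005)
end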

@sbairos (Author) commented on Oct 6, 2025:

I spent some more time looking into this this morning, and it really does seem like the smaller step size is simply needed for these IMEX schemes.

The error that always comes up is:

┌ Warning: Newton steps could not converge and algorithm is not adaptive. Use a lower dt.
└ @ SciMLBase ~/.julia/packages/SciMLBase/wfZCo/src/integrator_interface.jl:637

The following changes made the simulation crash with this error at an earlier timestep:

  • Increasing the solver's polydeg from 3 to 4
  • Increasing the mesh's polydeg from 3 to 4
  • Increasing the trees_per_dimension from (4, 4) to (8, 8)
  • Increasing initial_refinement_level from 2 to 3

This was surprising to me, because I would have thought that a finer mesh would lead to fewer discontinuities.

Also, I've found that, at least for the higher polydeg change, the simulation can still run successfully if I keep this very small dt (it just takes even longer). I've yet to see the program crash when dt=0.0005. I'll keep trying the other settings; it's just a slow process, since it takes a really long time to run with any of these changes and such a small dt.
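
For orientation, a hypothetical reconstruction of where the settings listed above appear in a Trixi.jl elixir; the baseline values come from the comment, while names such as surface_flux, coordinates_min, and coordinates_max are placeholders:

# Baseline configuration; each of the individual increases listed above made the run crash earlier
solver = DGSEM(polydeg = 3, surface_flux = flux_lax_friedrichs)   # solver polydeg 3 -> 4
trees_per_dimension = (4, 4)                                      # -> (8, 8)
mesh = P4estMesh(trees_per_dimension, polydeg = 3,                # mesh polydeg 3 -> 4
                 coordinates_min = coordinates_min,
                 coordinates_max = coordinates_max,
                 initial_refinement_level = 2)                    # -> 3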

@DanielDoehring (Owner) commented on Oct 13, 2025:

Okay, that explains the poor behaviour, thanks for investigating this!
