
Conversation

@DanielDoehring DanielDoehring requested a review from vchuravy July 28, 2025 14:02
@DanielDoehring DanielDoehring added the example Adding/changing examples (elixirs) label Jul 28, 2025
@github-actions
Contributor

Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

Purpose and scope

  • The PR has a single goal that is clear from the PR title and/or description.
  • All code changes represent a single set of modifications that logically belong together.
  • No more than 500 lines of code are changed or there is no obvious way to split the PR into multiple PRs.

Code quality

  • The code can be understood easily.
  • Newly introduced names for variables etc. are self-descriptive and consistent with existing naming conventions.
  • There are no redundancies that can be removed by simple modularization/refactoring.
  • There are no leftover debug statements or commented code sections.
  • The code adheres to our conventions and style guide, and to the Julia guidelines.

Documentation

  • New functions and types are documented with a docstring or top-level comment.
  • Relevant publications are referenced in docstrings (see example for formatting).
  • Inline comments are used to document longer or unusual code sections.
  • Comments describe intent ("why?") and not just functionality ("what?").
  • If the PR introduces a significant change or new feature, it is documented in NEWS.md with its PR number.

Testing

  • The PR passes all tests.
  • New or modified lines of code are covered by tests.
  • New or modified tests run in less than 10 seconds.

Performance

  • There are no type instabilities or memory allocations in performance-critical parts.
  • If the PR intent is to improve performance, before/after time measurements are posted in the PR.

Verification

  • The correctness of the code was verified using appropriate tests.
  • If new equations/methods are added, a convergence test has been run and the results
    are posted in the PR.

Created with ❤️ by the Trixi.jl community.

@codecov

codecov bot commented Jul 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 96.68%. Comparing base (035aec0) to head (a94dd47).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2504   +/-   ##
=======================================
  Coverage   96.68%   96.68%           
=======================================
  Files         511      511           
  Lines       42278    42278           
=======================================
  Hits        40875    40875           
  Misses       1403     1403           
Flag Coverage Δ
unittests 96.68% <ø> (ø)

@JoshuaLampert
Member

JoshuaLampert commented Jul 28, 2025

Couldn't we also use the AnalysisCallback and then test the errors (including the uncertainty) with the usual @test_trixi_include? Out of curiosity I just tried that and it seems to work out of the box, which is pretty cool I think:

julia> l2, linf = analysis_callback(sol)
(l2 = Measurement{Float64}[0.0013 ± 0.018], linf = Measurement{Float64}[0.0044 ± 0.063])

Edit: However, it seems that the uncertainties are not shown in the printed output of the AnalysisCallback:

 Simulation running 'LinearScalarAdvectionEquation1D' with DGSEM(polydeg=3)
────────────────────────────────────────────────────────────────────────────────────────────────────
 #timesteps:                 27                run time:       5.50768890e-02 s
 Δt:             4.16090622e-02                └── GC time:    2.47226840e-02 s (44.888%)
 sim. time:      1.50000000e+00 (100.000%)     time/DOF/rhs!:  3.57165381e-06 s
                                               PID:            3.95505084e-06 s
 #DOFs per field:           128                alloc'd memory:        198.639 MiB
 #elements:                  32

 Variable:       scalar        
 L2 error:       1.25768930e-03
 Linf error:     4.42520451e-03
 ∑∂S/∂U  Uₜ :  -3.90304695e-05
────────────────────────────────────────────────────────────────────────────────────────────────────

I haven't looked into that, but maybe that's easy to fix given that the AnalysisCallback is, in principle, able to track the measurements?
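
(As an illustration of what that could look like: a minimal sketch, assuming the errors are kept as Measurement values, which formats value and uncertainty separately via the .val/.err fields - this is not Trixi's actual printing code.)

using Measurements, Printf

# Example value as printed above.
l2_error = 0.0013 ± 0.018

# Print the nominal value and the propagated uncertainty explicitly,
# instead of dropping the uncertainty as in the current output.
@printf(" L2 error:       %.8e ± %.8e\n", l2_error.val, l2_error.err)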

@DanielDoehring
Member Author

Couldn't we also use the AnalysisCallback and then test the errors (including the uncertainty) with the usual @test_trixi_include? Out of curiosity I just tried that and it seems to work out of the box, which is pretty cool I think:

julia> l2, linf = analysis_callback(sol)
(l2 = Measurement{Float64}[0.0013 ± 0.018], linf = Measurement{Float64}[0.0044 ± 0.063])

Yeah I also tried this, but I get some strange errors when turning this into the regular tests:

elixir_advection_uncertainty.jl: Test Failed at /home/daniel/git/Trixi.jl/test/test_trixi.jl:71
  Expression: isapprox(l2_expected, l2_actual, atol = 1.1102230246251565e-13, rtol = 1.4901161193847656e-8)
   Evaluated: isapprox(0.0013 ± 0.018, 0.0013 ± 0.018; atol = 1.1102230246251565e-13, rtol = 1.4901161193847656e-8)

Stacktrace:
 [1] macro expansion
   @ ~/Software/julia-1.10.9/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
 [2] macro expansion
   @ ~/git/Trixi.jl/test/test_trixi.jl:71 [inlined]
 [3] macro expansion
   @ ~/git/Trixi.jl/test/test_tree_1d_advection.jl:174 [inlined]
 [4] macro expansion
   @ ~/Software/julia-1.10.9/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
 [5] macro expansion
   @ ~/git/Trixi.jl/test/test_tree_1d_advection.jl:174 [inlined]
 [6] top-level scope
   @ ~/git/Trixi.jl/test/test_trixi.jl:186
elixir_advection_uncertainty.jl: Test Failed at /home/daniel/git/Trixi.jl/test/test_trixi.jl:78
  Expression: isapprox(linf_expected, linf_actual, atol = 1.1102230246251565e-13, rtol = 1.4901161193847656e-8)
   Evaluated: isapprox(0.0044 ± 0.063, 0.0044 ± 0.063; atol = 1.1102230246251565e-13, rtol = 1.4901161193847656e-8)

But maybe there is a workaround, e.g., checking that the error lies within specified bounds.
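
For instance, a sketch of such a bounds check (not what was merged; the reference value is the full-digit L2 error quoted further down, and the uncertainty bound is made up for illustration):

using Test

l2, linf = analysis_callback(sol)

# Check the nominal value with the usual tolerances ...
@test isapprox(l2[1].val, 0.0012576893000440965; rtol = 1.0e-8)
# ... and only require the propagated uncertainty to stay below an assumed bound.
@test l2[1].err < 0.02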

@JoshuaLampert
Member

JoshuaLampert commented Jul 28, 2025

Just to be sure, did you enter 0.0013 ± 0.018 and 0.0044 ± 0.063 as the expected errors, or the real errors with all digits? I think Measurements.jl just prints the first four digits. You can see all digits, e.g., with

julia> l2, linf = analysis_callback(sol)
(l2 = Measurement{Float64}[0.0013 ± 0.018], linf = Measurement{Float64}[0.0044 ± 0.063])

julia> l2[1].val
0.0012576893000440965

julia> l2[1].err
0.017581020765034417

julia> linf[1].val
0.004425204509676317

julia> linf[1].err
0.0633672486044246
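
Based on these full-digit values, the regular test could then look roughly like this (just a sketch: the elixir name is taken from the error message above, while the examples subdirectory and the keyword form of @test_trixi_include are assumptions):

using Measurements

@test_trixi_include(joinpath(examples_dir(), "tree_1d_dgsem",
                             "elixir_advection_uncertainty.jl"),
                    l2=[0.0012576893000440965 ± 0.017581020765034417],
                    linf=[0.004425204509676317 ± 0.0633672486044246])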

Member

@JoshuaLampert JoshuaLampert left a comment

Thanks! Nice to see that the analysis_callback is now working in the tests!

@DanielDoehring
Member Author

Just to be sure, did you enter 0.0013 ± 0.018 and 0.0044 ± 0.063 as the expected errors, or the real errors with all digits? I think Measurements.jl just prints the first four digits. You can see all digits, e.g., with

Yeah, that was the issue - I expected some significant-digits shenanigans here, but (in contrast to the standard errors) indeed only rounded values were shown.

JoshuaLampert previously approved these changes Jul 29, 2025
@JoshuaLampert
Member

JoshuaLampert commented Jul 29, 2025

We have the rate-limiting issue again: https://github.com/trixi-framework/Trixi.jl/actions/runs/16600567477/job/46967562325?pr=2504#step:7:13587. We thought this was fixed by #2415, but the problem seems to be back. We also saw the same in TrixiShallowWater.jl this morning (after rerunning the tests, they succeeded again). Do you have any idea why this happens again, @vchuravy? Maybe something changed in the last few days, because we didn't have this issue for a while, and today I saw four tests failing because of it (one in TrixiShallowWater.jl, two in this PR, and one in #2505).

@DanielDoehring DanielDoehring merged commit 13fd168 into trixi-framework:main Jul 31, 2025
137 of 150 checks passed
@DanielDoehring DanielDoehring deleted the ElixirMeasurements branch July 31, 2025 11:48
@vchuravy
Member

Do you have any idea why this happens again

No, no clue.

vchuravy added a commit that referenced this pull request Aug 19, 2025
@JoshuaLampert
Member

Do you have any idea why this happens again

No, no clue.

They disappeared again. So let's hope this was just a weird GitHub hiccup.

ranocha pushed a commit that referenced this pull request Aug 19, 2025