
Conversation


@sbryngelson sbryngelson commented Nov 18, 2025

User description

Vanilla CI test to see what passes/fails

PR Type

Documentation

Description

  • Fix documentation comment formatting in m_bubbles.fpp

  • Remove redundant article "the" from module brief description

Diagram Walkthrough

flowchart LR
  A["m_bubbles.fpp<br/>documentation comment"] -- "remove 'the'<br/>improve clarity" --> B["Updated brief<br/>description"]

File Walkthrough

Relevant files
  • Documentation — src/simulation/m_bubbles.fpp: Fix module documentation comment formatting
      • Removed redundant article "the" from module brief description
      • Improved documentation comment clarity and conciseness
      +1/-1

CodeAnt-AI Description

Align Riemann solver direction handling and bubble pressure closure

What Changed

  • Riemann solver fluxes, momenta, energy, elasticity, and source terms now follow the current sweep direction so each axis uses the correct velocities and wave speeds.
  • Bubble number densities use direct volume-weighted sums and avg_state 2 now subtracts the same averaged bubble pressure terms from boundary pressures, keeping multiphase pressures consistent in fluxes.
  • Phoenix benchmark submissions now request a three-hour slot to accommodate NVHPC + OpenMP runs.

Impact

✅ Accurate directional fluxes
✅ Stable bubble pressure closures
✅ Longer NVHPC benchmarks

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Summary by CodeRabbit

  • Documentation

    • Minor wording update for a module description to improve clarity.
  • Chores

    • Increased job time limit for benchmark submissions to allow longer runs.
  • Refactor

    • Internal solver indexing and variable usage centralized and reorganized to improve maintainability; no change to behavior or numerical results.

✏️ Tip: You can customize this high-level summary in your review settings.

@sbryngelson sbryngelson requested review from a team and Copilot November 18, 2025 22:35

codeant-ai bot commented Nov 18, 2025

CodeAnt AI is reviewing your PR.

@qodo-code-review

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Documentation Only

Change is limited to comment/brief wording; verify no unintended preprocessing effects from comment syntax and that tooling parsing '@brief' in FPP comments remains consistent.

!> @brief This module contains procedures shared by the ensemble-averaged and volume-averaged bubble models.
module m_bubbles


qodo-code-review bot commented Nov 18, 2025

PR Code Suggestions ✨

No code suggestions found for the PR.

@codeant-ai codeant-ai bot added the size:XS This PR changes 0-9 lines, ignoring generated files label Nov 18, 2025
Copilot finished reviewing on behalf of sbryngelson November 18, 2025 22:36

codeant-ai bot commented Nov 18, 2025

CodeAnt AI finished reviewing your PR.


Copilot AI left a comment


Pull Request Overview

This PR contains a minor documentation improvement to the m_bubbles module. The change removes the article "the" from the module's brief description comment for better grammatical clarity.


coderabbitai bot commented Nov 18, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Updated module wording in m_bubbles; extended SLURM job time in CI bench submit script; large internal refactor in Riemann solvers centralizing directional indexing (idx1), reorganizing nbub accumulators and ptilde logic, and harmonizing GPU private lists across HLL/HLLC/HLLD, flux, source, and geometry code paths.

Changes

Cohort / File(s) — Summary
  • Module doc tweak — src/simulation/m_bubbles.fpp: Minor wording change in the module brief (removed article "the"); no functional or interface changes.
  • CI bench script — .github/workflows/phoenix/submit-bench.sh: Increased the sbatch SLURM time limit from 02:00:00 to 03:00:00. No other behavioral changes.
  • Riemann solvers refactor — src/simulation/m_riemann_solvers.fpp: Centralized directional indexing by introducing idx1 (replacing many dir_idx(1) uses); replaced denom-style bubble accumulators with nbub_L/nbub_R; adjusted ptilde/pressure handling, including the avg_state == 2 branches; expanded and normalized GPU private/copyin lists; applied consistent changes across HLL/HLLC/HLLD, flux/source, and geometry computations. Public interfaces unchanged.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller as Caller
  participant Solver as s_riemann_solver
  participant Flux as Flux/Source Logic
  participant GPU as GPU/OpenACC

  Caller->>Solver: call s_riemann_solver(dir_idx)
  Note right of Solver: compute idx1 = dir_idx(1)
  Solver->>Flux: build L/R states using idx1
  Flux->>Flux: accumulate nbub_L / nbub_R\ncompute ptilde_L / ptilde_R
  alt avg_state == 2
    Flux->>Flux: apply ptilde-driven pressure adjustments
  end
  Flux->>GPU: include idx1, nbub_* in private/copyin lists
  Flux->>Solver: return fluxes & source terms
  Solver->>Caller: deliver results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Files/areas to pay extra attention to:

  • src/simulation/m_riemann_solvers.fpp — verify consistency of idx1 replacements across HLL/HLLC/HLLD, cylindrical/geometry sections, and GPU/OpenACC directive scopes.
  • Correctness and normalization of nbub_L/nbub_R accumulation and usages.
  • ptilde and avg_state == 2 pressure adjustment logic and interactions with star-state calculations.
  • .github/workflows/phoenix/submit-bench.sh — confirm the SLURM time increase meets CI policy.

Suggested labels

Review effort 3/5, size:M

Suggested reviewers

  • wilfonba

Poem

🐰 I hopped through loops and index lines,
I nudged a script to buy more time.
I tidied bubbles, kept them neat,
Recentered steps with nimble feet.
A twitch, a thump — the solver's prime. 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Description check — ⚠️ Warning: PR description lacks required sections and contains conflicting information about scope and changes. Resolution: add a complete Description section, specify the Type of change (currently only states 'Documentation' without a checkbox), include a 'How Has This Been Tested?' section with test details, and reconcile the user description (one-line doc fix) with the CodeAnt-AI description (major Riemann solver refactor).
  • Title check — ❓ Inconclusive: the title 'CI test' is vague and generic, using non-descriptive terms that do not convey meaningful information about the actual changeset. Resolution: update the title to reflect the primary changes, such as 'Fix m_bubbles documentation and centralize Riemann solver indexing' or a more specific summary of the main objectives.
✅ Passed checks (1 passed)
  • Docstring Coverage — ✅ Passed: no functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Comment @coderabbitai help to get the list of available commands and usage tips.


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file


codecov bot commented Nov 19, 2025

Codecov Report

❌ Patch coverage is 37.20930% with 27 lines in your changes missing coverage. Please review.
✅ Project coverage is 44.35%. Comparing base (76d6a5c) to head (2581ad7).
⚠️ Report is 1 commits behind head on master.

Files with missing lines:
  • src/simulation/m_riemann_solvers.fpp — 37.20% patch coverage (26 lines missing, 1 partial) ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1054      +/-   ##
==========================================
- Coverage   44.35%   44.35%   -0.01%     
==========================================
  Files          71       71              
  Lines       20589    20587       -2     
  Branches     1990     1993       +3     
==========================================
- Hits         9133     9132       -1     
+ Misses      10311    10310       -1     
  Partials     1145     1145              

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.


codeant-ai bot commented Nov 24, 2025

CodeAnt AI is running Incremental review

@codeant-ai codeant-ai bot added size:XS This PR changes 0-9 lines, ignoring generated files and removed size:XS This PR changes 0-9 lines, ignoring generated files labels Nov 24, 2025

codeant-ai bot commented Nov 24, 2025

CodeAnt AI Incremental review completed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/simulation/m_riemann_solvers.fpp (1)

2593-2601: Uninitialized nbub_L/nbub_R in ME4 bubble-variable flux lead to undefined results

In the model_eqns == 4 branch, the “Add advection flux for bubble variables” loop uses nbub_L and nbub_R in the flux computation, but these scalars are never assigned in this branch before use. They are only computed later in the model_eqns == 2 .and. bubbles_euler path. This yields undefined values (compiler‑dependent garbage) in the bubble-variable flux whenever ME4+bubbles is active.

Please initialize nbub_L and nbub_R in the ME4 block before the loop (e.g., by computing them from the same bubble-state data that should define number density, or by refactoring the nbub computation from the ME2-bubbles path into a shared helper and calling it here as well).
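The suggested shared helper can be sketched outside Fortran. A hedged Python analogue of the number-density reconstruction, n = (3/(4π)) · α / Σ wᵢ·R0ᵢ³, as used in the ME2-bubbles path (names and signature are illustrative, not MFC's API):

```python
import math

def compute_nbub(alpha_bubble, R0, weights):
    """Sketch of the proposed shared helper: reconstruct bubble number
    density as n = (3/(4*pi)) * alpha / sum(w_i * R0_i**3), mirroring
    the ME2-bubbles path. Names and signature are illustrative."""
    moment = sum(w * r**3 for r, w in zip(R0, weights))
    return (3.0 / (4.0 * math.pi)) * alpha_bubble / moment

# Both the ME2 and ME4 branches would initialize nbub via the helper
# before any flux computation, so the scalars are never read unset.
nbub_L = compute_nbub(0.1, [1.0, 2.0], [0.5, 0.5])
```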

🧹 Nitpick comments (1)
src/simulation/m_riemann_solvers.fpp (1)

2657-2663: Directional indexing refactor via idx1/idxi looks correct and consistent

The introduction of idx1 (normal-velocity index) and idxi (component index) in the HLLC bubble branches keeps all uses aligned with dir_idx(1) and dir_idx(i) while improving clarity of normal vs tangential components. idx1 is declared private in GPU regions and is set from dir_idx(1) before first use in each loop, so there’s no cross-iteration contamination. Cylindrical/geometric flux updates that were switched to idx1 also remain consistent with the existing dir_idx/dir_flg conventions.

No functional issues spotted; this refactor looks safe.

Also applies to: 2868-2896, 2956-2973, 2990-3014, 3050-3086, 3096-3376

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41c5a96 and 3df8ccb.

📒 Files selected for processing (2)
  • .github/workflows/phoenix/submit-bench.sh (1 hunks)
  • src/simulation/m_riemann_solvers.fpp (10 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .github/workflows/phoenix/submit-bench.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Build & Publish

Comment on lines 2772 to 2785
  nbub_L = qL_prim_rs${XYZ}$_vf(j, k, l, n_idx)
  nbub_R = qR_prim_rs${XYZ}$_vf(j + 1, k, l, n_idx)
else
-   nbub_L_denom = 0._wp
-   nbub_R_denom = 0._wp
+   nbub_L = 0._wp
+   nbub_R = 0._wp
    $:GPU_LOOP(parallelism='[seq]')
    do i = 1, nb
-     nbub_L_denom = nbub_L_denom + (R0_L(i)**3._wp)*weight(i)
-     nbub_R_denom = nbub_R_denom + (R0_R(i)**3._wp)*weight(i)
+     nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
+     nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
    end do
-   nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
-   nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+   nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+   nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
end if

⚠️ Potential issue | 🟡 Minor

Guard division by nbub_L/nbub_R when reconstructing number density

In the ME2+bubbles avg_state == 2 path, nbub_L and nbub_R are computed via a weighted sum of R0^3 and then used as denominators:

nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R

If the R0^3 * weight sums are zero or extremely small, this will generate NaNs/inf and destabilize the solver.

Recommend guarding with max(..., sgm_eps) (as done elsewhere in this file for similar normalizations), e.g.:

- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
- nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
+ nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
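The effect of the proposed clamp can be illustrated with a minimal Python sketch (SGM_EPS here is a placeholder value; MFC's actual sgm_eps is defined in the code base):

```python
import math

SGM_EPS = 1e-16  # placeholder; MFC's actual sgm_eps is defined elsewhere

def nbub_guarded(alpha, moment):
    # Clamped denominator keeps the number-density reconstruction finite
    # when the weighted R0^3 moment vanishes (no bubbles in the cell).
    return (3.0 / (4.0 * math.pi)) * alpha / max(moment, SGM_EPS)

# zero moment: the guarded form stays finite instead of dividing by zero
assert math.isfinite(nbub_guarded(0.1, 0.0))
# well-resolved moment: the clamp is inactive and the value is unchanged
assert nbub_guarded(0.1, 4.5) == (3.0 / (4.0 * math.pi)) * 0.1 / 4.5
```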
🤖 Prompt for AI Agents
In src/simulation/m_riemann_solvers.fpp around lines 2772 to 2785, the code
divides by nbub_L and nbub_R when reconstructing number density in the
avg_state==2 path, which can be zero or tiny; change the denominators to use a
guarded value (e.g. denom_L = max(nbub_L, sgm_eps) and denom_R = max(nbub_R,
sgm_eps) using the same sgm_eps used elsewhere in this file) and divide by those
guarded values so the assignments use denom_L and denom_R to avoid NaN/inf.


@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

1 issue found across 2 files (reviewed changes from recent commits).

Prompt for AI agents (all 1 issues)

Understand the root cause of the following 1 issues and fix them.


<file name="src/simulation/m_riemann_solvers.fpp">

<violation number="1" location="src/simulation/m_riemann_solvers.fpp:2783">
`nbub_L`/`nbub_R` store the weighted R0^3 sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with `sgm_eps` (as elsewhere in this file) before performing the normalization.</violation>
</file>

Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR

- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
- nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L

@cubic-dev-ai cubic-dev-ai bot Nov 24, 2025


nbub_L/nbub_R store the weighted R0^3 sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with sgm_eps (as elsewhere in this file) before performing the normalization.

Prompt for AI agents
Address the following comment on src/simulation/m_riemann_solvers.fpp at line 2783:

<comment>`nbub_L`/`nbub_R` store the weighted \(R_0^3\) sums and can legitimately be zero when no bubbles exist, so dividing by them directly will produce NaN/Inf and destabilize the ME2+bubbles solver. Clamp those denominators with `sgm_eps` (as elsewhere in this file) before performing the normalization.</comment>

<file context>
@@ -2771,15 +2772,16 @@ contains
-                                            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L_denom
-                                            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R_denom
+
+                                            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+                                            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
                                         end if
</file context>
Suggested change
- nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
+ nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)


codeant-ai bot commented Nov 24, 2025

CodeAnt AI is running Incremental review

@codeant-ai codeant-ai bot added size:L This PR changes 100-499 lines, ignoring generated files and removed size:XS This PR changes 0-9 lines, ignoring generated files labels Nov 24, 2025

codeant-ai bot commented Nov 24, 2025

CodeAnt AI Incremental review completed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
src/simulation/m_riemann_solvers.fpp (1)

2760-2775: Guard division by nbub_L/nbub_R when reconstructing bubble number density

In the avg_state == 2 ME2+bubbles path (non-qbmm, adv_n == .false.), nbub_L and nbub_R are built as weighted sums of R0^3 and then used directly as denominators:

nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R

If the R0^3 * weight sums vanish or are extremely small (e.g., no bubbles in a cell), this will generate Inf/NaN and can destabilize the ME2+bubbles solver.

Clamp the denominators with sgm_eps (as elsewhere in this file for similar normalizations), e.g.:

-        do i = 1, nb
-            nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
-            nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
-        end do
-
-        nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
-        nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
+        do i = 1, nb
+            nbub_L = nbub_L + (R0_L(i)**3._wp)*weight(i)
+            nbub_R = nbub_R + (R0_R(i)**3._wp)*weight(i)
+        end do
+
+        nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
+        nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)

This keeps the normalization consistent with the rest of the module and prevents NaN/Inf when the moment sum is zero.

🧹 Nitpick comments (2)
src/simulation/m_riemann_solvers.fpp (2)

2930-2976: Bubble-pressure correction for avg_state == 2 is numerically safe

The new avg_state == 2 block that adjusts pres_L/pres_R using PbwR3*bar, R3*bar, and R3V2*bar is guarded by checks on alpha_(L/R)(num_fluids) and R3(L/R)bar < small_alf, so the divisions by R3Lbar/R3Rbar only occur when those moments are well-resolved. That avoids obvious divide-by-zero issues and keeps the ptilde-style correction well behaved.

If you touch this again, consider pulling the repeated pattern into a small helper to reduce duplication across left/right.


3334-3551: HLLC 5‑equation refactor maintains structure but remains hard to maintain

The refactored 5‑equation HLLC path (wave‑speed 2 logic, xi_* factors, fluxes for mass/momentum/energy/species, and cylindrical flux_gsrc_rs) still matches the standard Einfeldt/Toro-style HLLC structure and consistently uses dir_idx(1) and xi_L/xi_R across all fluxes. I don’t see new correctness issues here.

That said, this block is extremely long and duplicated in spirit across ME3/ME4/ME2; extracting common helpers for computing s_L/s_R/s_S, xi_*, and generic flux assembly would make future changes less error‑prone.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3df8ccb and 1844a02.

📒 Files selected for processing (1)
  • src/simulation/m_riemann_solvers.fpp (26 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Self Hosted (gpu, omp, gt)
  • GitHub Check: Self Hosted (gpu, acc, gt)
  • GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
  • GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
  • GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
  • GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
  • GitHub Check: Georgia Tech | Phoenix (NVHPC) (cpu)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Build & Publish
🔇 Additional comments (1)
src/simulation/m_riemann_solvers.fpp (1)

1952-1958: Bubble-related locals and GPU private lists look consistent

The new bubble scalars (nbub_L/R, ptilde_L/R, PbwR3*, R3*) are correctly declared once and added to the relevant GPU_PARALLEL_LOOP private lists in the ME3, ME4, and ME2+bubbles paths, so each thread gets its own copy and there’s no shared-state race. I don’t see issues with these additions.

Also applies to: 1997-1997, 2405-2405, 2649-2649


codeant-ai bot commented Nov 25, 2025

CodeAnt AI is running Incremental review

@codeant-ai codeant-ai bot added size:L This PR changes 100-499 lines, ignoring generated files and removed size:L This PR changes 100-499 lines, ignoring generated files labels Nov 25, 2025

codeant-ai bot commented Nov 25, 2025

CodeAnt AI Incremental review completed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
src/simulation/m_riemann_solvers.fpp (1)

2651-2837: Guard nbub_L/nbub_R normalization in ME2+bubbles avg_state==2 and confirm new p̃/flux logic

  • In the ME2 + bubbles_euler/avg_state == 2 path, nbub_L/nbub_R are first accumulated as weighted sums of R0^3 and then used as denominators:

nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R

If the weighted R0^3 sum vanishes or becomes tiny (no bubbles or a degenerate distribution), this produces NaN/Inf and can destabilize the ME2+bubbles solver. The same concern was raised in a previous review.

  • Recommend clamping the denominators with sgm_eps (as done elsewhere for similar normalizations) so the number-density reconstruction remains bounded, e.g.:

-                                            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/nbub_L
-                                            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/nbub_R
+                                            nbub_L = (3._wp/(4._wp*pi))*qL_prim_rs${XYZ}$_vf(j, k, l, E_idx + num_fluids)/max(nbub_L, sgm_eps)
+                                            nbub_R = (3._wp/(4._wp*pi))*qR_prim_rs${XYZ}$_vf(j + 1, k, l, E_idx + num_fluids)/max(nbub_R, sgm_eps)
  • The new avg_state == 2 pressure adjustments using PbwR3*/R3* and R3V2*/R3* already guard on alpha_*(num_fluids) and R3* < small_alf, so those divisions look numerically safe.
  • The updated bubble-variable advection fluxes that now scale by nbub_L/nbub_R in ME2 and ME4 are consistent with advecting per-bubble quantities, but once the above normalization is clamped, it would be good to re-run the ME2+bubbles regression cases to confirm no unintended change in effective bubble-number transport.

Also applies to: 2933-3055

🧹 Nitpick comments (1)
src/simulation/m_riemann_solvers.fpp (1)

1954-2355: HLLC ME3 directional-index refactor and new bubble accumulators

  • The additions of nbub_L, nbub_R, PbwR3Lbar, PbwR3Rbar and the extended GPU private list are local to s_hllc_riemann_solver and not used in the ME3 branch itself, so they won’t change current ME3 behavior. No issues there.
  • The rewrites of the elastic-wave contact speed s_S, xi_L, xi_R, vel_K_Star, and the associated mass/momentum/energy fluxes to use vel_L(dir_idx(1))/vel_R(dir_idx(1)) and dir_idx(i) are algebraically consistent with the existing HLL/HLLC patterns and should preserve correctness across sweep directions.
  • In the elastic energy-flux correction (flux_ene_e), outer factors now use vel_L(dir_idx(i))/vel_R(dir_idx(i)) but the inner (s_S - vel_L(i)) and (s_L - vel_L(i)) terms still index with plain i. If i is meant to be the permuted component index (via dir_idx), consider switching those to vel_L(dir_idx(1)) or vel_L(dir_idx(i)) for full directional consistency, or confirm that the current mixed indexing is intentional.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1844a02 and 2581ad7.

📒 Files selected for processing (1)
  • src/simulation/m_riemann_solvers.fpp (26 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/simulation/m_riemann_solvers.fpp
src/simulation/**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with $:GPU_ROUTINE(function_name='...', parallelism='[seq]') immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from src/common/include/parallel_macros.fpp instead
Wrap tight loops with $:GPU_PARALLEL_FOR(private='[...]', copy='[...]') macro; add collapse=n for safe nested loop merging
Declare loop-local variables with private='[...]' in GPU parallel loop macros
Allocate large arrays with managed or move them into a persistent $:GPU_ENTER_DATA(...) region at start-up
Do not place stop or error stop inside device code
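
A sketch of how these Fypp GPU macros are typically combined, under the assumption that `$:GPU_PARALLEL_FOR` expands to a directive placed immediately before the loop nest (the loop bounds and array names here are hypothetical; the macros themselves come from src/common/include/parallel_macros.fpp):

```fortran
#:include 'macros.fpp'

! Hypothetical tight loop over a 3D field: loop-local scalars are
! declared private and the three nested loops are safely collapsed.
$:GPU_PARALLEL_FOR(private='[rho_l, vel_l]', collapse=3)
do l = 0, p
    do k = 0, n
        do j = 0, m
            rho_l = q_cons_vf(1)%sf(j, k, l)
            vel_l = q_cons_vf(2)%sf(j, k, l)/rho_l
            flux_vf(2)%sf(j, k, l) = rho_l*vel_l*vel_l
        end do
    end do
end do
```

Note that no raw `!$acc` or `!$omp` directive appears: the Fypp macro is what lets the same source target OpenACC and OpenMP backends.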

Files:

  • src/simulation/m_riemann_solvers.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate
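
Taken together, the Fypp conventions above might look like this in a module (the module, routine, and array names are hypothetical):

```fortran
#:include 'macros.fpp'

module m_example_storage

    implicit none

    private; public :: s_initialize_storage, s_finalize_storage

    real, allocatable, dimension(:, :, :) :: q_work

contains

    subroutine s_initialize_storage(m, n, p)
        integer, intent(in) :: m, n, p

        @:ASSERT(m > 0 .and. n > 0 .and. p > 0, 'grid extents must be positive')

        ! Device-aware allocation instead of plain Fortran allocate
        @:ALLOCATE(q_work(0:m, 0:n, 0:p))
    end subroutine s_initialize_storage

    subroutine s_finalize_storage()
        @:DEALLOCATE(q_work)
    end subroutine s_finalize_storage

end module m_example_storage
```

CMake transpiles this .fpp source to .f90, so the @: and #: lines never reach the Fortran compiler directly.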

Files:

  • src/simulation/m_riemann_solvers.fpp
🧠 Learnings (11)
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:16.684Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.684Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to **/*.{fpp,f90} : Use `private` declaration followed by explicit `public` exports in modules

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
📚 Learning: 2025-11-24T21:50:46.879Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.879Z
Learning: Applies to **/*.{fpp,f90} : Favor array operations over explicit loops when possible; use `collapse=N` directive to optimize nested loops

Applied to files:

  • src/simulation/m_riemann_solvers.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
  • GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
  • GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Self Hosted (cpu, none, frontier)
  • GitHub Check: Self Hosted (gpu, omp, gt)
  • GitHub Check: Self Hosted (gpu, omp, frontier)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Self Hosted (gpu, acc, gt)
  • GitHub Check: Self Hosted (cpu, none, gt)
  • GitHub Check: Self Hosted (gpu, acc, frontier)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Build & Publish
🔇 Additional comments (1)
src/simulation/m_riemann_solvers.fpp (1)

3337-3555: 5‑equation HLLC directional indexing and geometrical source fluxes

  • The replacements of raw component indices with dir_idx(1)/dir_idx(i) and dir_flg(dir_idx(i)) in wave speeds, mass/momentum/energy fluxes, volume-fraction/chemistry fluxes, and cylindrical/3D geometric source terms appear internally consistent: normal contributions always use dir_idx(1), tangential ones use dir_idx(2:3), and the same pattern is mirrored in flux_gsrc_rs${XYZ}.
  • Low‑Mach corrections and pcorr terms are still multiplied by the appropriate directional flags, so the refactor should preserve existing behavior across sweeps.
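
As a hedged illustration of the indexing pattern this comment describes (not the actual MFC code), a direction-agnostic momentum flux could be written as:

```fortran
! Hypothetical sweep setup: dir_idx maps the sweep-normal and tangential
! components, and dir_flg is 1 for the normal direction, 0 otherwise.
! For an x-sweep: dir_idx = (/1, 2, 3/), dir_flg = (/1., 0., 0./)

do i = 1, num_dims
    ! The advecting velocity always comes from dir_idx(1); the pressure
    ! contribution is only added along the sweep-normal component.
    flux_mom(dir_idx(i)) = rho_avg*vel_avg(dir_idx(1))*vel_avg(dir_idx(i)) &
                           + dir_flg(dir_idx(i))*pres_avg
end do
```

Swapping the contents of dir_idx/dir_flg per sweep then reuses one flux expression for the x-, y-, and z-directions, which is the consistency property the review checks for.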

@sbryngelson sbryngelson enabled auto-merge (squash) November 25, 2025 20:13
@sbryngelson sbryngelson disabled auto-merge November 25, 2025 20:14
@sbryngelson sbryngelson merged commit 11375f3 into MFlowCode:master Nov 25, 2025
55 of 73 checks passed
@sbryngelson sbryngelson deleted the test-ci branch November 25, 2025 20:14
@coderabbitai coderabbitai bot mentioned this pull request Dec 5, 2025

Labels

Review effort 1/5 · size:L (This PR changes 100-499 lines, ignoring generated files)

2 participants