Add LBM solvers for tortuosity and permeability #15
Conversation
Add Taichi-based Lattice Boltzmann Method solvers as an alternative to the existing Julia/FD backend:

- `tortuosity_lbm()`: D3Q7 BGK diffusion solver for effective diffusivity and tortuosity (Fick's law)
- `permeability_lbm()`: D3Q19 MRT flow solver for absolute permeability (Darcy's law), with optional physical unit conversion (m², mD)

Key implementation details:

- All Taichi kernels are pre-compiled once in `_compile_step_kernels()` and stored as instance attributes, avoiding per-step JIT overhead
- Taichi is lazily initialized with GPU auto-detect and CPU fallback
- Diffusive flux is computed via the concentration gradient at the midplane (distribution moments are unreliable at Dirichlet BC faces)
- The public API uses the PoreSpy convention (True = pore); internal solvers use the inverted convention (1 = solid)

New modules:

- `_lbm/_taichi_helpers.py`: lazy Taichi init
- `_lbm/_lattice.py`: D3Q7/D3Q19 lattice constants and MRT matrices
- `_lbm/_diffusion_solver.py`: D3Q7 BGK solver
- `_lbm/_flow_solver.py`: D3Q19 MRT solver
- `_permeability.py`: `permeability_lbm()` public API

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
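As a rough illustration of what the D3Q7 lattice constants in `_lbm/_lattice.py` might look like: the weight choice below (w0 = 1/4, wi = 1/8, implying cs² = 1/4) is one common convention for diffusion LBM and is an assumption here, not necessarily what the module actually uses.

```python
import numpy as np

# D3Q7: a rest velocity plus the six axis-aligned neighbors.
D3Q7_E = np.array(
    [
        [0, 0, 0],
        [1, 0, 0], [-1, 0, 0],
        [0, 1, 0], [0, -1, 0],
        [0, 0, 1], [0, 0, -1],
    ],
    dtype=np.int32,
)

# Assumed weights: 1/4 for the rest velocity, 1/8 for each neighbor.
# They sum to 1, and the second moment gives the lattice sound speed.
D3Q7_W = np.array([1 / 4] + [1 / 8] * 6)

# cs^2 = sum_i w_i * e_ix^2 along any axis (here x).
cs2 = float((D3Q7_W * D3Q7_E[:, 0] ** 2).sum())
```

Symmetry of the velocity set (every direction has its opposite) is what makes simple bounce-back boundaries work on this lattice.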
…olutions

Replace basic positive/finite checks with analytical validation tests for four duct geometries: circular (cylinder), parallel plates, rectangular, and equilateral-triangle cross-sections. Each test compares the LBM flow rate at the midplane against the closed-form Hagen-Poiseuille / Boussinesq solution. The dp/dx is measured in the interior (20%-80%) to avoid the artificial pressure jump caused by the equilibrium pressure BC at the inlet/outlet faces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
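For the parallel-plate case, the closed-form reference such a test compares against can be sketched as follows. This is a hedged illustration using plane Poiseuille flow; the helper names are made up, not taken from the test suite.

```python
import numpy as np

def plates_velocity(y, h, G, mu):
    """Plane Poiseuille profile between plates at y = 0 and y = h,
    driven by pressure gradient G = -dp/dx."""
    return G / (2.0 * mu) * y * (h - y)

def plates_flow_rate(h, G, mu):
    """Closed-form flow rate per unit width: Q = G * h**3 / (12 * mu)."""
    return G * h**3 / (12.0 * mu)

# Numerically integrating the profile recovers the closed form; an LBM
# midplane flow rate is held to the same reference in the tests.
h, G, mu = 1.0, 2.0, 0.5
y = np.linspace(0.0, h, 20001)
u = plates_velocity(y, h, G, mu)
Q_num = float(np.sum((u[1:] + u[:-1]) / 2.0 * np.diff(y)))  # trapezoid rule
Q_ref = plates_flow_rate(h, G, mu)
```

Measuring dp/dx away from the inlet/outlet faces, as the commit describes, matters because the equilibrium pressure BC distorts the gradient in the first and last few lattice planes.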
Merge the finalize kernel (F→rho,v,f copy) into the collision kernel, reducing GPU memory passes from 4 to 3 per time step. This saves one full-field read of 19 floats per cell, yielding ~1.5x speedup on Metal GPU (180 → 265 MLUPS at 100³+).

Other improvements to the solver class:

- Extract _build_relaxation_diag() with annotated moment names
- Split __init__ into _alloc_fields, _load_constants, _init_bc_state
- Import D3Q19_LR from _lattice instead of hardcoding
- Remove dispatch wrapper methods; step() calls kernels directly
- Rename _kern_* to descriptive names (finalize_collide, stream, bc_x)
- Simplify _apply_bc with if/elif instead of dict-of-lambdas
- Align lattice constant formatting in _lattice.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
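The MLUPS figures quoted above (million lattice-site updates per second) follow directly from domain size, step count, and wall time. A minimal sketch of the arithmetic; the 3.77 s timing below is illustrative, back-solved from the quoted throughput rather than measured.

```python
def mlups(shape, n_steps, elapsed_s):
    """Million lattice-site updates per second for a dense 3D domain."""
    nx, ny, nz = shape
    return nx * ny * nz * n_steps / elapsed_s / 1e6

# e.g. stepping a 100^3 domain 1000 times in ~3.77 s gives ~265 MLUPS,
# the post-optimization figure quoted for the flow solver.
rate = mlups((100, 100, 100), 1000, 3.77)
```

Because LBM steps are memory-bandwidth bound, removing one full-field read per step is exactly the kind of change that moves this number.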
Same optimization as the D3Q19 flow solver: merge the finalize kernel (G→g,c copy) into the BGK collision kernel, reducing GPU memory passes from 4 to 3 per time step. This eliminates one full-field read of 7 floats per cell, yielding ~1.5x speedup (490 → 750 MLUPS at 100³+).

Also applies the same readability cleanup as the flow solver:

- Split __init__ into _alloc_fields, _load_constants, _init_bc_state
- Import D3Q7_LR from _lattice instead of hardcoding
- Remove dispatch wrapper methods; step() calls kernels directly
- Rename _kern_* to descriptive names (finalize_collide, stream, bc_x)
- Simplify _apply_bc with if/elif instead of dict-of-lambdas

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduce a two-tier public API:

- `poromics.simulation` exposes `TransientDiffusion` and `TransientFlow` solver classes that accept SI units (D in m²/s, nu in m²/s, voxel_size in m) and handle lattice-unit conversion internally.
- `poromics._metrics` pipelines (`tortuosity_lbm`, `permeability_lbm`) now use these solvers instead of internal entry-point functions.

Other changes:

- Centralize lattice params into frozen dataclasses (D3Q7Params, D3Q19Params) in _lattice.py with a shared axis_to_face mapping
- Unify result classes: SimulationResult base, TortuosityResult (shared by FD and LBM), PermeabilityResult (moved from _permeability.py)
- Make BC values configurable constructor args (c_in/c_out, rho_in/rho_out)
- Remove solve_diffusion/solve_flow entry-point functions from internal solver modules, keeping only kernel classes
- Delete _permeability.py (absorbed into _metrics.py)
- Update tests to use the new API (voxel_size=1.0 for lattice-unit tests)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
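The SI-to-lattice conversion these solver classes handle internally can be sketched like this. It is a hedged illustration: the field name `D_lu` and its value 0.25 are assumptions inferred from the `dt = D_lu * dx² / D` formula discussed elsewhere in this review, not the actual contents of the frozen dataclass.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class D3Q7Params:
    # Illustrative lattice diffusivity; the real dataclass in _lattice.py
    # may define this differently and carry more fields.
    D_lu: float = 0.25

def seconds_per_step(params: D3Q7Params, D: float, voxel_size: float) -> float:
    """Physical time per lattice step: dt = D_lu * dx**2 / D."""
    if D <= 0 or voxel_size <= 0:
        raise ValueError("D and voxel_size must be positive")
    return params.D_lu * voxel_size**2 / D

# With D = 1e-9 m^2/s and 1 micron voxels, one lattice step spans ~0.25 ms.
dt = seconds_per_step(D3Q7Params(), D=1e-9, voxel_size=1e-6)
```

Folding the conversion into the constructor is what lets the public classes take SI arguments while the kernels stay in pure lattice units.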
…th Taichi

Taichi statically links LLVM 15 and Julia dynamically links LLVM 18; loading both in the same process triggers a fatal abort from LLVM's cl::Option uniqueness assertion.

This runs tortuosity_fd in a persistent Julia subprocess communicating via pickle tempfiles over a dedicated pipe fd (Julia clobbers stdin on init). The worker stays alive across calls so Julia's JIT cache is reused (~0.16 s warm overhead vs ~16 s cold start).

Controlled by the POROMICS_JULIA_SUBPROCESS env var: "1" (default) uses subprocess isolation, "0" runs in-process with a fail-fast Taichi check.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
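A minimal sketch of the pickle-tempfile exchange described above. The helper names and payload shape are illustrative, not the actual worker protocol; in the real code the parent additionally signals the worker over a dedicated pipe fd and waits for an acknowledgment.

```python
import os
import pickle
import tempfile

def write_request(payload):
    """Serialize a request dict to a temp file; return its path."""
    fd, path = tempfile.mkstemp(suffix=".pkl")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(payload, f)
    return path

def read_response(path):
    """Deserialize the worker's pickled response from a file."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip: what the parent writes is what the worker reads.
req_path = write_request({"func": "tortuosity_fd", "axis": 0})
echoed = read_response(req_path)
os.unlink(req_path)
```

Using files rather than the pipe itself for payloads keeps the pipe free for short control messages, which is what makes the "done" acknowledgment protocol workable.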
Add usage examples for tortuosity_lbm, permeability_lbm, and the TransientDiffusion/TransientFlow solver classes. Update intro, install notes, result object docs, and roadmap to reflect current capabilities. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update all docs to cover the full feature set: tortuosity (Julia FD + Taichi LBM), permeability (Taichi LBM), and the simulation subpackage. Add result object tables, solver class examples, dependency notes, API reference for poromics.simulation, and acknowledgments sections. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds Taichi-based LBM solvers (D3Q7 diffusion, D3Q19 flow), a simulation subpackage (TransientDiffusion, TransientFlow), isolates the Julia FD solver into a persistent subprocess to avoid LLVM symbol conflicts, and exposes convenience metrics (…).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Metrics as poromics._metrics
    participant Taichi as ensure_taichi()
    participant Solver as LBM Solver
    participant Kernels as Taichi Kernels
    User->>Metrics: tortuosity_lbm(im, axis, D, voxel_size, ...)
    Metrics->>Taichi: ensure_taichi() [GPU or CPU]
    Taichi-->>Metrics: initialized Taichi module (ti)
    Metrics->>Solver: instantiate _D3Q7Solver/_D3Q19Solver with ti
    Solver->>Solver: alloc fields, load constants
    Solver->>Kernels: _init_lattice(), _init_distributions()
    loop steps
        Solver->>Kernels: finalize+collide()
        Kernels->>Kernels: compute equilibrium & relax
        Solver->>Kernels: streaming()
        Solver->>Kernels: apply BCs
    end
    Solver->>Metrics: extract fields (concentration/velocity)
    Metrics->>User: TortuosityResult / PermeabilityResult
```

```mermaid
sequenceDiagram
    participant User
    participant Metrics as poromics._metrics
    participant Subproc as Julia Subprocess
    participant Julia as Julia/Tortuosity.jl
    User->>Metrics: tortuosity_fd(im, axis, ...)
    alt POROMICS_JULIA_SUBPROCESS enabled
        Metrics->>Subproc: write payload (pickle) + notify via pipe fd
        Subproc->>Subproc: _ensure_julia()
        Subproc->>Julia: initialize Tortuosity.jl
        Subproc->>Julia: run TortuositySimulation.solve()
        Julia-->>Subproc: results
        Subproc->>Metrics: write results file + signal "done"
        Metrics->>Metrics: read results (pickle) and return
    else In-process
        Metrics->>Metrics: _ensure_julia() in-process
        Metrics->>Julia: call TortuositySimulation.solve()
        Julia-->>Metrics: results
    end
    Metrics-->>User: TortuosityResult
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Generated by `make dependencies` and not consumed by CI or install. Users can regenerate locally via `make dependencies`. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace unresolvable `juliacall.ModuleValue` type hints with `Any` (juliacall has no type stubs) and make `D` parameter explicitly Optional. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Julia's runtime and juliacall print verbose output to stdout during initialization. The worker uses stdout as its protocol channel (writing "done\n" after each request). Julia's init output was being read by the parent as premature acknowledgments, causing it to read the output pickle file before the worker had written results — triggering EOFError on the second and subsequent tortuosity_fd calls. Fix: temporarily redirect fd 1 (stdout) to fd 2 (stderr) during Julia initialization, keeping the protocol channel clean. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
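The fd-level redirect described in the fix can be sketched like this (a hedged illustration; the real code may structure the save/restore differently): duplicate fd 1, point it at fd 2 while the noisy initialization runs, then restore it.

```python
import contextlib
import os
import sys

@contextlib.contextmanager
def stdout_to_stderr():
    """Temporarily make fd 1 (stdout) an alias of fd 2 (stderr)."""
    sys.stdout.flush()
    saved = os.dup(1)          # keep a copy of the real stdout fd
    try:
        os.dup2(2, 1)          # fd 1 now writes to wherever stderr goes
        yield
    finally:
        os.dup2(saved, 1)      # restore the original stdout
        os.close(saved)

# Anything the wrapped library writes to fd 1 here lands on stderr,
# keeping the stdout protocol channel free of init noise.
with stdout_to_stderr():
    pass  # e.g. initialize Julia / juliacall here
```

Operating at the fd level (rather than swapping `sys.stdout`) is what catches output from C and Julia code that bypasses Python's I/O layer entirely.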
Actionable comments posted: 10
🧹 Nitpick comments (5)
tests/test_lbm_diffusion.py (1)
23-27: Consider adding a descriptive message to the exception assertion.

The test correctly expects `RuntimeError` for fully solid images. Adding `match` could help verify the error message is meaningful.

♻️ Optional: Add error message matching

```diff
 def test_tau_lbm_no_pores():
     """Fully solid image should raise RuntimeError."""
     im = np.zeros((10, 10, 10), dtype=bool)
-    with pytest.raises(RuntimeError):
+    with pytest.raises(RuntimeError, match="(?i)pore|solid|porosity"):
         tortuosity_lbm(im, axis=0, voxel_size=1.0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_lbm_diffusion.py` around lines 23-27, the test test_tau_lbm_no_pores should assert the RuntimeError message to ensure it's meaningful: update the with pytest.raises(...) line to include a match argument that matches the error text thrown by tortuosity_lbm (e.g., match="no pore" or the exact message your tortuosity_lbm raises) so the test not only checks the exception type but also validates the error message content from tortuosity_lbm.

src/poromics/_lbm/_taichi_helpers.py (1)
21-26: Consider narrowing the exception scope for GPU initialization.

The broad `except Exception` may swallow meaningful errors unrelated to GPU availability (e.g., configuration issues, invalid Taichi settings). Consider catching more specific exceptions or at least logging the original exception for debugging.

♻️ Proposed refinement to preserve error context

```diff
 try:
     ti.init(arch=ti.gpu, default_fp=ti.f32)
     logger.info("Taichi initialized with GPU backend.")
-except Exception:
+except Exception as e:
     ti.init(arch=ti.cpu, default_fp=ti.f32)
-    logger.warning("No GPU backend available, falling back to CPU.")
+    logger.warning(f"GPU backend unavailable ({e}), falling back to CPU.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_lbm/_taichi_helpers.py` around lines 21-26, narrow the broad except by catching a more specific GPU-init failure (e.g., RuntimeError or a taichi-specific error if available) around the ti.init(arch=ti.gpu, ...) call, and preserve the original exception context by capturing it (e) and including it in the log (use logger.warning or logger.exception) before falling back to ti.init(arch=ti.cpu, ...); keep the existing logger.info and fallback behavior but replace the bare "except Exception" with a targeted exception type and include the error details in the log.

src/poromics/_lbm/_flow_solver.py (2)
124-135: Consider validating the `face` parameter.

`set_bc_rho` does not validate that `face` is one of the expected values. An invalid face string would silently add an entry to the dictionaries that is never used, or cause a `KeyError` in `_apply_bc` if the face wasn't initialized.

🛡️ Proposed validation

```diff
 def set_bc_rho(self, face, rho):
     """Set fixed-pressure BC on a domain face.

     Parameters
     ----------
     face : str
         One of 'x0', 'x1', 'y0', 'y1', 'z0', 'z1'.
     rho : float
         Density (pressure) value.
     """
+    if face not in self._bc_mode:
+        raise ValueError(f"face must be one of {list(self._bc_mode.keys())}, got {face!r}")
     self._bc_mode[face] = 1
     self._bc_rho[face] = float(rho)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_lbm/_flow_solver.py` around lines 124 - 135, The set_bc_rho method should validate the face parameter to prevent silent corruption or later KeyError in _apply_bc: check that face is one of the allowed names ('x0','x1','y0','y1','z0','z1') at the start of set_bc_rho (and raise a ValueError with a clear message if not); only then assign to self._bc_mode[face] and self._bc_rho[face] (keep converting rho to float as currently done). Also ensure any other code that initializes valid faces (where _bc_mode/_bc_rho are created) uses the same allowed-face set to keep behavior consistent with _apply_bc.
92-99: Consider making sparse partition size configurable.

The hardcoded `part = 3` for sparse storage may not be optimal for all domain sizes. For very large domains, a larger partition size might improve performance; for small domains, it adds overhead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_lbm/_flow_solver.py` around lines 92-99, the sparse partition size is hardcoded as part = 3 which reduces flexibility; make this configurable by adding a parameter (e.g., partition_size or part) to the FlowSolver class/constructor and use it in place of the literal 3 where you create the Taichi pointer/dense grid (the lines assigning part = 3 and calling ti.root.pointer(... (nx // part + 1, ny // part + 1, nz // part + 1)) and cell.dense(... (part, part, part)).place(self._rho, self._v, self._f, self._F)); ensure you validate the parameter (>0 and integer), provide a sensible default of 3 to preserve existing behavior, and update any callers/tests that instantiate the solver to accept the new optional argument.

src/poromics/_metrics.py (1)
134-157: Consider adding a timeout to prevent indefinite blocking.

If the Julia worker hangs without dying (e.g., an infinite loop in Julia code), `proc.stdout.readline()` at line 144 will block forever. Consider using `select` or a thread-based timeout for production robustness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_metrics.py` around lines 134 - 157, The _julia_call function can block forever on proc.stdout.readline(); modify _julia_call to perform a bounded wait for the worker ack (e.g., use select.select on proc.stdout fileno with a configurable timeout or spawn a short-lived thread that reads stdout with a join timeout) and if the timeout elapses, clean up in_path/out_path, terminate the Julia process (proc.kill()/proc.terminate()), and raise a TimeoutError with context; keep existing error-path behavior (reading proc.stderr and raising RuntimeError) when the process has exited, and ensure cleanup in the finally block still runs.
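The bounded-read suggestion above can be sketched as follows. This is a hedged illustration: `readline_with_timeout` is a hypothetical helper, not part of the codebase, and a real fix inside `_julia_call` would also terminate the worker and clean up the temp files on timeout.

```python
import select

def readline_with_timeout(stream, timeout_s):
    """Read one line from a pipe-backed stream, or raise TimeoutError.

    select() waits until the underlying fd is readable, so readline()
    below will not block indefinitely on a hung worker.
    """
    ready, _, _ = select.select([stream], [], [], timeout_s)
    if not ready:
        raise TimeoutError(f"worker did not respond within {timeout_s}s")
    return stream.readline()
```

One caveat of this pattern: `select` sees only the kernel buffer, so it pairs safely with line-oriented protocols like the worker's single `"done\n"` acknowledgment, but not with multi-line buffered reads.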
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/usage.md`:
- Around line 50-67: The walkthrough reuses the variable name result causing a
later visualization that expects result.c to fail; rename the tortuosity output
to a distinct symbol (e.g., tau_result from the tortuosity_lbm call) and update
any downstream visualization references to use tau_result.c (or make that
visualization block self-contained by using the local tau_result variable).
Specifically update the tortuosity example calling poromics.tortuosity_lbm to
assign its return to tau_result and change any subsequent uses of result.c to
tau_result.c so permeability_lbm's result remains a separate PermeabilityResult.
In `@src/poromics/_julia_worker.py`:
- Around line 30-35: In _run_tortuosity_fd validate the axis before using
["x","y","z"][axis] — check that axis is an int in 0..2 and if not raise a
ValueError with a clear message (e.g. "invalid axis: {axis}, expected 0,1,2");
then build axis_jl = jl.Symbol(["x","y","z"][axis]) as before. This ensures
_run_tortuosity_fd preserves a proper bad-argument ValueError (so
callers/subprocess mapping can detect bad input) rather than letting an
IndexError bubble up.
In `@src/poromics/_metrics.py`:
- Around line 1-6: Run the Python formatter on src/poromics/_metrics.py to
satisfy CI: execute `black src/poromics/_metrics.py` (or `make black`) and
commit the resulting changes; ensure the import block (the top-level imports in
_metrics.py) is reformatted according to Black's style so the file passes
formatting checks.
In `@src/poromics/simulation/_diffusion.py`:
- Around line 88-93: The loop in run() is iterating one too many times because
it uses range(n_steps + 1); change the loop to range(n_steps) so calling
run(n_steps=0) does nothing and the logged step numbers align with
_n_iterations. Specifically, update the for loop that calls self._solver.step()
and increments self._n_iterations (referencing run, _solver.step, _n_iterations,
n_steps, log_every, t_start) to iterate exactly n_steps times and keep the
logging guard (if step % log_every != 0: continue) as-is.
- Around line 45-56: Validate constructor inputs in __init__: check that the
provided image is 3D (e.g., np.asarray(im).ndim == 3) and raise ValueError if
not; ensure D > 0 and voxel_size > 0 and raise ValueError for non-positive
values before computing self._dt; keep the existing axis and pore-voxel checks,
then compute self._dt = _d3q7.D_lu * voxel_size ** 2 / D only after validation.
Reference the __init__ method, self._dt, self._axis and the image/sold
calculation to locate where to add these guards.
- Around line 146-164: The public flux method currently returns lattice-space
flux from _solver.compute_flux(axis) and may accept singleton/invalid axes; fix
flux by (1) validating the chosen axis (defaulting to self._axis) against the
simulation grid shape and raising ValueError if the axis length is 1 or out of
range, and (2) converting the lattice flux to SI by scaling the returned value
using the D_lu factor from self._d3q7 (i.e., apply the appropriate
multiplication by self._d3q7.D_lu) before returning; keep references to flux,
_solver.compute_flux, _d3q7.D_lu and _axis to locate the change.
In `@src/poromics/simulation/_flow.py`:
- Around line 151-158: The pressure() method returns specific pressure (p/ρ)
because it multiplies (rho_lu - self._rho_out) by cs²*(dx/dt)² but never
multiplies by a physical reference density; update pressure() to accept or use a
physical fluid density (e.g., a new attribute like self._rho_ref or a
constructor parameter) and multiply the current expression ((rho_lu -
self._rho_out) * _d3q19.cs2 * (self._voxel_size / self._dt)**2) by that density,
or alternatively change the method name/docstring to explicitly state it returns
p/ρ (specific pressure) using rho_lu, _rho_out, _d3q19.cs2, _voxel_size and _dt
so callers understand the units.
- Around line 91-95: The loop in run() uses range(n_steps + 1) which runs one
extra iteration and causes the logged step counter to lag; change the loop to
for step in range(n_steps): so _solver.step(), self._n_iterations += 1, and the
log gating (if step % log_every != 0) remain correct for zero and positive
n_steps, ensuring no overshoot and consistent logging for the existing symbols
_solver.step(), _n_iterations, step, n_steps, and log_every.
- Around line 46-59: Validate constructor inputs in __init__: convert im to
array and raise ValueError if im.ndim != 3, raise ValueError if nu <= 0, and
raise ValueError if voxel_size <= 0 before computing self._dt; keep the existing
axis check and solid computation but perform these guards early (reference
symbols: __init__, im -> np.asarray(im), nu, voxel_size, self._dt, solid) so
invalid geometries or non-positive viscosity/voxel sizes fail fast with clear
error messages.
In `@tests/test_lbm_permeability.py`:
- Around line 1-6: This file fails CI due to Black formatting; run the project's
formatter (e.g., `black tests/test_lbm_permeability.py` or `make black`) to
reformat the test file so imports and spacing match Black's style, then stage
the updated file and push the change; the change targets the test module that
imports numpy/pytest and poromics.simulation/poromics.permeability_lbm.
---
Nitpick comments:
In `@src/poromics/_lbm/_flow_solver.py`:
- Around line 124-135: The set_bc_rho method should validate the face parameter
to prevent silent corruption or later KeyError in _apply_bc: check that face is
one of the allowed names ('x0','x1','y0','y1','z0','z1') at the start of
set_bc_rho (and raise a ValueError with a clear message if not); only then
assign to self._bc_mode[face] and self._bc_rho[face] (keep converting rho to
float as currently done). Also ensure any other code that initializes valid
faces (where _bc_mode/_bc_rho are created) uses the same allowed-face set to
keep behavior consistent with _apply_bc.
- Around line 92-99: The sparse partition size is hardcoded as part = 3 which
reduces flexibility; make this configurable by adding a parameter (e.g.,
partition_size or part) to the FlowSolver class/constructor and use it in place
of the literal 3 where you create the Taichi pointer/dense grid (the lines
assigning part = 3 and calling ti.root.pointer(... (nx // part + 1, ny // part +
1, nz // part + 1)) and cell.dense(... (part, part, part)).place(self._rho,
self._v, self._f, self._F)); ensure you validate the parameter (>0 and integer),
provide a sensible default of 3 to preserve existing behavior, and update any
callers/tests that instantiate the solver to accept the new optional argument.
In `@src/poromics/_lbm/_taichi_helpers.py`:
- Around line 21-26: Narrow the broad except by catching a more specific
GPU-init failure (e.g., RuntimeError or taichi-specific error if available)
around the ti.init(arch=ti.gpu, ...) call, and preserve the original exception
context by capturing it (e) and including it in the log (use logger.warning or
logger.exception) before falling back to ti.init(arch=ti.cpu, ...); keep the
existing logger.info and fallback behavior but replace the bare "except
Exception" with a targeted exception type and include the error details in the
log.
In `@src/poromics/_metrics.py`:
- Around line 134-157: The _julia_call function can block forever on
proc.stdout.readline(); modify _julia_call to perform a bounded wait for the
worker ack (e.g., use select.select on proc.stdout fileno with a configurable
timeout or spawn a short-lived thread that reads stdout with a join timeout) and
if the timeout elapses, clean up in_path/out_path, terminate the Julia process
(proc.kill()/proc.terminate()), and raise a TimeoutError with context; keep
existing error-path behavior (reading proc.stderr and raising RuntimeError) when
the process has exited, and ensure cleanup in the finally block still runs.
In `@tests/test_lbm_diffusion.py`:
- Around line 23-27: The test test_tau_lbm_no_pores should assert the
RuntimeError message to ensure it's meaningful: update the with
pytest.raises(...) line to include a match argument that matches the error text
thrown by tortuosity_lbm (e.g., match="no pore" or the exact message your
tortuosity_lbm raises) so the test not only checks the exception type but also
validates the error message content from tortuosity_lbm.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8b5b32ae-8d4d-4d62-8beb-47479bbb4f08
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (20)
- .gitignore
- README.md
- docs/index.md
- docs/install.md
- docs/reference.md
- docs/usage.md
- pyproject.toml
- src/poromics/__init__.py
- src/poromics/_julia_worker.py
- src/poromics/_lbm/__init__.py
- src/poromics/_lbm/_diffusion_solver.py
- src/poromics/_lbm/_flow_solver.py
- src/poromics/_lbm/_lattice.py
- src/poromics/_lbm/_taichi_helpers.py
- src/poromics/_metrics.py
- src/poromics/simulation/__init__.py
- src/poromics/simulation/_diffusion.py
- src/poromics/simulation/_flow.py
- tests/test_lbm_diffusion.py
- tests/test_lbm_permeability.py
docs/usage.md (Outdated)
```python
import poromics

result = poromics.tortuosity_lbm(im, axis=1, D=1e-9, voxel_size=1e-6)
print(result.tau, result.D_eff)
```

## Permeability (LBM solver)

The permeability solver uses the D3Q19 MRT lattice Boltzmann method to solve creeping (Stokes) flow and extract permeability via Darcy's law.

```python
import numpy as np
import poromics

im_3d = np.random.rand(50, 50, 50) > 0.4  # 3D porous image
result = poromics.permeability_lbm(im_3d, axis=0, nu=1e-6, voxel_size=1e-6)
print(result.k_m2, result.k_mD)
```
Use distinct result names in the walkthrough.
This section rebinds result again, but the later visualization example still does result.c. Following the page top-to-bottom now leaves result as a PermeabilityResult, so that snippet will fail with AttributeError.
Suggested rename in this section

```diff
-result = poromics.tortuosity_lbm(im, axis=1, D=1e-9, voxel_size=1e-6)
-print(result.tau, result.D_eff)
+tau_result = poromics.tortuosity_lbm(im, axis=1, D=1e-9, voxel_size=1e-6)
+print(tau_result.tau, tau_result.D_eff)
@@
-result = poromics.permeability_lbm(im_3d, axis=0, nu=1e-6, voxel_size=1e-6)
-print(result.k_m2, result.k_mD)
+perm_result = poromics.permeability_lbm(im_3d, axis=0, nu=1e-6, voxel_size=1e-6)
+print(perm_result.k_m2, perm_result.k_mD)
```

Update the later visualization block to read from `tau_result.c` (or make that snippet self-contained).
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 50-50: Code block style
Expected: indented; Actual: fenced
(MD046, code-block-style)
[warning] 61-61: Code block style
Expected: indented; Actual: fenced
(MD046, code-block-style)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/usage.md` around lines 50 - 67, The walkthrough reuses the variable name
result causing a later visualization that expects result.c to fail; rename the
tortuosity output to a distinct symbol (e.g., tau_result from the tortuosity_lbm
call) and update any downstream visualization references to use tau_result.c (or
make that visualization block self-contained by using the local tau_result
variable). Specifically update the tortuosity example calling
poromics.tortuosity_lbm to assign its return to tau_result and change any
subsequent uses of result.c to tau_result.c so permeability_lbm's result remains
a separate PermeabilityResult.
```python
def _run_tortuosity_fd(im, axis, D, rtol, gpu, verbose):
    """Execute the Julia-backed tortuosity solver and return a plain dict."""
    import numpy as np
    jl, taujl = _ensure_julia()

    axis_jl = jl.Symbol(["x", "y", "z"][axis])
```
Preserve a proper bad-argument error for invalid axes.
`["x", "y", "z"][axis]` raises `IndexError` here. On the subprocess path, `src/poromics/_metrics.py` currently remaps unknown worker exceptions to a generic `RuntimeError`, so callers lose the expected bad-input signal. Please validate `axis` before dispatching, or preserve `ValueError` in the parent mapping.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/_julia_worker.py` around lines 30 - 35, In _run_tortuosity_fd
validate the axis before using ["x","y","z"][axis] — check that axis is an int
in 0..2 and if not raise a ValueError with a clear message (e.g. "invalid axis:
{axis}, expected 0,1,2"); then build axis_jl = jl.Symbol(["x","y","z"][axis]) as
before. This ensures _run_tortuosity_fd preserves a proper bad-argument
ValueError (so callers/subprocess mapping can detect bad input) rather than
letting an IndexError bubble up.
```python
for step in range(n_steps + 1):
    self._solver.step()
    self._n_iterations += 1
    if step % log_every != 0:
        continue
    elapsed = time.time() - t_start
```
run() advances one extra iteration.
range(n_steps + 1) means run(n_steps=0) still mutates the state once, and the logged step number stays off by one after that.
Keep the loop count aligned with the API

```diff
-        for step in range(n_steps + 1):
+        for step in range(1, n_steps + 1):
             self._solver.step()
             self._n_iterations += 1
             if step % log_every != 0:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_diffusion.py` around lines 88 - 93, The loop in
run() is iterating one too many times because it uses range(n_steps + 1); change
the loop to range(n_steps) so calling run(n_steps=0) does nothing and the logged
step numbers align with _n_iterations. Specifically, update the for loop that
calls self._solver.step() and increments self._n_iterations (referencing run,
_solver.step, _n_iterations, n_steps, log_every, t_start) to iterate exactly
n_steps times and keep the logging guard (if step % log_every != 0: continue)
as-is.
```python
def flux(self, axis=None):
    """Mean diffusive flux through the domain midplane.

    Uses Fick's law on the concentration gradient rather than
    distribution moments (which are unreliable at Dirichlet faces).

    Parameters
    ----------
    axis : int or None
        Axis along which to compute flux. Defaults to the
        axis used in the constructor.

    Returns
    -------
    J_mean : float
    """
    if axis is None:
        axis = self._axis
    return self._solver.compute_flux(axis)
```
Don't leak lattice flux through the physical-unit API.
_solver.compute_flux() returns lattice-space flux using _d3q7.D_lu, so this public SI-facing method currently returns the wrong scale. It also needs to reject invalid/singleton axes before delegating; otherwise a size-1 axis wraps to plane -1 in the underlying midplane calculation.
Scale the result and guard the axis

```diff
     def flux(self, axis=None):
         """Mean diffusive flux through the domain midplane.
@@
         """
         if axis is None:
             axis = self._axis
-        return self._solver.compute_flux(axis)
+        if axis not in (0, 1, 2):
+            raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
+        if self._solver.get_concentration().shape[axis] < 2:
+            raise ValueError(
+                f"axis {axis} must have at least 2 voxels to compute flux"
+            )
+        return self._solver.compute_flux(axis) * (self._voxel_size / self._dt)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_diffusion.py` around lines 146 - 164, The public
flux method currently returns lattice-space flux from _solver.compute_flux(axis)
and may accept singleton/invalid axes; fix flux by (1) validating the chosen
axis (defaulting to self._axis) against the simulation grid shape and raising
ValueError if the axis length is 1 or out of range, and (2) converting the
lattice flux to SI by scaling the returned value using the D_lu factor from
self._d3q7 (i.e., apply the appropriate multiplication by self._d3q7.D_lu)
before returning; keep references to flux, _solver.compute_flux, _d3q7.D_lu and
_axis to locate the change.
src/poromics/simulation/_flow.py (Outdated)
```python
def __init__(self, im, axis, nu, voxel_size, rho_in=1.0, rho_out=0.99,
             sparse=False):  # fmt: skip
    if axis not in (0, 1, 2):
        raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
    solid = (np.asarray(im) == 0).astype(np.int8)
    if solid.sum() == solid.size:
        raise RuntimeError("Image has no pore voxels.")

    self._axis = axis
    self._nu = nu
    self._voxel_size = float(voxel_size)
    self._rho_in = rho_in
    self._rho_out = rho_out
    self._dt = _d3q19.nu_lu * voxel_size ** 2 / nu
```
Fail fast on invalid geometry and viscosity inputs.
nu <= 0 or voxel_size <= 0 makes dt invalid, and non-3D images leak an internal unpack error from _D3Q19Solver. Please validate these at the public constructor boundary.
Minimal guardrails
```diff
 if axis not in (0, 1, 2):
     raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
-solid = (np.asarray(im) == 0).astype(np.int8)
+if nu <= 0:
+    raise ValueError(f"nu must be > 0, got {nu}")
+if voxel_size <= 0:
+    raise ValueError(f"voxel_size must be > 0, got {voxel_size}")
+im_arr = np.asarray(im)
+if im_arr.ndim != 3:
+    raise ValueError(f"im must be a 3D array, got shape {im_arr.shape}")
+solid = (im_arr == 0).astype(np.int8)
 if solid.sum() == solid.size:
     raise RuntimeError("Image has no pore voxels.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_flow.py` around lines 46 - 59, Validate constructor
inputs in __init__: convert im to array and raise ValueError if im.ndim != 3,
raise ValueError if nu <= 0, and raise ValueError if voxel_size <= 0 before
computing self._dt; keep the existing axis check and solid computation but
perform these guards early (reference symbols: __init__, im -> np.asarray(im),
nu, voxel_size, self._dt, solid) so invalid geometries or non-positive
viscosity/voxel sizes fail fast with clear error messages.
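The guardrails above can also live in a small standalone helper, which makes them easy to unit-test in isolation. A sketch assuming NumPy and the same error messages; `validate_flow_inputs` is a hypothetical name, not part of the package:

```python
import numpy as np

def validate_flow_inputs(im, axis, nu, voxel_size):
    """Fail-fast checks mirroring the guardrails suggested in the review."""
    if axis not in (0, 1, 2):
        raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
    if nu <= 0:
        raise ValueError(f"nu must be > 0, got {nu}")
    if voxel_size <= 0:
        raise ValueError(f"voxel_size must be > 0, got {voxel_size}")
    im_arr = np.asarray(im)
    if im_arr.ndim != 3:
        raise ValueError(f"im must be a 3D array, got shape {im_arr.shape}")
    solid = (im_arr == 0).astype(np.int8)
    if solid.sum() == solid.size:
        raise RuntimeError("Image has no pore voxels.")
    return im_arr, solid
```

Running these checks before computing dt guarantees the solver never sees a non-positive or non-3D configuration.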
src/poromics/simulation/_flow.py
Outdated
```python
for step in range(n_steps + 1):
    self._solver.step()
    self._n_iterations += 1
    if step % log_every != 0:
        continue
```
run() overshoots n_steps by one.
range(n_steps + 1) performs one step even when n_steps=0, and it makes the logged step counter lag the actual iteration count.
Keep the loop count aligned with the API
```diff
-for step in range(n_steps + 1):
+for step in range(1, n_steps + 1):
     self._solver.step()
     self._n_iterations += 1
     if step % log_every != 0:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_flow.py` around lines 91 - 95, The loop in run()
uses range(n_steps + 1) which runs one extra iteration and causes the logged
step counter to lag; change the loop to for step in range(n_steps): so
_solver.step(), self._n_iterations += 1, and the log gating (if step % log_every
!= 0) remain correct for zero and positive n_steps, ensuring no overshoot and
consistent logging for the existing symbols _solver.step(), _n_iterations, step,
n_steps, and log_every.
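The off-by-one is easy to see in isolation. A toy reproduction (the counter stands in for the solver; none of these names are from the package):

```python
def count_steps(n_steps, fixed=False):
    """Count iterations the way the two loop variants would."""
    iterations = 0
    steps = range(1, n_steps + 1) if fixed else range(n_steps + 1)
    for _ in steps:
        iterations += 1  # stands in for self._solver.step()
    return iterations

print(count_steps(0))                # 1 -- buggy variant steps once for n_steps=0
print(count_steps(0, fixed=True))    # 0
print(count_steps(100, fixed=True))  # 100
```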
```python
def pressure(self):
    """Gauge pressure field in Pa, shape (nx, ny, nz).

    Pressure is relative to the outlet face density.
    """
    rho_lu = self._solver.get_density()
    scale = _d3q19.cs2 * (self._voxel_size / self._dt) ** 2
    return (rho_lu - self._rho_out) * scale
```
pressure is specific pressure unless you add a density scale.
(rho_lu - rho_out) * cs² * (dx / dt)² only becomes pascals after multiplying by a physical reference density. This class never accepts one, so the current docstring/return value is off by a factor of ρ for real fluids. Either add a fluid-density parameter and multiply by it, or rename/document this as p / ρ.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_flow.py` around lines 151 - 158, The pressure()
method returns specific pressure (p/ρ) because it multiplies (rho_lu -
self._rho_out) by cs²*(dx/dt)² but never multiplies by a physical reference
density; update pressure() to accept or use a physical fluid density (e.g., a
new attribute like self._rho_ref or a constructor parameter) and multiply the
current expression ((rho_lu - self._rho_out) * _d3q19.cs2 * (self._voxel_size /
self._dt)**2) by that density, or alternatively change the method name/docstring
to explicitly state it returns p/ρ (specific pressure) using rho_lu, _rho_out,
_d3q19.cs2, _voxel_size and _dt so callers understand the units.
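In LBM the equation of state is p = cs² ρ in lattice units, and (dx/dt)² converts it to physical velocity-squared units; pascals additionally require a physical reference density. A sketch of the full conversion with made-up, water-like numbers (none of this is the class's actual interface):

```python
def lbm_pressure_pa(rho_lu, rho_out, cs2, dx, dt, rho_phys):
    """Convert a lattice density deviation to gauge pressure in pascals.

    Without rho_phys the scaling cs2 * (dx/dt)**2 yields m^2/s^2, i.e.
    specific pressure p/rho; multiplying by a reference density [kg/m^3]
    gives true pressure in Pa.
    """
    specific = (rho_lu - rho_out) * cs2 * (dx / dt) ** 2  # m^2/s^2
    return specific * rho_phys                            # Pa

# Made-up numbers: dx = 1e-6 m, dt = 1e-7 s, water density 1000 kg/m^3
p = lbm_pressure_pa(rho_lu=1.001, rho_out=1.0, cs2=1.0 / 3.0,
                    dx=1e-6, dt=1e-7, rho_phys=1000.0)
print(p)  # ~33.3 Pa
```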
Redirecting to stderr (fd 2) caused a deadlock: both stderr and stdout are subprocess PIPEs, so Julia's init output filled the stderr pipe buffer while the parent blocked on stdout.readline(). Neither side could make progress. Use /dev/null instead, which never blocks on writes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
♻️ Duplicate comments (6)
src/poromics/simulation/_flow.py (3)
91-95: ⚠️ Potential issue | 🟡 Minor

`run()` advances one step too many. Line 91 uses `range(n_steps + 1)`, so `run(n_steps=0)` still mutates state once and the logged `step` value trails `_n_iterations`.

Keep the loop count aligned with the API:

```diff
-for step in range(n_steps + 1):
+for step in range(1, n_steps + 1):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_flow.py` around lines 91 - 95, The run() loop advances one extra step because it iterates range(n_steps + 1); change the loop to iterate range(n_steps) (or otherwise use a for _ in range(n_steps)) so that run(n_steps=0) performs zero steps and _solver.step() and _n_iterations are only updated exactly n_steps times; also ensure any logging that prints the loop variable `step` uses the same counter as `_n_iterations` (or log `_n_iterations` directly) so reported step numbers do not trail the actual `_n_iterations`.
46-59: ⚠️ Potential issue | 🟠 Major

Validate `im`, `nu`, and `voxel_size` at construction. Line 50 forwards non-3D inputs into `_D3Q19Solver`, and Line 59 allows `nu <= 0` or `voxel_size <= 0`, which makes `self._dt` invalid before the solver even starts.

Minimal guardrails:

```diff
 if axis not in (0, 1, 2):
     raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
-solid = (np.asarray(im) == 0).astype(np.int8)
+if nu <= 0:
+    raise ValueError(f"nu must be > 0, got {nu}")
+if voxel_size <= 0:
+    raise ValueError(f"voxel_size must be > 0, got {voxel_size}")
+im_arr = np.asarray(im)
+if im_arr.ndim != 3:
+    raise ValueError(f"im must be a 3D array, got shape {im_arr.shape}")
+solid = (im_arr == 0).astype(np.int8)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_flow.py` around lines 46 - 59, Add input validation in __init__: convert im to array and check im.ndim == 3 and raise ValueError if not, and validate that nu > 0 and voxel_size > 0 (raise ValueError with clear messages) before computing self._dt; perform these checks right after creating solid (or immediately after np.asarray(im)) and before computing self._dt so that _D3Q19Solver never receives invalid nu/voxel_size and self._dt is never computed with non-positive values.
150-157: ⚠️ Potential issue | 🟠 Major

`pressure` is specific pressure unless you multiply by a reference density. Lines 155-157 only scale by `cs2 * (dx / dt)**2`, so the result is m²/s² (p / ρ), not pascals. Either add a physical density parameter and multiply here, or rename/document this property as specific pressure / gauge head.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_flow.py` around lines 150 - 157, The pressure() property currently returns specific pressure (p/ρ) because it scales (rho_lu - _rho_out) by _d3q19.cs2 * (_voxel_size/_dt)**2 (units m²/s²) rather than by a physical density; update pressure() so it returns true pressure in Pa by multiplying the existing expression by a reference density (e.g., a new physical_density or rho_ref parameter/property) or, if you intend to keep it as specific pressure, change the docstring and name to indicate "specific pressure" or "gauge head"; adjust references to _solver.get_density(), _rho_out, _voxel_size, _dt, and _d3q19.cs2 accordingly so the units are consistent.

src/poromics/simulation/_diffusion.py (3)
88-93: ⚠️ Potential issue | 🟡 Minor

`run()` advances one step too many. Line 88 uses `range(n_steps + 1)`, so `run(n_steps=0)` still mutates state once and the logged `step` value trails `_n_iterations`.

Keep the loop count aligned with the API:

```diff
-for step in range(n_steps + 1):
+for step in range(1, n_steps + 1):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_diffusion.py` around lines 88 - 93, The loop in run() advances one extra step because it uses range(n_steps + 1); change it to range(n_steps) so calling run(n_steps=0) does not mutate state. Keep the existing increment of _n_iterations after calling self._solver.step() and ensure any logging that uses step (and log_every) still aligns with _n_iterations (adjust logging logic if you rely on a 1-based iteration count), referencing the run function, self._solver.step(), _n_iterations, step, and log_every.
45-56: ⚠️ Potential issue | 🟠 Major

Validate `im`, `D`, and `voxel_size` at construction. Line 49 forwards non-3D inputs into `_D3Q7Solver`, and Line 56 allows `D <= 0` or `voxel_size <= 0`, which produces an invalid physical timestep.

Minimal guardrails:

```diff
 if axis not in (0, 1, 2):
     raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
-solid = (np.asarray(im) == 0).astype(np.int8)
+if D <= 0:
+    raise ValueError(f"D must be > 0, got {D}")
+if voxel_size <= 0:
+    raise ValueError(f"voxel_size must be > 0, got {voxel_size}")
+im_arr = np.asarray(im)
+if im_arr.ndim != 3:
+    raise ValueError(f"im must be a 3D array, got shape {im_arr.shape}")
+solid = (im_arr == 0).astype(np.int8)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_diffusion.py` around lines 45 - 56, The constructor (__init__) lacks validation for the input image dimensionality and for positive physical parameters: ensure the provided im is a 3D array (e.g., convert with np.asarray(im) and check arr.ndim == 3 and arr.size > 0) and raise ValueError if not 3D; validate that D > 0 and voxel_size > 0 (convert to float) and raise ValueError for non-positive values so computing self._dt = _d3q7.D_lu * voxel_size**2 / D cannot produce an invalid timestep; keep the existing solid-empty check (solid.sum() == solid.size) but perform these new checks before any downstream use and mention __init__, _D3Q7Solver/_d3q7.D_lu, self._D, and self._voxel_size to locate the changes.
145-163: ⚠️ Potential issue | 🟠 Major

`flux()` should convert and validate at the public boundary. Line 163 returns `_solver.compute_flux(axis)` unchanged, so this physical wrapper leaks backend units and accepts any `axis` the caller passes through. Please convert the value and validate the public input here; `src/poromics/_metrics.py` currently only works because Lines 371-373 compensate implicitly for the backend units.
🧹 Nitpick comments (1)
pyproject.toml (1)

37-46: Clarify whether Taichi is optional or mandatory. Taichi is still loaded lazily through `ensure_taichi()` and wrapped with ImportError handling, yet it's now listed as a mandatory runtime dependency in `pyproject.toml`. This inconsistency should be resolved:

- If LBM solvers are optional features, move Taichi to `extras_require` (e.g., `lbm`).
- If Taichi is truly mandatory, remove the lazy loading pattern and error handling; users will fail fast at install time, which is clearer.
Update packaging and runtime behavior to match.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 37 - 46, The pyproject.toml currently lists Taichi as a mandatory dependency while the runtime still uses lazy loading via ensure_taichi() with ImportError handling; decide which behavior you want and make packaging and code consistent: if Taichi is optional, move "taichi>=1.7.4" from dependencies into an extras_require group (e.g., extras_require = {"lbm": ["taichi>=1.7.4"]}) and keep ensure_taichi() and its ImportError handling to gate LBM features; if Taichi is mandatory, remove the lazy-loading pattern and ImportError catches around ensure_taichi()/any taichi imports so the package fails at install/import time and delete the extras_require change, ensuring both pyproject.toml and the ensure_taichi() usage are updated to reflect the chosen option.
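A common way to reconcile the two options is an extras group in pyproject.toml plus a small lazy-import guard that names the extra in its error message. A generic sketch of such a guard (the `ensure_optional` helper and the `poromics[lbm]` extra name are hypothetical, not the package's actual `ensure_taichi`):

```python
import importlib

def ensure_optional(module_name, extra):
    """Import an optional dependency, or fail with the pip extra to install."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} is required for this feature; "
            f"install it with: pip install 'poromics[{extra}]'"
        ) from exc

math_mod = ensure_optional("math", "core")  # stdlib module, always importable
print(math_mod.sqrt(4.0))  # 2.0
```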
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/poromics/_metrics.py`:
- Around line 91-115: Concurrent callers share _julia_proc, _julia_ctrl_w and
its stdout which causes interleaved writes/reads; add a module-level
threading.Lock (e.g. _julia_lock) and use it to serialize any interaction with
the Julia worker—specifically acquire the lock around the sequence that
gets/creates the worker via _get_julia_worker and around writing to
_julia_ctrl_w and reading from _julia_proc.stdout (e.g. inside tortuosity_fd and
any other callers); ensure the lock is created at module scope and held for the
entire request/response (write then readline/pickle load) so one caller cannot
consume another caller’s response.
- Around line 433-445: The code currently computes grad_P_lu using the boundary
driver (solver._rho_in - solver._rho_out) / L which includes the inlet/outlet
pressure jump and biases k_lu low; instead, compute dp/dx from interior density
slices (e.g., take mean densities from im slices excluding the inlet/outlet
planes used in tests and compute dp = mean(rho_interior_start) -
mean(rho_interior_end) then grad_P_lu = dp * _d3q19.cs2 / L_interior) and
replace the current grad_P_lu calculation in the block that uses
solver._rho_in/_rho_out (variables: solver._solver.get_velocity(), v_lu,
v_flow_lu, u_darcy_lu, pore_mask, _d3q19.cs2, k_lu); alternatively expose a
helper (TransientFlow) that returns the developed pressure gradient and use that
here.
- Around line 57-69: The function _tortuosity_fd_inprocess currently mutates the
caller's D in place when zeroing disconnected voxels (D[~im] = 0.0), causing
inconsistent behavior versus the subprocess path that pickles a copy; fix by
making a local copy of D at the top of the branch (e.g., D_local = D.copy() or
np.array(D, copy=True)) and use that copy for masking and passing into
taujl.TortuositySimulation (replace references to D with D_local), ensuring
shape/dtype are preserved so the public API behaves identically in both modes.
In `@src/poromics/julia_helpers.py`:
- Around line 1-5: This file fails ruff/formatting checks; run the formatter and
apply its edits to src/poromics/julia_helpers.py (e.g., run `ruff format` or
your configured formatter) so import ordering, spacing, and any trailing
newline/whitespace issues around the top-level imports (from __future__ import
annotations, import shutil, from pathlib import Path, from typing import Any)
and the other flagged regions are fixed; commit the reformatted file to satisfy
the CI check.
---
Duplicate comments:
In `@src/poromics/simulation/_diffusion.py`:
- Around line 88-93: The loop in run() advances one extra step because it uses
range(n_steps + 1); change it to range(n_steps) so calling run(n_steps=0) does
not mutate state. Keep the existing increment of _n_iterations after calling
self._solver.step() and ensure any logging that uses step (and log_every) still
aligns with _n_iterations (adjust logging logic if you rely on a 1-based
iteration count), referencing the run function, self._solver.step(),
_n_iterations, step, and log_every.
- Around line 45-56: The constructor (__init__) lacks validation for the input
image dimensionality and for positive physical parameters: ensure the provided
im is a 3D array (e.g., convert with np.asarray(im) and check arr.ndim == 3 and
arr.size > 0) and raise ValueError if not 3D; validate that D > 0 and voxel_size
> 0 (convert to float) and raise ValueError for non-positive values so computing
self._dt = _d3q7.D_lu * voxel_size**2 / D cannot produce an invalid timestep;
keep the existing solid-empty check (solid.sum() == solid.size) but perform
these new checks before any downstream use and mention __init__,
_D3Q7Solver/_d3q7.D_lu, self._D, and self._voxel_size to locate the changes.
In `@src/poromics/simulation/_flow.py`:
- Around line 91-95: The run() loop advances one extra step because it iterates
range(n_steps + 1); change the loop to iterate range(n_steps) (or otherwise use
a for _ in range(n_steps)) so that run(n_steps=0) performs zero steps and
_solver.step() and _n_iterations are only updated exactly n_steps times; also
ensure any logging that prints the loop variable `step` uses the same counter as
`_n_iterations` (or log `_n_iterations` directly) so reported step numbers do
not trail the actual `_n_iterations`.
- Around line 46-59: Add input validation in __init__: convert im to array and
check im.ndim == 3 and raise ValueError if not, and validate that nu > 0 and
voxel_size > 0 (raise ValueError with clear messages) before computing self._dt;
perform these checks right after creating solid (or immediately after
np.asarray(im)) and before computing self._dt so that _D3Q19Solver never
receives invalid nu/voxel_size and self._dt is never computed with non-positive
values.
- Around line 150-157: The pressure() property currently returns specific
pressure (p/ρ) because it scales (rho_lu - _rho_out) by _d3q19.cs2 *
(_voxel_size/_dt)**2 (units m²/s²) rather than by a physical density; update
pressure() so it returns true pressure in Pa by multiplying the existing
expression by a reference density (e.g., a new physical_density or rho_ref
parameter/property) or, if you intend to keep it as specific pressure, change
the docstring and name to indicate "specific pressure" or "gauge head"; adjust
references to _solver.get_density(), _rho_out, _voxel_size, _dt, and _d3q19.cs2
accordingly so the units are consistent.
---
Nitpick comments:
In `@pyproject.toml`:
- Around line 37-46: The pyproject.toml currently lists Taichi as a mandatory
dependency while the runtime still uses lazy loading via ensure_taichi() with
ImportError handling; decide which behavior you want and make packaging and code
consistent: if Taichi is optional, move "taichi>=1.7.4" from dependencies into
an extras_require group (e.g., extras_require = {"lbm": ["taichi>=1.7.4"]}) and
keep ensure_taichi() and its ImportError handling to gate LBM features; if
Taichi is mandatory, remove the lazy-loading pattern and ImportError catches
around ensure_taichi()/any taichi imports so the package fails at install/import
time and delete the extras_require change, ensuring both pyproject.toml and the
ensure_taichi() usage are updated to reflect the chosen option.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a6be2afc-4a69-49b4-9a32-3a3c4e4d502d
📒 Files selected for processing (12)
- .github/workflows/black.yaml
- .github/workflows/ruff.yaml
- pyproject.toml
- requirements-dev.txt
- requirements.txt
- src/poromics/_julia_worker.py
- src/poromics/_lbm/_lattice.py
- src/poromics/_metrics.py
- src/poromics/julia_helpers.py
- src/poromics/simulation/_diffusion.py
- src/poromics/simulation/_flow.py
- tests/test_lbm_permeability.py
💤 Files with no reviewable changes (2)
- requirements.txt
- requirements-dev.txt
✅ Files skipped from review due to trivial changes (2)
- .github/workflows/ruff.yaml
- .github/workflows/black.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- src/poromics/_lbm/_lattice.py
```python
_julia_proc = None
_julia_ctrl_w = None


def _get_julia_worker():
    """Return a running Julia worker subprocess, spawning one if needed.

    Uses a dedicated pipe fd for the control channel because Julia's
    runtime clobbers fd 0 (stdin) on initialization.
    """
    global _julia_proc, _julia_ctrl_w
    if _julia_proc is not None and _julia_proc.poll() is None:
        return _julia_proc, _julia_ctrl_w
    r_fd, w_fd = os.pipe()
    _julia_proc = subprocess.Popen(
        [sys.executable, "-m", "poromics._julia_worker", str(r_fd)],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        pass_fds=(r_fd,),
    )
    os.close(r_fd)
    _julia_ctrl_w = os.fdopen(w_fd, "w")
    atexit.register(_shutdown_julia_worker)
    return _julia_proc, _julia_ctrl_w
```
Serialize access to the shared Julia worker.
Lines 132-156 share _julia_proc, _julia_ctrl_w, and proc.stdout across callers with no synchronization. Two concurrent tortuosity_fd() calls can interleave request writes and readline() calls on the same pipes, so one thread may consume the other's "done" and load the wrong or not-yet-written pickle.
Also applies to: 132-157
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/_metrics.py` around lines 91 - 115, Concurrent callers share
_julia_proc, _julia_ctrl_w and its stdout which causes interleaved writes/reads;
add a module-level threading.Lock (e.g. _julia_lock) and use it to serialize any
interaction with the Julia worker—specifically acquire the lock around the
sequence that gets/creates the worker via _get_julia_worker and around writing
to _julia_ctrl_w and reading from _julia_proc.stdout (e.g. inside tortuosity_fd
and any other callers); ensure the lock is created at module scope and held for
the entire request/response (write then readline/pickle load) so one caller
cannot consume another caller’s response.
```python
# Work in lattice units for Darcy's law, then convert
v_lu = solver._solver.get_velocity()
solid = (np.asarray(im) == 0).astype(np.int8)
pore_mask = solid == 0
porosity = float(pore_mask.sum()) / pore_mask.size
L = im.shape[axis]

v_flow_lu = v_lu[..., axis]
u_darcy_lu = float(np.mean(v_flow_lu))
u_pore_lu = float(np.mean(v_flow_lu[pore_mask]))
grad_P_lu = (solver._rho_in - solver._rho_out) * _d3q19.cs2 / L
k_lu = u_darcy_lu * _d3q19.nu_lu / grad_P_lu
```
Use the developed pressure gradient for Darcy's law.
Line 443 derives grad_P_lu from the imposed rho_in - rho_out over the full length, but tests/test_lbm_permeability.py Lines 89-94 already avoid the inlet/outlet slices because this boundary condition introduces a pressure jump there. Using the raw BC delta here bakes that jump into k_lu, which biases permeability low. Compute dp/dx from interior density slices, or expose a TransientFlow helper for that, instead of using the boundary values directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/_metrics.py` around lines 433 - 445, The code currently computes
grad_P_lu using the boundary driver (solver._rho_in - solver._rho_out) / L which
includes the inlet/outlet pressure jump and biases k_lu low; instead, compute
dp/dx from interior density slices (e.g., take mean densities from im slices
excluding the inlet/outlet planes used in tests and compute dp =
mean(rho_interior_start) - mean(rho_interior_end) then grad_P_lu = dp *
_d3q19.cs2 / L_interior) and replace the current grad_P_lu calculation in the
block that uses solver._rho_in/_rho_out (variables:
solver._solver.get_velocity(), v_lu, v_flow_lu, u_darcy_lu, pore_mask,
_d3q19.cs2, k_lu); alternatively expose a helper (TransientFlow) that returns
the developed pressure gradient and use that here.
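Measuring the developed gradient from interior slices is a short NumPy exercise: average the density per slice, fit a line over the 20%-80% band, and convert via p = cs² ρ. A sketch on a synthetic linear density field (the function name and band fractions are illustrative, not the package's API):

```python
import numpy as np

def interior_pressure_gradient(rho, axis, cs2=1.0 / 3.0, lo=0.2, hi=0.8):
    """Fit -dp/dx from slice-averaged density over the interior band,
    skipping the equilibrium-BC pressure jump at the inlet/outlet faces."""
    n = rho.shape[axis]
    i0, i1 = int(lo * n), int(hi * n)
    other = tuple(a for a in range(rho.ndim) if a != axis)
    profile = rho.mean(axis=other)             # mean density per slice
    slope = np.polyfit(np.arange(i0, i1), profile[i0:i1], 1)[0]
    return -slope * cs2                        # pressure drop per voxel, lattice units

# Synthetic developed flow: density falls linearly 1.01 -> 0.99 along axis 0
nx = 50
rho = np.linspace(1.01, 0.99, nx)[:, None, None] * np.ones((nx, 4, 4))
print(interior_pressure_gradient(rho, axis=0))  # ~1.36e-4
```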
The previous fix only redirected stdout during Julia init, but Julia and juliacall can print to fd 1 at any time (solver calls, GC, JIT compilation). This caused stale lines to accumulate in the stdout pipe, making the parent read them as premature "done" acks. Now fd 1 is redirected to /dev/null at worker startup. A dedicated file object wrapping the original pipe fd (via os.dup) is used exclusively for "done" protocol signals. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/poromics/_julia_worker.py`:
- Around line 56-58: The mask variable im (created from
trim_nonpercolating_paths and assigned via np.array) may be integer (0/1), so
using bitwise NOT (~im) on it will produce negative integers and incorrectly
index D; change the code to ensure im is boolean before negation by converting
im to a boolean array (e.g., using im = np.asarray(..., dtype=bool) or im =
np.asarray(...).astype(bool)) and then use D[~im] = 0.0; update the
creation/assignment of im (the variable used with ~im) to enforce dtype=bool so
D indexing behaves correctly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e1fa6741-fe70-43c4-af99-552ee79d53fe
📒 Files selected for processing (1)
src/poromics/_julia_worker.py
src/poromics/_julia_worker.py
Outdated
```python
if eps[1] != eps0[1]:
    if D is not None:
        D[~im] = 0.0
```
Ensure im is boolean before using bitwise NOT.
At line 58, D[~im] relies on im being a boolean array. However, np.array() on line 52 may produce an integer array (0/1) depending on what Julia's trim_nonpercolating_paths returns. If im contains integers, ~0 = -1 and ~1 = -2 are both truthy, causing the mask to select all elements incorrectly.
Consider converting to bool explicitly:
🔧 Proposed fix

```diff
 im = np.array(taujl.Imaginator.trim_nonpercolating_paths(im, axis=axis_jl))
+im = im.astype(bool)
 if jl.sum(im) == 0:
     raise RuntimeError("No percolating paths along the given axis found in the image.")
 eps = taujl.Imaginator.phase_fraction(im)
 if eps[1] != eps0[1]:
     if D is not None:
         D[~im] = 0.0
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/_julia_worker.py` around lines 56 - 58, The mask variable im
(created from trim_nonpercolating_paths and assigned via np.array) may be
integer (0/1), so using bitwise NOT (~im) on it will produce negative integers
and incorrectly index D; change the code to ensure im is boolean before negation
by converting im to a boolean array (e.g., using im = np.asarray(...,
dtype=bool) or im = np.asarray(...).astype(bool)) and then use D[~im] = 0.0;
update the creation/assignment of im (the variable used with ~im) to enforce
dtype=bool so D indexing behaves correctly.
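The hazard is easy to demonstrate: on an integer array `~` is bitwise NOT and produces negative integers (which NumPy then treats as fancy indices rather than a mask), while on a boolean array it is the intended element-wise negation. A minimal reproduction:

```python
import numpy as np

im_int = np.array([1, 0, 1, 0], dtype=np.int8)
print(~im_int)  # [-2 -1 -2 -1] -- bitwise NOT, not a mask

# With a boolean mask, assignment zeroes exactly the solid (im == 0) voxels
D = np.ones(4)
im_bool = im_int.astype(bool)
D[~im_bool] = 0.0
print(D)  # [1. 0. 1. 0.]
```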
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…th trimming - Add Rich progress bar to TransientDiffusion and TransientFlow solvers with log-scale convergence estimation (_progress.py) - Add verbose parameter to run() and LBM metric functions (default True) - Add input validation for D, nu, voxel_size in solver constructors - Trim nonpercolating paths in tortuosity_lbm and permeability_lbm before running the solver to avoid wasted computation - Fix np.asarray dtype=bool in Julia worker and in-process solver - Copy D array before mutation in _tortuosity_fd_inprocess - Centralize loguru configuration in __init__.py (WARNING+ level) - Add ipywidgets and rich as dependencies Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…-based exec blocks - Replace docs/usage.md with dedicated solver pages (tortuosity-fd, tortuosity-lbm, permeability-lbm) with physics background and live output - Add getting-started.md quick start guide and simulation.md API reference - Add Jupyter notebook examples (diffusion-fd, diffusion-lbm, flow-lbm) - Use markdown-exec session parameter to share state across code blocks, eliminating duplicate hidden setup code - Use mkdocs_matplotlib shared namespace to avoid re-running solvers for plot-only blocks on the same page - Restructure mkdocs nav with tabs (Solvers, Examples, Reference sections) - Configure mkdocstrings with numpy docstring style and detailed options - Switch mkdocs-jupyter to light theme with execute=false Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add NaN masking to all plot methods so solid voxels render transparent - Fix unequal panel heights using make_axes_locatable with phantom axes - Add scientific notation formatting to all colorbars - Fix pressure plot orientation (remove erroneous .T transpose) - Unify velocity colormap to viridis across magnitude and streamlines - Migrate progress bars from Rich to tqdm.auto for env-aware rendering - Redesign result __repr__ to multi-line aligned format for readability - Shorten loguru timestamps to HH:mm:ss Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ledgments - Rename all nav entries and headings from "Julia FD" to "FDM" consistently - Polish homepage: concise description, accurate GPU acceleration details - Fix Quick Start page to show code blocks and use print(result) formatting - Polish notebook text: remove boilerplate, add descriptive visualization sections - Add acknowledgments page crediting Jeff Gostick's deff and kabs packages - Add roadmap page Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
♻️ Duplicate comments (5)
src/poromics/simulation/_diffusion.py (1)
101-105: ⚠️ Potential issue | 🟡 Minor

`run()` advances one extra iteration. `range(n_steps + 1)` performs `n_steps + 1` iterations, so `run(n_steps=0)` still mutates the state once, and `run(n_steps=100)` runs 101 steps.

🔧 Proposed fix

```diff
-for step in range(n_steps + 1):
+for step in range(1, n_steps + 1):
     self._solver.step()
     self._n_iterations += 1
     if step % log_every != 0:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_diffusion.py` around lines 101 - 105, The loop in run() uses range(n_steps + 1) so it advances one extra iteration; change the loop to range(n_steps) (or otherwise ensure it iterates exactly n_steps times) so calling run(n_steps=0) does not call self._solver.step(); update any related logic that depends on the loop bounds (e.g. the logging condition using log_every and the increment of self._n_iterations) to remain consistent with the new iteration count and preserve existing logging behavior around self._solver.step() and self._n_iterations.

src/poromics/simulation/_flow.py (1)
104-108: ⚠️ Potential issue | 🟡 Minor

`run()` advances one extra iteration. `range(n_steps + 1)` performs `n_steps + 1` iterations. This mirrors the same issue in `_diffusion.py`.

🔧 Proposed fix

```diff
-for step in range(n_steps + 1):
+for step in range(1, n_steps + 1):
     self._solver.step()
     self._n_iterations += 1
     if step % log_every != 0:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_flow.py` around lines 104 - 108, run() advances one extra iteration because it iterates with range(n_steps + 1); change the loop in the run method to iterate exactly n_steps times by using range(n_steps) (or equivalent) so that self._solver.step() and self._n_iterations are incremented only n_steps times; update the loop that currently reads for step in range(n_steps + 1): (which calls self._solver.step() and increments self._n_iterations) to stop at n_steps and keep the existing log_every conditional intact to preserve logging behavior.

src/poromics/_metrics.py (2)
96-162: ⚠️ Potential issue | 🟠 Major

Thread safety concern for shared Julia worker.
The Julia worker subprocess (`_julia_proc`, `_julia_ctrl_w`, and `proc.stdout`) is shared across callers without synchronization. If two threads call `tortuosity_fd()` concurrently, their request writes and `readline()` calls can interleave, causing one thread to consume another's `"done"` signal or pickle response.

If thread-safe usage is not a design goal for the initial release, consider documenting this limitation. Otherwise, add a module-level lock:
🔒 Thread-safe approach
```python
import threading

_julia_lock = threading.Lock()


def _julia_call(payload):
    """Send a request to the persistent Julia worker and return the response."""
    with _julia_lock:
        proc, ctrl_w = _get_julia_worker()
        # ... rest of implementation
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_metrics.py` around lines 96 - 162, The shared Julia worker (symbols: _julia_proc, _julia_ctrl_w, proc.stdout.readline in _julia_call) is not thread-safe; add a module-level threading.Lock (e.g., _julia_lock) and import threading, then wrap the core of _julia_call in with _julia_lock: so writes to _julia_ctrl_w and reads from proc.stdout/proc.stderr are serialized; ensure the lock is used around _get_julia_worker() plus the temporary-file write/flush, readline, and response read/unlink sequence to prevent interleaving.
1-6: 🛠️ Refactor suggestion | 🟠 Major

Pipeline failure: Ruff format check.

The CI reports this file needs formatting. Run `ruff format src/poromics/_metrics.py` to fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_metrics.py` around lines 1 - 6, The file src/poromics/_metrics.py fails Ruff formatting; run the formatter (e.g., `ruff format src/poromics/_metrics.py`) or apply the same formatting changes to the top-level import block (imports: atexit, os, pickle, subprocess, sys, tempfile) so they conform to project Ruff rules (sorting, spacing, and line breaks), then re-run ruff/CI and commit the formatted file.

src/poromics/_julia_worker.py (1)
31-42: ⚠️ Potential issue | 🟡 Minor

Validate `axis` before indexing.

`["x", "y", "z"][axis]` raises `IndexError` for invalid axis values. Since the worker propagates this as a generic error, callers lose the expected bad-input signal. Validate `axis` explicitly:

🛡️ Proposed fix
```diff
 def _run_tortuosity_fd(im, axis, D, rtol, gpu, verbose):
     """Execute the Julia-backed tortuosity solver and return a plain dict."""
     import numpy as np

     jl, taujl = _ensure_julia()
+    if axis not in (0, 1, 2):
+        raise ValueError(f"axis must be 0, 1, or 2, got {axis}")
     axis_jl = jl.Symbol(["x", "y", "z"][axis])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/_julia_worker.py` around lines 31 - 42, The code in _run_tortuosity_fd uses ["x","y","z"][axis] which raises IndexError for invalid axis values; add explicit validation at the top of _run_tortuosity_fd to ensure axis is one of 0,1,2 (or a valid string "x","y","z") and if not raise a clear ValueError describing allowed values, then convert to the jl.Symbol as before (used in axis_jl and jl.Symbol calls); this preserves the intended behavior while returning a proper bad-input error instead of an IndexError.
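A standalone sketch of the suggested validation. The helper name `validate_axis` and the string-axis support are hypothetical additions for illustration:

```python
def validate_axis(axis):
    """Normalize axis to 0/1/2, rejecting bad input with a clear error."""
    names = {"x": 0, "y": 1, "z": 2}
    if isinstance(axis, str):
        if axis not in names:
            raise ValueError(f"axis must be 'x', 'y', or 'z', got {axis!r}")
        return names[axis]
    if axis not in (0, 1, 2):
        raise ValueError(f"axis must be 0, 1, or 2, got {axis!r}")
    return axis


print(validate_axis("y"), validate_axis(2))  # 1 2
```

Calling this at the top of `_run_tortuosity_fd` would turn the opaque `IndexError` into the expected bad-input signal.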
🧹 Nitpick comments (2)
src/poromics/simulation/_progress.py (1)
1-63: Pipeline failure: Ruff format check.

The CI reports this file needs formatting. Run `ruff format src/poromics/simulation/_progress.py` to fix.

Additionally, Line 55 could use a chained comparison for readability:
♻️ Optional style improvement
```diff
-    if tol is not None and tol < 1.0 and ratio > 0 and ratio < 1.0:
+    if tol is not None and tol < 1.0 and 0 < ratio < 1.0:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_progress.py` around lines 1 - 63, Run ruff format on the file to satisfy CI (e.g., `ruff format src/poromics/simulation/_progress.py`) and then replace the explicit comparisons in update_progress (function update_progress, variables ratio/tol/pct) with a chained comparison for readability: change the condition `if tol is not None and tol < 1.0 and ratio > 0 and ratio < 1.0:` to use `0 < ratio < 1.0` (keeping the other checks for tol) so the line becomes `if tol is not None and tol < 1.0 and 0 < ratio < 1.0:`; leave the rest of update_progress and make_progress unchanged.

src/poromics/simulation/_diffusion.py (1)
54-57: `np.atleast_3d` silently reshapes 2D inputs.

`np.atleast_3d` converts a 2D `(M, N)` array to `(M, N, 1)`, which may be convenient but differs from explicit 3D validation. If 2D inputs are intentionally supported, consider documenting this behavior; otherwise, an explicit ndim check provides clearer error messages.
Verify each finding against the current code and only fix it if needed. In `@src/poromics/simulation/_diffusion.py` around lines 54 - 57, The current use of np.atleast_3d on im (creating im_arr) silently turns 2D inputs into shape (M,N,1); replace this with an explicit dimensionality check and clear handling: validate im ndarray ndim (e.g., reject ndim != 3 with a RuntimeError stating expected 3D image) or, if 2D inputs should be supported, explicitly document and perform an intentional reshape (e.g., np.asarray(im)[..., None]) before computing solid, keeping the variables im_arr and solid unchanged and preserving the subsequent empty-pore check (solid.sum() == solid.size).
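The behavior in question, and the explicit alternative, can be seen in a few lines (`require_3d` is a hypothetical name for this sketch):

```python
import numpy as np

# np.atleast_3d quietly promotes a 2D image instead of raising.
im2d = np.ones((4, 5), dtype=bool)
print(np.atleast_3d(im2d).shape)  # (4, 5, 1)


def require_3d(im):
    """Explicit alternative: reject non-3D inputs with a clear message."""
    arr = np.asarray(im)
    if arr.ndim != 3:
        raise RuntimeError(f"expected a 3D image, got ndim={arr.ndim}")
    return arr
```

Whether the promotion or the strict check is preferable depends on whether 2D inputs are meant to be supported; either way the choice should be documented.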
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/getting-started.md`:
- Around line 52-56: Update the "Solvers" link in docs/getting-started.md (the
link text "Solvers" on the "What's next?" list) to point to the solvers overview
page instead of solvers/tortuosity-fd.md so the quick-start links the full
solver overview (e.g., change target to solvers/index.md or the solvers overview
file used in the repo); keep the link text unchanged and ensure the new target
file exists and builds in the site.
In `@src/poromics/__init__.py`:
- Around line 1-10: The module currently calls loguru.logger.remove() and
logger.add() at import time which mutates global sinks; remove those calls and
stop configuring loguru in src/poromics/__init__.py. Instead import loguru's
logger as a private symbol (e.g., _logger) and expose a bound child logger (use
_logger.bind(...) to create poromics-specific context) as the module-level
public logger; leave all sink/format/level configuration to the application/CLI
entrypoint where logger.add()/logger.remove() are performed. Ensure you remove
any direct calls to logger.remove() and logger.add() in this file and only
export the bound logger name to consumers.
In `@src/poromics/julia_helpers.py`:
- Around line 10-11: Remove the unused tqdm import from
src/poromics/julia_helpers.py: delete the "from tqdm.auto import tqdm" line (or
remove tqdm from any import list) so that only used imports like "from loguru
import logger" remain, resolving the F401 lint error.
In `@src/poromics/simulation/_diffusion.py`:
- Around line 1-4: The file src/poromics/simulation/_diffusion.py fails the Ruff
formatting check; run the formatter on that module to fix styling issues (e.g.,
execute `ruff format src/poromics/simulation/_diffusion.py`) or apply equivalent
changes so import ordering, spacing, and overall formatting in _diffusion.py
conform to Ruff rules; ensure the top-level imports and any functions/classes in
the module (file _diffusion.py) are formatted accordingly before committing.
In `@src/poromics/simulation/_flow.py`:
- Around line 1-4: The file src/poromics/simulation/_flow.py fails the Ruff
formatter; run the formatter (e.g., ruff format) or apply the same style fixes
to _flow.py so imports and top-of-file whitespace/comments match Ruff rules
(ensure the module docstring/comment placement and import ordering of time and
numpy are formatted properly); after formatting, re-run ruff check to confirm
the file passes.
---
Duplicate comments:
In `@src/poromics/_julia_worker.py`:
- Around line 31-42: The code in _run_tortuosity_fd uses ["x","y","z"][axis]
which raises IndexError for invalid axis values; add explicit validation at the
top of _run_tortuosity_fd to ensure axis is one of 0,1,2 (or a valid string
"x","y","z") and if not raise a clear ValueError describing allowed values, then
convert to the jl.Symbol as before (used in axis_jl and jl.Symbol calls); this
preserves the intended behavior while returning a proper bad-input error instead
of an IndexError.
In `@src/poromics/_metrics.py`:
- Around line 96-162: The shared Julia worker (symbols: _julia_proc,
_julia_ctrl_w, proc.stdout.readline in _julia_call) is not thread-safe; add a
module-level threading.Lock (e.g., _julia_lock) and import threading, then wrap
the core of _julia_call in with _julia_lock: so writes to _julia_ctrl_w and
reads from proc.stdout/proc.stderr are serialized; ensure the lock is used
around _get_julia_worker() plus the temporary-file write/flush, readline, and
response read/unlink sequence to prevent interleaving.
- Around line 1-6: The file src/poromics/_metrics.py fails Ruff formatting; run
the formatter (e.g., `ruff format src/poromics/_metrics.py`) or apply the same
formatting changes to the top-level import block (imports: atexit, os, pickle,
subprocess, sys, tempfile) so they conform to project Ruff rules (sorting,
spacing, and line breaks), then re-run ruff/CI and commit the formatted file.
In `@src/poromics/simulation/_diffusion.py`:
- Around line 101-105: The loop in run() uses range(n_steps + 1) so it advances
one extra iteration; change the loop to range(n_steps) (or otherwise ensure it
iterates exactly n_steps times) so calling run(n_steps=0) does not call
self._solver.step(); update any related logic that depends on the loop bounds
(e.g. the logging condition using log_every and the increment of
self._n_iterations) to remain consistent with the new iteration count and
preserve existing logging behavior around self._solver.step() and
self._n_iterations.
In `@src/poromics/simulation/_flow.py`:
- Around line 104-108: run() advances one extra iteration because it iterates
with range(n_steps + 1); change the loop in the run method to iterate exactly
n_steps times by using range(n_steps) (or equivalent) so that
self._solver.step() and self._n_iterations are incremented only n_steps times;
update the loop that currently reads for step in range(n_steps + 1): (which
calls self._solver.step() and increments self._n_iterations) to stop at n_steps
and keep the existing log_every conditional intact to preserve logging behavior.
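The off-by-one described in both prompts above can be demonstrated with a tiny stand-in solver (the `FakeSolver` class and `run` wrapper are hypothetical, used only to count `step()` calls):

```python
class FakeSolver:
    """Counts how many times step() is invoked."""

    def __init__(self):
        self.calls = 0

    def step(self):
        self.calls += 1


def run(solver, n_steps, fixed=False):
    # Buggy form: range(n_steps + 1) -> n_steps + 1 calls.
    # Fixed form: range(1, n_steps + 1) -> exactly n_steps calls,
    # while keeping `step % log_every` semantics (step starts at 1).
    rng = range(1, n_steps + 1) if fixed else range(n_steps + 1)
    for step in rng:
        solver.step()


buggy, corrected = FakeSolver(), FakeSolver()
run(buggy, 0)
run(corrected, 0, fixed=True)
print(buggy.calls, corrected.calls)  # 1 0
```

The buggy form mutates solver state once even for `n_steps=0`, which is exactly the behavior both review findings flag.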
---
Nitpick comments:
In `@src/poromics/simulation/_diffusion.py`:
- Around line 54-57: The current use of np.atleast_3d on im (creating im_arr)
silently turns 2D inputs into shape (M,N,1); replace this with an explicit
dimensionality check and clear handling: validate im ndarray ndim (e.g., reject
ndim != 3 with a RuntimeError stating expected 3D image) or, if 2D inputs should
be supported, explicitly document and perform an intentional reshape (e.g.,
np.asarray(im)[..., None]) before computing solid, keeping the variables im_arr
and solid unchanged and preserving the subsequent empty-pore check (solid.sum()
== solid.size).
In `@src/poromics/simulation/_progress.py`:
- Around line 1-63: Run ruff format on the file to satisfy CI (e.g., `ruff
format src/poromics/simulation/_progress.py`) and then replace the explicit
comparisons in update_progress (function update_progress, variables
ratio/tol/pct) with a chained comparison for readability: change the condition
`if tol is not None and tol < 1.0 and ratio > 0 and ratio < 1.0:` to use `0 <
ratio < 1.0` (keeping the other checks for tol) so the line becomes `if tol is
not None and tol < 1.0 and 0 < ratio < 1.0:`; leave the rest of update_progress
and make_progress unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 98833b91-6732-410d-b6b7-a99573b8dede
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (24)
- docs/acknowledgments.md
- docs/examples/diffusion-fd.ipynb
- docs/examples/diffusion-lbm.ipynb
- docs/examples/flow-lbm.ipynb
- docs/examples/index.md
- docs/getting-started.md
- docs/index.md
- docs/roadmap.md
- docs/simulation.md
- docs/solvers/index.md
- docs/solvers/permeability-lbm.md
- docs/solvers/tortuosity-fd.md
- docs/solvers/tortuosity-lbm.md
- docs/usage.md
- makefile
- mkdocs.yml
- pyproject.toml
- src/poromics/__init__.py
- src/poromics/_julia_worker.py
- src/poromics/_metrics.py
- src/poromics/julia_helpers.py
- src/poromics/simulation/_diffusion.py
- src/poromics/simulation/_flow.py
- src/poromics/simulation/_progress.py
💤 Files with no reviewable changes (1)
- docs/usage.md
✅ Files skipped from review due to trivial changes (4)
- docs/examples/index.md
- docs/acknowledgments.md
- docs/solvers/permeability-lbm.md
- docs/solvers/tortuosity-lbm.md
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/index.md
- pyproject.toml
```markdown
## What's next?

- Learn about each solver's physics and options: [Solvers](solvers/tortuosity-fd.md)
- See worked examples with visualization: [Examples](examples/index.md)
- Use the lower-level simulation API for custom workflows: [Simulation API](simulation.md)
```
Point the "Solvers" link at the overview page.
Line 54 currently links to `solvers/tortuosity-fd.md`, so the quick-start path skips the new LBM and permeability solver docs even though the text says "each solver's physics and options."
📝 Proposed fix
```diff
-- Learn about each solver's physics and options: [Solvers](solvers/tortuosity-fd.md)
+- Learn about each solver's physics and options: [Solvers](solvers/index.md)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 ## What's next?

-- Learn about each solver's physics and options: [Solvers](solvers/tortuosity-fd.md)
+- Learn about each solver's physics and options: [Solvers](solvers/index.md)
 - See worked examples with visualization: [Examples](examples/index.md)
 - Use the lower-level simulation API for custom workflows: [Simulation API](simulation.md)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/getting-started.md` around lines 52 - 56, Update the "Solvers" link in
docs/getting-started.md (the link text "Solvers" on the "What's next?" list) to
point to the solvers overview page instead of solvers/tortuosity-fd.md so the
quick-start links the full solver overview (e.g., change target to
solvers/index.md or the solvers overview file used in the repo); keep the link
text unchanged and ensure the new target file exists and builds in the site.
```python
from loguru import logger

# Configure loguru: WARNING+ only, no milliseconds in timestamps.
logger.remove()
logger.add(
    lambda msg: __import__("sys").stderr.write(msg),
    format="{time:HH:mm:ss} | <level>{level:<8}</level> | {name}:{function}:{line} - {message}",
    colorize=True,
    level="WARNING",
)
```
🧩 Analysis chain
🌐 Web query:
In Loguru, is `logger` a process-wide singleton, and does calling `logger.remove()` inside a library module remove sinks that were previously configured by the importing application?
💡 Result:
Yes. In Loguru, `from loguru import logger` gives you a single, pre-instanced logger object intended to be used globally within the current Python process ("there is only one logger"). [1][2]

Because sinks/handlers are attached to that one shared logger, calling `logger.remove()` in a library module affects the same handler set the importing application configured. In particular, `logger.remove(None)` removes all handlers (including the app's), and the pre-configured default handler is handler id 0. [2]
Implication: a library should generally not call logger.remove() as part of its import/init, because it can unintentionally dismantle the application’s logging configuration; prefer letting the application configure sinks, and if needed use filtering/disable() rather than removing global handlers. [2]
Sources:
[1] https://loguru.readthedocs.io/en/stable/_modules/loguru/_logger.html
[2] https://loguru.readthedocs.io/_/downloads/en/stable/pdf/
Avoid reconfiguring Loguru at import time.
Lines 4–10 call `logger.remove()` / `logger.add()` on Loguru's global logger. Since `logger` is a process-wide singleton, `import poromics` will wipe sinks that the host application already configured—a breaking side effect for a library. Expose a bound logger instead and keep sink formatting/configuration in the CLI or application entrypoint.
🔧 Proposed direction
```diff
-from loguru import logger
+from loguru import logger as _logger

-# Configure loguru: WARNING+ only, no milliseconds in timestamps.
-logger.remove()
-logger.add(
-    lambda msg: __import__("sys").stderr.write(msg),
-    format="{time:HH:mm:ss} | <level>{level:<8}</level> | {name}:{function}:{line} - {message}",
-    colorize=True,
-    level="WARNING",
-)
+logger = _logger.bind(package="poromics")
```

🤖 Prompt for AI Agents
+logger = _logger.bind(package="poromics")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/__init__.py` around lines 1 - 10, The module currently calls
loguru.logger.remove() and logger.add() at import time which mutates global
sinks; remove those calls and stop configuring loguru in
src/poromics/__init__.py. Instead import loguru's logger as a private symbol
(e.g., _logger) and expose a bound child logger (use _logger.bind(...) to create
poromics-specific context) as the module-level public logger; leave all
sink/format/level configuration to the application/CLI entrypoint where
logger.add()/logger.remove() are performed. Ensure you remove any direct calls
to logger.remove() and logger.add() in this file and only export the bound
logger name to consumers.
```python
from loguru import logger
from tqdm.auto import tqdm
```
Remove the unused `tqdm` import.

`make ruff_check` is failing with F401 on Line 11, so this currently blocks the lint job.
🧹 Proposed fix
```diff
 from loguru import logger
-from tqdm.auto import tqdm

 from .utils import suppress_output
```

📝 Committable suggestion
```python
from loguru import logger
```
🧰 Tools
🪛 GitHub Actions: Ruff Linting
[error] 11-11: make ruff_check failed with exit code 1. Ruff error: F401 'tqdm.auto.tqdm' imported but unused in src/poromics/julia_helpers.py:11. Remove unused import (use --fix to auto-fix).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/julia_helpers.py` around lines 10 - 11, Remove the unused tqdm
import from src/poromics/julia_helpers.py: delete the "from tqdm.auto import
tqdm" line (or remove tqdm from any import list) so that only used imports like
"from loguru import logger" remain, resolving the F401 lint error.
```python
# Transient diffusion solver on 3D voxel images.
import time

import numpy as np
```
🛠️ Refactor suggestion | 🟠 Major
Pipeline failure: Ruff format check.
The CI reports this file needs formatting. Run ruff format src/poromics/simulation/_diffusion.py to fix.
🧰 Tools
🪛 GitHub Actions: Black Formatting
[error] 1-1: Ruff format check would reformat this file. Run 'ruff format' or re-run the pipeline with formatting applied.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_diffusion.py` around lines 1 - 4, The file
src/poromics/simulation/_diffusion.py fails the Ruff formatting check; run the
formatter on that module to fix styling issues (e.g., execute `ruff format
src/poromics/simulation/_diffusion.py`) or apply equivalent changes so import
ordering, spacing, and overall formatting in _diffusion.py conform to Ruff
rules; ensure the top-level imports and any functions/classes in the module
(file _diffusion.py) are formatted accordingly before committing.
```python
# Transient single-phase flow solver on 3D voxel images.
import time

import numpy as np
```
🛠️ Refactor suggestion | 🟠 Major
Pipeline failure: Ruff format check.
The CI reports this file needs formatting. Run ruff format src/poromics/simulation/_flow.py to fix.
🧰 Tools
🪛 GitHub Actions: Black Formatting
[error] 1-1: Ruff format check would reformat this file. Run 'ruff format' or re-run the pipeline with formatting applied.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/poromics/simulation/_flow.py` around lines 1 - 4, The file
src/poromics/simulation/_flow.py fails the Ruff formatter; run the formatter
(e.g., ruff format) or apply the same style fixes to _flow.py so imports and
top-of-file whitespace/comments match Ruff rules (ensure the module
docstring/comment placement and import ordering of time and numpy are formatted
properly); after formatting, re-run ruff check to confirm the file passes.
Summary
- `poromics.simulation` subpackage with physical-unit solver classes (`TransientDiffusion`, `TransientFlow`)

Fixes #12
Fixes #13
Fixes #14
Test plan
- `uv run pytest` passes (all 19 tests)
- `POROMICS_JULIA_SUBPROCESS=0` mode works for Julia-only usage with fail-fast Taichi guard

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Tests
Chores