Description
Problem
When running very large models (millions of blocks / DOFs), the solver currently needs to hold the entire solver input JSON file in memory at once. On HPC systems with per‑process memory limits, this can contribute to high memory usage and, in some cases, out‑of‑memory failures before the simulation even starts. This is especially noticeable for large synthetic vascular trees, where the JSON file itself is very large (~2 million vessel blocks).
Solution
Add support for a streaming / SAX‑style JSON reader for the main configuration file so that:
- The solver does not need to materialize the full JSON DOM in memory at once.
- Blocks, vessels, and boundary conditions can be constructed incrementally as the file is read.
- Memory usage during input parsing scales with the size of the “active” portion being processed, rather than with the size of the entire JSON file.
Ideally, this would be implemented using a streaming API (the nlohmann::json SAX interface or a similar streaming parser) behind the existing configuration-loading functions, so that the rest of the code can remain largely unchanged.
Additional context
We used this approach to run large synthetic vascular models on the order of 4,000,000+ blocks and ~8,000,000 DOFs on Sherlock; however, it has not yet been integrated into the main branch. The JSON configuration file for these cases is very large, and a streaming reader kept the input-stage memory footprint under control and within per‑process memory limits; without it, OOM failures are seen.
Code of Conduct
- I agree to follow this project's Code of Conduct and Contributing Guidelines