By default, a domain decomposition of the particle set is performed using an octree.
The implementation first computes the density of each particle using the typical SPH approach with a cubic kernel.
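
As a sketch of this step (illustrative only, not the library's actual code), the density follows the standard SPH sum `rho_i = sum_j m_j * W(|x_i - x_j|, h)` with a cubic spline kernel of compact support radius `h`; a naive `O(n^2)` neighbor loop is used here for clarity:

```rust
/// Cubic spline kernel, normalized for 3D, with compact support radius `h`.
fn cubic_kernel(r: f64, h: f64) -> f64 {
    let q = r / h;
    let sigma = 8.0 / (std::f64::consts::PI * h.powi(3));
    if q <= 0.5 {
        sigma * (6.0 * (q.powi(3) - q.powi(2)) + 1.0)
    } else if q <= 1.0 {
        sigma * 2.0 * (1.0 - q).powi(3)
    } else {
        0.0
    }
}

/// Naive SPH density evaluation: rho_i = sum_j m_j * W(|x_i - x_j|, h).
fn particle_densities(positions: &[[f64; 3]], particle_mass: f64, h: f64) -> Vec<f64> {
    positions
        .iter()
        .map(|xi| {
            positions
                .iter()
                .map(|xj| {
                    let r = (0..3).map(|d| (xi[d] - xj[d]).powi(2)).sum::<f64>().sqrt();
                    particle_mass * cubic_kernel(r, h)
                })
                .sum::<f64>()
        })
        .collect()
}
```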
This density is then evaluated or mapped onto a sparse grid using spatial hashing in the support radius of each particle.
This implies that memory is only allocated in areas where the fluid density is non-zero. This is in contrast to a naive approach where the marching cubes background grid is allocated for the whole domain.
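
A minimal sketch of this scattering step, assuming a hash map keyed by integer cell coordinates and reusing `cubic_kernel` from the sketch above (all names are illustrative, not the library's API):

```rust
use std::collections::HashMap;

/// Scatters a single particle onto a sparse density grid: only grid points
/// inside the particle's support radius are touched, so map entries are only
/// ever allocated near the fluid.
fn scatter_particle(
    grid: &mut HashMap<(i64, i64, i64), f64>,
    x: [f64; 3],
    mass: f64,
    support_radius: f64,
    cell_size: f64,
) {
    // Integer bounds of the cells overlapping the particle's support.
    let lo = x.map(|c| ((c - support_radius) / cell_size).floor() as i64);
    let hi = x.map(|c| ((c + support_radius) / cell_size).ceil() as i64);
    for i in lo[0]..=hi[0] {
        for j in lo[1]..=hi[1] {
            for k in lo[2]..=hi[2] {
                let p = [i as f64, j as f64, k as f64].map(|c| c * cell_size);
                let r = (0..3).map(|d| (p[d] - x[d]).powi(2)).sum::<f64>().sqrt();
                if r <= support_radius {
                    // Accumulate the kernel-weighted contribution; grid points
                    // outside any particle's support are never inserted.
                    *grid.entry((i, j, k)).or_insert(0.0) +=
                        mass * cubic_kernel(r, support_radius);
                }
            }
        }
    }
}
```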
The marching cubes reconstruction is performed only in the narrow band of grid cells where the density values cross the surface threshold. Cells completely in the interior of the fluid are skipped. For more details, please refer to the [readme of the library](https://github.com/w1th0utnam3/splashsurf/blob/main/splashsurf_lib/README.md).
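
The narrow-band criterion itself amounts to a simple corner test per grid cell (again an illustrative sketch):

```rust
/// A cell is only passed to marching cubes if its corner densities straddle
/// the surface threshold, i.e. the iso-surface actually intersects the cell.
fn cell_in_narrow_band(corner_densities: &[f64; 8], threshold: f64) -> bool {
    let below = corner_densities.iter().any(|&d| d < threshold);
    let above = corner_densities.iter().any(|&d| d >= threshold);
    // All-above cells are fully inside the fluid, all-below cells are fully
    // outside; both are skipped.
    below && above
}
```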
Finally, all surface patches are stitched together by walking the octree back up, resulting in a closed surface.
## Notes
For small numbers of fluid particles (i.e. in the low thousands or fewer) the multi-threaded implementation may have worse performance due to the task-based parallelism and the additional overhead of domain decomposition and stitching.
In this case, you can try to disable the domain decomposition. The reconstruction will then use a global approach that is parallelized using thread-local hashmaps.
For larger quantities of particles, however, the decomposition approach is expected to always be faster.
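
Conceptually, the global approach with thread-local hashmaps can be sketched as follows (illustrative only; the placeholder contribution stands in for the kernel-weighted density accumulation shown earlier):

```rust
use std::collections::HashMap;
use std::thread;

/// Each thread scatters its chunk of particles into a private map; the
/// per-thread maps are merged at the end.
fn parallel_cell_map(
    particles: &[[f64; 3]],
    n_threads: usize,
    cell_size: f64,
) -> HashMap<(i64, i64, i64), f64> {
    let chunk_size = particles.len().div_ceil(n_threads.max(1)).max(1);
    let locals: Vec<HashMap<(i64, i64, i64), f64>> = thread::scope(|scope| {
        let handles: Vec<_> = particles
            .chunks(chunk_size)
            .map(|chunk| {
                scope.spawn(move || {
                    let mut local = HashMap::new();
                    for p in chunk {
                        let key = (
                            (p[0] / cell_size).floor() as i64,
                            (p[1] / cell_size).floor() as i64,
                            (p[2] / cell_size).floor() as i64,
                        );
                        // Placeholder contribution; a real reconstruction
                        // accumulates kernel-weighted densities here.
                        *local.entry(key).or_insert(0.0) += 1.0;
                    }
                    local
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    // Merge the thread-local maps. Hash map iteration order is unspecified
    // and floating-point addition is not associative, which is one source of
    // the non-determinism noted below.
    let mut merged = HashMap::new();
    for local in locals {
        for (key, value) in local {
            *merged.entry(key).or_insert(0.0) += value;
        }
    }
    merged
}
```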
Due to the use of hash maps and multi-threading (if enabled), the output of this implementation is not deterministic.
In the future, flags may be added to switch the internal data structures to use binary trees for debugging purposes.
As shown below, the tool can handle the output of large simulations.
However, it was not tested with a wide range of parameters and may not be fully robust against corner cases or extreme parameters.

Good settings for the surface reconstruction depend on the original simulation (an example invocation is sketched after the list):
- `particle-radius`: should be a bit larger than the particle radius used for the actual simulation. A radius around 1.4 to 1.6 times larger than the original SPH particle radius seems to be appropriate.
- `smoothing-length`: should be set around `1.2`. Larger values smooth out the iso-surface more but also artificially increase the fluid volume.
- `surface-threshold`: a good value depends on the selected `particle-radius` and `smoothing-length` and can be used to counteract a fluid volume increase, e.g. due to a larger particle radius. In combination with the other recommended values, a threshold of `0.6` seemed to work well.
- `cube-size`: usually should not be chosen larger than `1.0` to avoid artifacts (e.g. single particles decaying into rhomboids); start with a value in the range of `0.5` to `0.75` and decrease/increase it if the result is too coarse or the reconstruction takes too long.
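
Putting these recommendations together, a starting invocation might look like the following (a hypothetical example: the flag names are assumed to mirror the parameter names above, and the radius and file names are placeholders for your simulation; check `splashsurf reconstruct --help` for the actual CLI):

```shell
splashsurf reconstruct particles.vtk \
    --particle-radius=0.025 \
    --smoothing-length=1.2 \
    --surface-threshold=0.6 \
    --cube-size=0.75 \
    --output-file=surface.vtk
```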