1 file changed: +7 -3 lines changed

@@ -117,12 +117,16 @@ that you can reach when the operands fit comfortably in-memory:

 In this case, performance is a bit far from top-level libraries like Numexpr or Numba, but
 it is still pretty nice (and probably using CPUs with more cores than M2 would allow closing the
-performance gap even further).
+performance gap even further). One important thing to know is that the memory consumption when
+using the `LazyArray.eval()` method is very low, because the output is an `NDArray` object that
+is compressed and in-memory by default. On the other hand, the `LazyArray.__getitem__()` method
+returns an actual NumPy array, so it is not recommended for large datasets, as it will consume
+quite a bit of memory (but it can still be convenient for small outputs).

 It is important to note that the `NDArray` object can use memory-mapped files as well, and the
 benchmark above is actually using a memory-mapped file as the storage for the operands.
-Memory-mapped files are very useful when the operands do not fit in-memory, and the performance
-is still very good. Thanks to Jan Sellner for his implementation in Blosc2.
+Memory-mapped files are very useful when the operands do not fit in-memory, while keeping good
+performance. Thanks to Jan Sellner for his implementation in Blosc2.

 And here is the performance when the operands do not fit well in-memory:

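The added paragraph contrasts `LazyArray.eval()` with `LazyArray.__getitem__()`. Below is a minimal sketch of that difference; the operand sizes, the expression `(a + b) * 2`, and the use of `blosc2.asarray()` to build the operands are illustrative assumptions, not the benchmark code from the post.

```python
# Sketch: materializing a lazy expression in python-blosc2 in two ways.
# Shapes and the expression itself are made up for illustration.
import numpy as np
import blosc2

# Two compressed in-memory operands (assumed construction via asarray)
a = blosc2.asarray(np.linspace(0, 1, 10_000_000))
b = blosc2.asarray(np.linspace(1, 2, 10_000_000))

# Building the expression is cheap: no computation happens yet
lexpr = (a + b) * 2

# eval() keeps the result compressed: the output is an NDArray, so the
# extra memory needed is roughly the compressed size of the result
res_ndarray = lexpr.eval()

# __getitem__ decompresses into a plain NumPy array: convenient for small
# outputs, but it needs the full uncompressed size of the requested region
res_numpy = lexpr[:]
```

The design point is that `eval()` keeps the output in Blosc2's compressed `NDArray` format, so only `__getitem__()` pays the cost of a fully decompressed NumPy result.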
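The memory-mapping paragraph refers to storing operands in memory-mapped files. The sketch below shows one way this could look; the file name, array size, and the `mode`/`mmap_mode` arguments are assumptions based on that description, not code taken from the benchmark.

```python
# Sketch: an NDArray operand stored on disk and opened as a memory-mapped file.
import numpy as np
import blosc2

# Persist a compressed operand on disk (file name and size are made up)
big = np.arange(50_000_000, dtype=np.float64)
blosc2.asarray(big, urlpath="operand.b2nd", mode="w")

# Re-open it memory-mapped: the OS pages data in on demand, so operands
# that do not fit comfortably in-memory can still be used as inputs
op = blosc2.open("operand.b2nd", mmap_mode="r")

# The memory-mapped NDArray participates in lazy expressions like any other operand
expr = op + 1
out = expr.eval()   # compressed NDArray result, keeping memory usage low
```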