Commit 65bf382 (parent: a10b8a6)

More discussion for the benchmark

1 file changed: README.rst (7 additions, 3 deletions)
@@ -117,12 +117,16 @@ that you can reach when the operands fit comfortably in-memory:
 
 In this case, performance is a bit far from top-level libraries like Numexpr or Numba, but
 it is still pretty nice (and probably using CPUs with more cores than an M2 would allow closing the
-performance gap even further).
+performance gap even further). One important thing to know is that memory consumption when
+using the `LazyArray.eval()` method is very low, because the output is an `NDArray` object that
+is compressed and in-memory by default. On the other hand, the `LazyArray.__getitem__()` method
+returns an actual NumPy array, so it is not recommended for large datasets, as it will consume
+quite a bit of memory (but it can still be convenient for small outputs).
 
 It is important to note that the `NDArray` object can use memory-mapped files as well, and the
 benchmark above is actually using a memory-mapped file as the storage for the operands.
-Memory-mapped files are very useful when the operands do not fit in-memory, and the performance
-is still very good. Thanks to Jan Sellner for his implementation in Blosc2.
+Memory-mapped files are very useful when the operands do not fit in-memory, while keeping good
+performance. Thanks to Jan Sellner for his implementation in Blosc2.
 
 And here is the performance when the operands do not fit well in-memory:
 