
Commit 6122208

Fix new typos found by codespell
Parent: 6ada5f9

File tree

2 files changed: +6 -5 lines


bench/ndarray/matmul.ipynb

Lines changed: 4 additions & 4 deletions

@@ -189,7 +189,7 @@
 "source": [
 "**Key observations:**\n",
 "- Automatic chunking can optimize performance for smaller matrix sizes.\n",
-"- Choosing square chunks of 1000x1000 can achive the best performance for matrices of sizes greater than 2000x2000.\n",
+"- Choosing square chunks of 1000x1000 can achieve the best performance for matrices of sizes greater than 2000x2000.\n",
 "\n",
 "**Next experiment:**\n",
 "We will increment the chunks' size, as we have seen that better performance can be achieved with bigger chunks."
@@ -294,7 +294,7 @@
 "**Key observations:**\n",
 "- The best performance is achieved for the biggest chunk size.\n",
 "- The larger the chunk size, the higher the bandwidth.\n",
-"- If the chunk size is choosen automatically, the performance is better than choosing any other chunk size. This is weird, because if choosen automatically, chunks of size 1000x1000 are choosen, which is the same size as the fixed chunks.\n",
+"- If the chunk size is chosen automatically, the performance is better than choosing any other chunk size. This is weird, because if chosen automatically, chunks of size 1000x1000 are chosen, which is the same size as the fixed chunks.\n",
 "\n",
 "**Next experiment:**\n",
 "We will increment the chunks' size again, as we have seen that better performance can be achieved with bigger chunks."
@@ -304,7 +304,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Presicion simple"
+"Precision simple"
 ]
 },
 {
@@ -517,7 +517,7 @@
 "\n",
 "**Next experiment:**\n",
 "We are going to try with the same sizes for matrices and a square chunk size of 6000 to see if it improves the performance for that last matrix size.\n",
-"We will also remove chunk sizes of 1000 and 2000, and add a chunk size wich will be the same size as the matrix."
+"We will also remove chunk sizes of 1000 and 2000, and add a chunk size which will be the same size as the matrix."
 ]
 },
 {
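For context on the notebook hunks above: they compare Blosc2's automatic chunking against fixed square chunks for matrix multiplication. A minimal sketch of that comparison, assuming the python-blosc2 NDArray API used by the notebook (blosc2.asarray with a chunks= keyword and blosc2.matmul), might look like this; it is an illustration, not part of the commit:

import numpy as np
import blosc2

n = 4000  # a matrix larger than 2000x2000, where fixed chunks paid off

# Automatic chunking: Blosc2 picks a chunk shape based on the array size.
a = blosc2.asarray(np.random.rand(n, n))
b = blosc2.asarray(np.random.rand(n, n))
c_auto = blosc2.matmul(a, b)

# Fixed square chunks of 1000x1000, the shape the notebook found fastest
# for matrices above 2000x2000.
a_fix = blosc2.asarray(np.random.rand(n, n), chunks=(1000, 1000))
b_fix = blosc2.asarray(np.random.rand(n, n), chunks=(1000, 1000))
c_fix = blosc2.matmul(a_fix, b_fix)

Timing the two matmul calls is what produces the bandwidth figures the observations refer to.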

src/blosc2/lazyexpr.py

Lines changed: 2 additions & 1 deletion

@@ -1481,7 +1481,8 @@ def reduce_slices( # noqa: C901
 iter_disk = all_ndarray and any_persisted
 # Experiments say that iter_disk is faster than the regular path for reductions
 # even when all operands are in memory, so no need to check any_persisted
-# New benchs are saying the contrary (> 10% slower), so this needs more investigation
+# New benchmarks are saying the contrary (> 10% slower), so this needs more
+# investigation
 # iter_disk = all_ndarray
 else:
 # WebAssembly does not support threading, so we cannot use the iter_disk option
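For context on this hunk: per the iter_disk = all_ndarray and any_persisted line, the iter_disk path is only a candidate when every operand is an NDArray and at least one is persisted on disk. A minimal sketch of a workload that exercises that reduction path, assuming the usual python-blosc2 lazy-expression API (the file name is illustrative):

import numpy as np
import blosc2

# One operand persisted on disk, one in memory; both are NDArrays, so
# all_ndarray and any_persisted both hold for the reduction below.
a = blosc2.asarray(np.random.rand(1000, 1000), urlpath="a.b2nd", mode="w")
b = blosc2.asarray(np.random.rand(1000, 1000))

expr = a + b        # builds a lazy expression; nothing is computed yet
total = expr.sum()  # the reduction is evaluated via reduce_slices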
