Commit 7d4123a
perf(cs): make inputBuffer size progressive
The current implementation of the WriteHighLevelOp class always asks
for maxBlocksPerHddWriteJob_ blocks for its next inputBuffer_, i.e. the
structure that receives the upcoming data blocks to be written to the
drive. This can become a significant waste of memory: a write high
level operation that writes only 1 block still asks for 32 blocks when
MAX_BLOCKS_PER_HDD_WRITE_JOB is set to that value, for instance.
The proposed change limits the memory used for any number of blocks
written in a single write high level operation to at most double the
strictly necessary amount. In the previous example, the write high
level operation asks for input buffers of 1, 2, 4, 8, 16, and 32
blocks, and then keeps asking for 32-block buffers.
The main downside is a possible increase in the number of drive
operations when the number of blocks written is between 2 and 15.
Signed-off-by: Dave <dave@leil.io>

1 parent: 5c742a6
File tree: 2 files changed in src/chunkserver (+7, -2 lines)