Commit 9bf937a

mount: performance improvement
The efficiency difference between `meta.extend(bytes(N))` and `meta = meta + bytes(N)` stems from how Python manages memory and objects during these operations.

**Allocation behavior**

- **`bytearray.extend()`**: This is an **in-place** operation. If the memory block currently allocated for the `bytearray` has enough spare capacity (pre-allocated space), Python simply writes the new bytes into that space and updates the length. If it needs more space, it uses `realloc()`, which can often expand the existing memory block without moving the data to a new location.
- **Concatenation (`+`)**: This creates a **completely new** `bytearray` object. It allocates a new memory block large enough to hold both parts, copies the contents of `meta`, copies the contents of `bytes(N)`, and then rebinds the variable `meta` to this new object.

**Time complexity**

- **`bytearray.extend()`**: In the best case (when spare capacity exists), it is **O(K)**, where K is the number of bytes being added. In the worst case (reallocation), it is **O(N + K)**, but Python uses an over-allocation strategy (growth factor) that amortizes this cost, making it significantly faster on average.
- **Concatenation (`+`)**: It is always **O(N + K)** because it must copy the existing N bytes every single time. As the `bytearray` grows (e.g., to millions of items in a backup), this leads to **O(N²)** total time across repeated additions, because an ever-growing buffer is copied again and again.

**Memory usage**

- Concatenation briefly requires memory for **both** the old buffer and the new buffer simultaneously before the old one is garbage collected. This increases the peak memory usage of the process.
- `extend()` is more memory-efficient, as it minimizes the need for multiple large allocations and relies on the underlying memory manager's ability to resize buffers in place.
In the context of `borg mount`, where `meta` can grow to be many megabytes or even gigabytes for very large repositories, using concatenation causes a noticeable slowdown as the number of archives or files increases, whereas `extend()` remains performant.
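The asymptotic difference described above can be demonstrated with a small timing sketch. This is illustrative only, not borg code; `GROW_BY` and `ROUNDS` are made-up values standing in for borg's `GROW_META_BY` and its actual workload:

```python
import time

GROW_BY = 2 ** 16   # illustrative growth increment, not borg's actual GROW_META_BY
ROUNDS = 2000       # number of grow operations to time

def grow_by_concat(rounds: int) -> float:
    """Grow a bytearray via `meta = meta + bytes(N)`: copies all existing bytes each round."""
    meta = bytearray()
    start = time.perf_counter()
    for _ in range(rounds):
        meta = meta + bytes(GROW_BY)  # new object every iteration -> O(N + K) per round
    return time.perf_counter() - start

def grow_by_extend(rounds: int) -> float:
    """Grow a bytearray via `meta.extend(bytes(N))`: in-place, amortized by over-allocation."""
    meta = bytearray()
    start = time.perf_counter()
    for _ in range(rounds):
        meta.extend(bytes(GROW_BY))  # realloc-backed growth, amortized O(K) per round
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"concat: {grow_by_concat(ROUNDS):.3f}s  extend: {grow_by_extend(ROUNDS):.3f}s")
```

On a typical CPython build, the concatenation loop slows down noticeably as the buffer grows, while the `extend()` loop stays roughly linear in the total number of bytes written.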
1 parent c181845 commit 9bf937a

File tree: 1 file changed (+2, −2 lines)

src/borg/fuse.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -177,7 +177,7 @@ def iter_archive_items(self, archive_item_ids, filter=None):
         for key, (csize, data) in zip(archive_item_ids, self.decrypted_repository.get_many(archive_item_ids)):
             # Store the chunk ID in the meta-array
             if write_offset + 32 >= len(meta):
-                self.meta = meta = meta + bytes(self.GROW_META_BY)
+                meta.extend(bytes(self.GROW_META_BY))
             meta[write_offset : write_offset + 32] = key
             current_id_offset = write_offset
             write_offset += 32
@@ -215,7 +215,7 @@ def iter_archive_items(self, archive_item_ids, filter=None):
             msgpacked_bytes = b""

             if write_offset + 9 >= len(meta):
-                self.meta = meta = meta + bytes(self.GROW_META_BY)
+                meta.extend(bytes(self.GROW_META_BY))

             # item entries in the meta-array come in two different flavours, both nine bytes long.
             # (1) for items that span chunks:
```
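Note that the patch also drops the `self.meta = meta = ...` rebinding: because `extend()` mutates the `bytearray` in place, the attribute and the local alias keep referring to the same object, so no reassignment is needed. A minimal sketch of that aliasing, using a toy class rather than borg's actual FUSE code (the class name and constant value are illustrative):

```python
class MetaHolder:
    """Toy stand-in for the FUSE class; `meta` aliases `self.meta` as in iter_archive_items."""
    GROW_META_BY = 16  # illustrative value, not borg's actual constant

    def __init__(self):
        self.meta = bytearray()

    def grow(self) -> int:
        meta = self.meta                       # local alias to the same bytearray object
        meta.extend(bytes(self.GROW_META_BY))  # in-place: self.meta sees the growth too
        return len(self.meta)
```

With concatenation, `self.meta = meta = meta + bytes(...)` was required precisely because `+` produces a new object that the attribute would otherwise never see.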
