
Commit da51e3d

mount: performance improvement
### Performance Comparison: `bytearray.extend()` vs. Concatenation

The efficiency difference between `bytearray.extend(bytes(N))` and `self.meta = self.meta + bytes(N)` stems from how Python manages memory and objects during these operations.

#### 1. In-Place Modification vs. Object Creation

- **`bytearray.extend()`**: This is an **in-place** operation. If the memory block currently allocated for the `bytearray` has enough spare capacity (pre-allocated space), Python simply writes the new bytes into that space and updates the length. If it needs more space, it uses `realloc()`, which can often expand the existing memory block without moving the entire data set to a new location.
- **Concatenation (`+`)**: This creates a **completely new** `bytearray` object. It allocates a new memory block large enough to hold the sum of both parts, copies the contents of `self.meta`, copies the contents of `bytes(N)`, and then rebinds the name `self.meta` to this new object.

#### 2. Computational Complexity

- **`bytearray.extend()`**: In the best case (when spare capacity exists), it is **O(K)**, where K is the number of bytes being added. In the worst case (reallocation), it is **O(N + K)**, but Python uses an over-allocation strategy (growth factor) that amortizes this cost, making it significantly faster on average.
- **Concatenation (`+`)**: It is always **O(N + K)** because it must copy the existing N bytes every single time. As the `bytearray` grows larger (e.g., millions of items in a backup), this leads to **O(N²)** total time complexity across multiple additions, because an ever-growing buffer is copied over and over.

#### 3. Memory Pressure and Garbage Collection

- Concatenation briefly requires memory for **both** the old buffer and the new buffer simultaneously, before the old one is garbage collected. This increases the peak memory usage of the process.
- `extend()` is more memory-efficient, as it minimizes the need for multiple large allocations and relies on the underlying memory manager's ability to resize buffers efficiently.

In the context of `borg mount`, where `self.meta` can grow to many megabytes or even gigabytes for very large repositories, concatenation causes a noticeable slowdown as the number of archives or files increases, whereas `extend()` remains performant.
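To make the difference concrete, here is a minimal micro-benchmark sketch (not part of this commit; `GROW_BY` and `N_ROUNDS` are illustrative values). It first shows that `extend()` mutates the same object while `+` rebinds to a new one, then times the two growth strategies:

```python
import time

GROW_BY = 4096    # illustrative growth step, in bytes
N_ROUNDS = 1000   # illustrative number of grow operations

# extend() mutates the existing object; + builds a new one and rebinds the name.
buf = bytearray(b"abc")
ident = id(buf)
buf.extend(b"def")
assert id(buf) == ident      # same object, modified in place
buf = buf + b"ghi"           # new bytearray; the old one becomes garbage

def grow_by_concat():
    buf = bytearray()
    for _ in range(N_ROUNDS):
        buf = buf + bytes(GROW_BY)   # copies the whole buffer each round: O(N + K)
    return buf

def grow_by_extend():
    buf = bytearray()
    for _ in range(N_ROUNDS):
        buf.extend(bytes(GROW_BY))   # amortized O(K) thanks to over-allocation
    return buf

for fn in (grow_by_concat, grow_by_extend):
    t0 = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.4f}s")
```

On a typical CPython build, the concatenation variant slows down super-linearly as `N_ROUNDS` grows, while the `extend()` variant stays roughly linear.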
1 parent 7d63832 commit da51e3d

File tree

1 file changed: +2 -2 lines changed

src/borg/fuse.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -161,7 +161,7 @@ def iter_archive_items(self, archive_item_ids, filter=None, consider_part_files=
         for key, (csize, data) in zip(archive_item_ids, self.decrypted_repository.get_many(archive_item_ids)):
             # Store the chunk ID in the meta-array
             if self.write_offset + 32 >= len(self.meta):
-                self.meta = self.meta + bytes(self.GROW_META_BY)
+                self.meta.extend(bytes(self.GROW_META_BY))
             self.meta[self.write_offset:self.write_offset + 32] = key
             current_id_offset = self.write_offset
             self.write_offset += 32
@@ -199,7 +199,7 @@ def iter_archive_items(self, archive_item_ids, filter=None, consider_part_files=
             msgpacked_bytes = b''

             if self.write_offset + 9 >= len(self.meta):
-                self.meta = self.meta + bytes(self.GROW_META_BY)
+                self.meta.extend(bytes(self.GROW_META_BY))

             # item entries in the meta-array come in two different flavours, both nine bytes long.
             # (1) for items that span chunks:
```
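For context, here is a minimal, self-contained sketch of the grow-then-write pattern the patched lines use. The `MetaArray` class and its `GROW_META_BY` value are illustrative stand-ins, not borg's actual code; only the 32-byte chunk-ID entry size is taken from the diff above:

```python
class MetaArray:
    """Illustrative stand-in for the meta-array bookkeeping in fuse.py."""

    GROW_META_BY = 2 * 1024 * 1024  # hypothetical growth step, in bytes

    def __init__(self):
        self.meta = bytearray()
        self.write_offset = 0

    def append_key(self, key: bytes) -> int:
        """Append a 32-byte chunk ID, growing the buffer in place if needed."""
        if self.write_offset + 32 >= len(self.meta):
            self.meta.extend(bytes(self.GROW_META_BY))  # no copy of existing data
        self.meta[self.write_offset:self.write_offset + 32] = key
        offset = self.write_offset
        self.write_offset += 32
        return offset
```

Because the buffer grows in fixed steps and `extend()` resizes it in place, appending stays cheap even as `self.meta` reaches many megabytes.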
