Commit 5a77200

adam900710 authored and kdave committed
btrfs: make btrfs_cleanup_ordered_extents() support large folios
When hitting a large folio, btrfs_cleanup_ordered_extents() will get the same large folio multiple times, clearing the same range again and again. Thankfully this is not causing anything wrong, just inefficiency.

This is caused by the fact that we're iterating folios using the old page index, thus we can hit the same large folio again and again.

Enhance it by increasing @index to the index of the folio end, and only increasing @index by 1 if we failed to grab a folio.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
1 parent f5e5249 commit 5a77200

File tree

1 file changed: 4 additions, 2 deletions


fs/btrfs/inode.c

Lines changed: 4 additions & 2 deletions

@@ -404,10 +404,12 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 
 	while (index <= end_index) {
 		folio = filemap_get_folio(inode->vfs_inode.i_mapping, index);
-		index++;
-		if (IS_ERR(folio))
+		if (IS_ERR(folio)) {
+			index++;
 			continue;
+		}
 
+		index = folio_end(folio) >> PAGE_SHIFT;
 		/*
 		 * Here we just clear all Ordered bits for every page in the
 		 * range, then btrfs_mark_ordered_io_finished() will handle
