
Commit 15fc0be

adam900710 authored and kdave committed
btrfs: make btrfs_cleanup_ordered_extents() support large folios
When hitting a large folio, btrfs_cleanup_ordered_extents() will get the same large folio multiple times, clearing the same range again and again. Thankfully this causes nothing wrong, just inefficiency.

This is caused by the fact that we iterate folios using the old page index, and can thus hit the same large folio repeatedly.

Enhance it by advancing @index to the index at the folio's end, and only increasing @index by 1 if we failed to grab a folio.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
1 parent ad580df commit 15fc0be

File tree

1 file changed: +4 −2 lines changed


fs/btrfs/inode.c

Lines changed: 4 additions & 2 deletions
@@ -401,10 +401,12 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 
 	while (index <= end_index) {
 		folio = filemap_get_folio(inode->vfs_inode.i_mapping, index);
-		index++;
-		if (IS_ERR(folio))
+		if (IS_ERR(folio)) {
+			index++;
 			continue;
+		}
 
+		index = folio_end(folio) >> PAGE_SHIFT;
 		/*
 		 * Here we just clear all Ordered bits for every page in the
 		 * range, then btrfs_mark_ordered_io_finished() will handle
