
Commit 1c320d8

LiBaokun96 authored and tytso committed
ext4: fix zombie groups in average fragment size lists
Groups with no free blocks shouldn't be on any average fragment size list. However, when all blocks in a group are allocated (i.e., bb_fragments or bb_free is 0), we currently skip updating the average fragment size, which means the group isn't removed from its previous s_mb_avg_fragment_size[old] list. This creates "zombie" groups that are always skipped during traversal, since they can't satisfy any block allocation request, hurting traversal efficiency.

Therefore, when a group becomes completely full, bb_avg_fragment_size_order is now set to -1. If the old order was not -1, a removal is performed; if the new order is not -1, an insertion is performed.

Fixes: 196e402 ("ext4: improve cr 0 / cr 1 group scanning")
CC: [email protected]
Signed-off-by: Baokun Li <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: Zhang Yi <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Theodore Ts'o <[email protected]>
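To make the failure mode concrete, here is a minimal userspace sketch of the old and new update logic. It is an illustration, not the kernel code: struct group, toy_order(), old_update(), and new_update() are hypothetical stand-ins for ext4_group_info, mb_avg_fragment_size_order(), and the two versions of mb_update_avg_fragment_size(), with the list moves reduced to printf() calls.

/*
 * Minimal sketch of the zombie-group bug and its fix (C, userspace).
 * All names below are stand-ins for the real ext4 structures.
 */
#include <stdio.h>

struct group {
	int bb_free;                    /* free blocks in the group */
	int bb_fragments;               /* number of free extents */
	int bb_avg_fragment_size_order; /* list the group is on, -1 = none */
};

/* Stand-in for mb_avg_fragment_size_order(): any nonempty group lands
 * in bucket 0 here; the real function picks a power-of-two bucket
 * from bb_free / bb_fragments. */
static int toy_order(const struct group *grp)
{
	(void)grp;
	return 0;
}

/* Old behavior: bail out when the group is full, so a group that was
 * on list[old] is never removed and becomes a "zombie". */
static void old_update(struct group *grp)
{
	if (grp->bb_fragments == 0)
		return; /* BUG: grp stays linked on its old list */
	grp->bb_avg_fragment_size_order = toy_order(grp);
}

/* Fixed behavior: a full group maps to order -1; delist from the old
 * order if there was one, enlist on the new order if it is valid. */
static void new_update(struct group *grp)
{
	int old = grp->bb_avg_fragment_size_order;
	int new = grp->bb_fragments == 0 ? -1 : toy_order(grp);

	if (new == old)
		return;
	if (old >= 0)
		printf("list_del from list[%d]\n", old);
	grp->bb_avg_fragment_size_order = new;
	if (new >= 0)
		printf("list_add_tail to list[%d]\n", new);
}

int main(void)
{
	/* A group sitting on list[2] that has just become full. */
	struct group g = { .bb_free = 0, .bb_fragments = 0,
			   .bb_avg_fragment_size_order = 2 };

	old_update(&g); /* no-op: g is still linked on list[2] */
	new_update(&g); /* prints "list_del from list[2]" */
	return 0;
}

Compiled and run, the sketch shows old_update() leaving the full group linked on list[2], while new_update() delists it and adds it nowhere.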
1 parent e7f101a commit 1c320d8

File tree: 1 file changed, +18 −18 lines


fs/ext4/mballoc.c

Lines changed: 18 additions & 18 deletions
@@ -841,30 +841,30 @@ static void
 mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
-	int new_order;
+	int new, old;
 
-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_fragments == 0)
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN))
 		return;
 
-	new_order = mb_avg_fragment_size_order(sb,
-					grp->bb_free / grp->bb_fragments);
-	if (new_order == grp->bb_avg_fragment_size_order)
+	old = grp->bb_avg_fragment_size_order;
+	new = grp->bb_fragments == 0 ? -1 :
+	      mb_avg_fragment_size_order(sb, grp->bb_free / grp->bb_fragments);
+	if (new == old)
 		return;
 
-	if (grp->bb_avg_fragment_size_order != -1) {
-		write_lock(&sbi->s_mb_avg_fragment_size_locks[
-					grp->bb_avg_fragment_size_order]);
+	if (old >= 0) {
+		write_lock(&sbi->s_mb_avg_fragment_size_locks[old]);
 		list_del(&grp->bb_avg_fragment_size_node);
-		write_unlock(&sbi->s_mb_avg_fragment_size_locks[
-					grp->bb_avg_fragment_size_order]);
-	}
-	grp->bb_avg_fragment_size_order = new_order;
-	write_lock(&sbi->s_mb_avg_fragment_size_locks[
-				grp->bb_avg_fragment_size_order]);
-	list_add_tail(&grp->bb_avg_fragment_size_node,
-		&sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
-	write_unlock(&sbi->s_mb_avg_fragment_size_locks[
-				grp->bb_avg_fragment_size_order]);
+		write_unlock(&sbi->s_mb_avg_fragment_size_locks[old]);
+	}
+
+	grp->bb_avg_fragment_size_order = new;
+	if (new >= 0) {
+		write_lock(&sbi->s_mb_avg_fragment_size_locks[new]);
+		list_add_tail(&grp->bb_avg_fragment_size_node,
+			      &sbi->s_mb_avg_fragment_size[new]);
+		write_unlock(&sbi->s_mb_avg_fragment_size_locks[new]);
+	}
 }
 
 /*
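A note on the shape of the fix: each order has its own rwlock in s_mb_avg_fragment_size_locks[], so the move is done as two independent critical sections, the old order's lock around the list_del() and the new order's lock around the list_add_tail(). Treating -1 as "on no list" makes both steps conditional, which is what keeps a completely full group off every list.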
