
Commit fd7449d

Viacheslav Dubeyko authored and Christian Brauner committed
ceph: fix generic/421 test failure
The generic/421 test fails to finish because of this hung task:

Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.894678] INFO: task kworker/u48:0:11 blocked for more than 122 seconds.
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.895403] Not tainted 6.13.0-rc5+ #1
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.895867] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.896633] task:kworker/u48:0 state:D stack:0 pid:11 tgid:11 ppid:2 flags:0x00004000
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.896641] Workqueue: writeback wb_workfn (flush-ceph-24)
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897614] Call Trace:
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897620]  <TASK>
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897629]  __schedule+0x443/0x16b0
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897637]  schedule+0x2b/0x140
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897640]  io_schedule+0x4c/0x80
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897643]  folio_wait_bit_common+0x11b/0x310
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897646]  ? _raw_spin_unlock_irq+0xe/0x50
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897652]  ? __pfx_wake_page_function+0x10/0x10
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897655]  __folio_lock+0x17/0x30
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897658]  ceph_writepages_start+0xca9/0x1fb0
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897663]  ? fsnotify_remove_queued_event+0x2f/0x40
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897668]  do_writepages+0xd2/0x240
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897672]  __writeback_single_inode+0x44/0x350
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897675]  writeback_sb_inodes+0x25c/0x550
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897680]  wb_writeback+0x89/0x310
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897683]  ? finish_task_switch.isra.0+0x97/0x310
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897687]  wb_workfn+0xb5/0x410
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897689]  process_one_work+0x188/0x3d0
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897692]  worker_thread+0x2b5/0x3c0
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897694]  ? __pfx_worker_thread+0x10/0x10
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897696]  kthread+0xe1/0x120
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897699]  ? __pfx_kthread+0x10/0x10
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897701]  ret_from_fork+0x43/0x70
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897705]  ? __pfx_kthread+0x10/0x10
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897707]  ret_from_fork_asm+0x1a/0x30
Jan  3 14:25:27 ceph-testing-0001 kernel: [  369.897711]  </TASK>

There are several issues here:

(1) ceph_kill_sb() doesn't wait for the flushing of all dirty folios/pages to finish, because of the racy nature of mdsc->stopping_blockers. As a result, mdsc->stopping becomes CEPH_MDSC_STOPPING_FLUSHED too early.

(2) The ceph_inc_osd_stopping_blocker(fsc->mdsc) call fails to increment mdsc->stopping_blockers. As a result, already locked folios/pages are never unlocked and the logic tries to lock the same page a second time.

(3) The folio_batch of dirty pages found by filemap_get_folios_tag() is not processed properly, so some dirty pages are simply never processed and we still have dirty folios/pages after unmount (see the sketch below).
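For context on issue (3), here is a minimal sketch of the canonical filemap_get_folios_tag()/folio_batch walk that writeback code is expected to perform. This is an illustration, not the ceph code: example_walk_dirty_folios() and its behaviour are hypothetical, and the real ceph_writepages_start() does much more per folio.

#include <linux/pagemap.h>
#include <linux/pagevec.h>

/*
 * Hypothetical illustration of the dirty-folio batch walk issue (3) refers
 * to: every folio returned in the batch must be handled (written back,
 * redirtied, or skipped and unlocked), and folio_batch_release() must drop
 * the batch's references before the next round, otherwise dirty folios are
 * silently left behind.
 */
static void example_walk_dirty_folios(struct address_space *mapping,
                                      pgoff_t start, pgoff_t end)
{
        struct folio_batch fbatch;
        unsigned int i, nr;

        folio_batch_init(&fbatch);
        while ((nr = filemap_get_folios_tag(mapping, &start, end,
                                            PAGECACHE_TAG_DIRTY, &fbatch))) {
                for (i = 0; i < nr; i++) {
                        struct folio *folio = fbatch.folios[i];

                        folio_lock(folio);
                        /* ...start writeback or skip the folio here... */
                        folio_unlock(folio);
                }
                /* drop the folio references before the next pass */
                folio_batch_release(&fbatch);
                cond_resched();
        }
}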
This patch fixes the issues by:

(1) introducing a dirty_folios counter and a flush_end_wq waiting queue in struct ceph_mds_client;
(2) incrementing the dirty_folios counter in ceph_dirty_folio();
(3) decrementing the dirty_folios counter in writepages_finish() and waking up all waiters on the queue when the counter drops to zero or below;
(4) adding logic to ceph_kill_sb() that checks the dirty_folios counter and waits if it is bigger than zero;
(5) calling ceph_inc_osd_stopping_blocker() at the beginning of ceph_writepages_start() and ceph_dec_osd_stopping_blocker() at its end, to resolve the racy nature of mdsc->stopping_blockers.

sudo ./check generic/421
FSTYP         -- ceph
PLATFORM      -- Linux/x86_64 ceph-testing-0001 6.13.0+ #137 SMP PREEMPT_DYNAMIC Mon Feb  3 20:30:08 UTC 2025
MKFS_OPTIONS  -- 127.0.0.1:40551:/scratch
MOUNT_OPTIONS -- -o name=fs,secret=<secret>,ms_mode=crc,nowsync,copyfrom 127.0.0.1:40551:/scratch /mnt/scratch

generic/421 7s ... 4s
Ran: generic/421
Passed all 1 tests

Signed-off-by: Viacheslav Dubeyko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Tested-by: David Howells <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
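For readers unfamiliar with the helpers used in steps (3) and (4), this is a small, self-contained sketch (hypothetical example_* names, not the patch itself) of the counter + wait-queue pattern, and of how the wait_event_killable_timeout() return value distinguishes success, timeout, and a fatal signal, which is what the ceph_kill_sb() hunk below checks:

#include <linux/atomic.h>
#include <linux/printk.h>
#include <linux/wait.h>

static atomic64_t example_dirty_folios = ATOMIC64_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(example_flush_end_wq);

/* writeback-completion side: one folio finished, maybe wake the waiter */
static void example_folio_written(void)
{
        if (atomic64_dec_return(&example_dirty_folios) <= 0)
                wake_up_all(&example_flush_end_wq);
}

/* unmount side: wait for the counter to drain, bounded by a timeout */
static void example_wait_for_flush(unsigned long timeout_jiffies)
{
        long timeleft;

        if (atomic64_read(&example_dirty_folios) <= 0)
                return;

        timeleft = wait_event_killable_timeout(example_flush_end_wq,
                        atomic64_read(&example_dirty_folios) <= 0,
                        timeout_jiffies);
        if (!timeleft)          /* 0: the timeout expired first */
                pr_warn("example: flush wait timed out\n");
        else if (timeleft < 0)  /* -ERESTARTSYS: a fatal signal arrived */
                pr_warn("example: flush wait was killed, %ld\n", timeleft);
        /* > 0: all dirty folios were flushed within the timeout */
}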
1 parent 1551ec6 commit fd7449d

File tree

4 files changed: 35 additions & 1 deletion


fs/ceph/addr.c

Lines changed: 19 additions & 1 deletion
@@ -82,6 +82,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
        struct inode *inode = mapping->host;
        struct ceph_client *cl = ceph_inode_to_client(inode);
+       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
        struct ceph_inode_info *ci;
        struct ceph_snap_context *snapc;

@@ -92,6 +93,8 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
                return false;
        }

+       atomic64_inc(&mdsc->dirty_folios);
+
        ci = ceph_inode(inode);

        /* dirty the head */
@@ -894,6 +897,7 @@ static void writepages_finish(struct ceph_osd_request *req)
        struct ceph_snap_context *snapc = req->r_snapc;
        struct address_space *mapping = inode->i_mapping;
        struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
+       struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
        unsigned int len = 0;
        bool remove_page;

@@ -949,6 +953,12 @@ static void writepages_finish(struct ceph_osd_request *req)

                ceph_put_snap_context(detach_page_private(page));
                end_page_writeback(page);
+
+               if (atomic64_dec_return(&mdsc->dirty_folios) <= 0) {
+                       wake_up_all(&mdsc->flush_end_wq);
+                       WARN_ON(atomic64_read(&mdsc->dirty_folios) < 0);
+               }
+
                doutc(cl, "unlocking %p\n", page);

                if (remove_page)
@@ -1660,13 +1670,18 @@ static int ceph_writepages_start(struct address_space *mapping,

        ceph_init_writeback_ctl(mapping, wbc, &ceph_wbc);

+       if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
+               rc = -EIO;
+               goto out;
+       }
+
 retry:
        rc = ceph_define_writeback_range(mapping, wbc, &ceph_wbc);
        if (rc == -ENODATA) {
                /* hmm, why does writepages get called when there
                   is no dirty data? */
                rc = 0;
-               goto out;
+               goto dec_osd_stopping_blocker;
        }

        if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
@@ -1756,6 +1771,9 @@ static int ceph_writepages_start(struct address_space *mapping,
        if (wbc->range_cyclic || (ceph_wbc.range_whole && wbc->nr_to_write > 0))
                mapping->writeback_index = ceph_wbc.index;

+dec_osd_stopping_blocker:
+       ceph_dec_osd_stopping_blocker(fsc->mdsc);
+
 out:
        ceph_put_snap_context(ceph_wbc.last_snapc);
        doutc(cl, "%llx.%llx dend - startone, rc = %d\n", ceph_vinop(inode),

fs/ceph/mds_client.c

Lines changed: 2 additions & 0 deletions
@@ -5489,6 +5489,8 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
        spin_lock_init(&mdsc->stopping_lock);
        atomic_set(&mdsc->stopping_blockers, 0);
        init_completion(&mdsc->stopping_waiter);
+       atomic64_set(&mdsc->dirty_folios, 0);
+       init_waitqueue_head(&mdsc->flush_end_wq);
        init_waitqueue_head(&mdsc->session_close_wq);
        INIT_LIST_HEAD(&mdsc->waiting_for_map);
        mdsc->quotarealms_inodes = RB_ROOT;

fs/ceph/mds_client.h

Lines changed: 3 additions & 0 deletions
@@ -458,6 +458,9 @@ struct ceph_mds_client {
        atomic_t stopping_blockers;
        struct completion stopping_waiter;

+       atomic64_t dirty_folios;
+       wait_queue_head_t flush_end_wq;
+
        atomic64_t quotarealms_count; /* # realms with quota */
        /*
         * We keep a list of inodes we don't see in the mountpoint but that we

fs/ceph/super.c

Lines changed: 11 additions & 0 deletions
@@ -1563,6 +1563,17 @@ static void ceph_kill_sb(struct super_block *s)
         */
        sync_filesystem(s);

+       if (atomic64_read(&mdsc->dirty_folios) > 0) {
+               wait_queue_head_t *wq = &mdsc->flush_end_wq;
+               long timeleft = wait_event_killable_timeout(*wq,
+                               atomic64_read(&mdsc->dirty_folios) <= 0,
+                               fsc->client->options->mount_timeout);
+               if (!timeleft) /* timed out */
+                       pr_warn_client(cl, "umount timed out, %ld\n", timeleft);
+               else if (timeleft < 0) /* killed */
+                       pr_warn_client(cl, "umount was killed, %ld\n", timeleft);
+       }
+
        spin_lock(&mdsc->stopping_lock);
        mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHING;
        wait = !!atomic_read(&mdsc->stopping_blockers);
