
Commit 517236c

Author: Kent Overstreet (authored and committed)

bcachefs: Kill read lock dropping in bch2_btree_node_lock_write_nofail()

Dropping read locks in bch2_btree_node_lock_write_nofail() dates from before we had the cycle detector; we can now tell the cycle detector directly that taking a lock may not fail, because we can't handle transaction restarts here. This is needed for adding should_be_locked asserts.

Signed-off-by: Kent Overstreet <[email protected]>
1 parent beccf29 commit 517236c

File tree

1 file changed: +1, −27 lines


fs/bcachefs/btree_locking.c

Lines changed: 1 addition & 27 deletions
@@ -440,33 +440,7 @@ void bch2_btree_node_lock_write_nofail(struct btree_trans *trans,
 				       struct btree_path *path,
 				       struct btree_bkey_cached_common *b)
 {
-	struct btree_path *linked;
-	unsigned i, iter;
-	int ret;
-
-	/*
-	 * XXX BIG FAT NOTICE
-	 *
-	 * Drop all read locks before taking a write lock:
-	 *
-	 * This is a hack, because bch2_btree_node_lock_write_nofail() is a
-	 * hack - but by dropping read locks first, this should never fail, and
-	 * we only use this in code paths where whatever read locks we've
-	 * already taken are no longer needed:
-	 */
-
-	trans_for_each_path(trans, linked, iter) {
-		if (!linked->nodes_locked)
-			continue;
-
-		for (i = 0; i < BTREE_MAX_DEPTH; i++)
-			if (btree_node_read_locked(linked, i)) {
-				btree_node_unlock(trans, linked, i);
-				btree_path_set_dirty(linked, BTREE_ITER_NEED_RELOCK);
-			}
-	}
-
-	ret = __btree_node_lock_write(trans, path, b, true);
+	int ret = __btree_node_lock_write(trans, path, b, true);
 	BUG_ON(ret);
 }
