
Commit d425207

Mikulas Patocka authored and torvalds committed
add barriers to buffer_uptodate and set_buffer_uptodate
Let's have a look at this piece of code in __bread_slow:

	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;
	submit_bh(REQ_OP_READ, 0, bh);
	wait_on_buffer(bh);
	if (buffer_uptodate(bh))
		return bh;

Neither wait_on_buffer nor buffer_uptodate contains any memory barrier. Consequently, if someone calls sb_bread and then reads the buffer data, the read of the buffer data may be executed before wait_on_buffer(bh) on architectures with weak memory ordering, and it may return invalid data.

Fix this bug by adding a memory barrier to set_buffer_uptodate and an acquire barrier to buffer_uptodate (similarly to folio_test_uptodate and folio_mark_uptodate).

Signed-off-by: Mikulas Patocka <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
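To make the ordering argument concrete, here is a minimal userspace C11 model of the barrier pairing this patch introduces. It is illustrative only, not kernel code: payload, state, BH_UPTODATE and completion_thread are stand-ins. The release fence followed by a relaxed RMW models smp_mb__before_atomic() + set_bit(); the acquire load models smp_load_acquire().

	#include <stdatomic.h>
	#include <stdio.h>
	#include <threads.h>

	static int payload;		/* stands in for the buffer data (bh->b_data) */
	static atomic_ulong state;	/* stands in for bh->b_state */
	#define BH_UPTODATE (1UL << 0)

	/* I/O completion side: models set_buffer_uptodate() after the patch. */
	static int completion_thread(void *arg)
	{
		(void)arg;
		payload = 42;		/* the "disk data" is written first */
		/* release fence models smp_mb__before_atomic() ... */
		atomic_thread_fence(memory_order_release);
		/* ... and the relaxed RMW models set_bit(BH_Uptodate, ...) */
		atomic_fetch_or_explicit(&state, BH_UPTODATE, memory_order_relaxed);
		return 0;
	}

	int main(void)
	{
		thrd_t t;

		thrd_create(&t, completion_thread, NULL);

		/*
		 * Models buffer_uptodate(): the acquire load pairs with the
		 * release fence above, so once the flag is observed set, the
		 * payload write is guaranteed to be visible too.
		 */
		while (!(atomic_load_explicit(&state, memory_order_acquire) & BH_UPTODATE))
			;
		printf("payload = %d\n", payload);	/* always 42 */

		thrd_join(t, NULL);
		return 0;
	}

If the load were memory_order_relaxed instead, a weakly ordered CPU would be free to satisfy the payload read before the flag read, which is exactly the bug the commit message describes.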
1 parent e394ff8 commit d425207

File tree

1 file changed: +24 −1 lines


include/linux/buffer_head.h

Lines changed: 24 additions & 1 deletion
@@ -118,7 +118,6 @@ static __always_inline int test_clear_buffer_##name(struct buffer_head *bh) \
  * of the form "mark_buffer_foo()".  These are higher-level functions which
  * do something in addition to setting a b_state bit.
  */
-BUFFER_FNS(Uptodate, uptodate)
 BUFFER_FNS(Dirty, dirty)
 TAS_BUFFER_FNS(Dirty, dirty)
 BUFFER_FNS(Lock, locked)
@@ -136,6 +135,30 @@ BUFFER_FNS(Meta, meta)
 BUFFER_FNS(Prio, prio)
 BUFFER_FNS(Defer_Completion, defer_completion)
 
+static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
+{
+	/*
+	 * make it consistent with folio_mark_uptodate
+	 * pairs with smp_load_acquire in buffer_uptodate
+	 */
+	smp_mb__before_atomic();
+	set_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
+{
+	clear_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline int buffer_uptodate(const struct buffer_head *bh)
+{
+	/*
+	 * make it consistent with folio_test_uptodate
+	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
+	 */
+	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
+}
+
 #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
 
 /* If we *know* page->private refers to buffer_heads */
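With the new accessors in place, the __bread_slow sequence quoted in the commit message is safe for its callers. A sketch of a typical caller follows; read_meta_block is a hypothetical helper written for illustration, while sb_bread, brelse and b_data are the real buffer-cache API:

	#include <linux/buffer_head.h>
	#include <linux/errno.h>
	#include <linux/string.h>

	/*
	 * Hypothetical helper: read one metadata block. sb_bread() ends up
	 * in __bread_slow(), whose buffer_uptodate() check now has acquire
	 * semantics, so the memcpy() below cannot observe stale bh->b_data
	 * contents on weakly ordered architectures.
	 */
	static int read_meta_block(struct super_block *sb, sector_t blocknr,
				   void *dst, size_t len)
	{
		struct buffer_head *bh = sb_bread(sb, blocknr);

		if (!bh)
			return -EIO;
		memcpy(dst, bh->b_data, len);
		brelse(bh);
		return 0;
	}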
