
Commit 794d8cf

dhowells authored and brauner committed
netfs: Report on NULL folioq in netfs_writeback_unlock_folios()
It seems that it's possible to get to netfs_writeback_unlock_folios() with an empty rolling buffer during buffered writes. This should not be possible as the rolling buffer is initialised as the write request is set up and thereafter maintains at least one folio_queue struct therein until it gets destroyed. This allows lockless addition and removal of folio_queue structs in the buffer because, unlike with a ring buffer, the producer and consumer each only need to look at and alter one pointer into the buffer.

Now, the rolling buffer is only used for buffered I/O operations as netfs_collect_write_results() should only call netfs_writeback_unlock_folios() if the request is of origin type NETFS_WRITEBACK, NETFS_WRITETHROUGH or NETFS_PGPRIV2_COPY_TO_CACHE.

So it would seem that one of the following occurred: (1) I/O started before the request was fully initialised, (2) the origin got switched mid-flow or (3) the request has already been freed and this is a UAF error. I think the last is the most likely.

Make netfs_writeback_unlock_folios() report information about the request and subrequests if folioq is seen to be NULL to try and help debug this, throw a warning and return.

Note that this does not try to fix the problem.

Reported-by: [email protected]
Link: https://syzkaller.appspot.com/bug?extid=af5c06208fa71bf31b16
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]/
Link: https://lore.kernel.org/r/[email protected]
cc: Chang Yu <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
1 parent 3c49e52 commit 794d8cf

File tree

1 file changed

+34
-0
lines changed


fs/netfs/write_collect.c

Lines changed: 34 additions & 0 deletions
@@ -21,6 +21,34 @@
 #define NEED_RETRY		0x10	/* A front op requests retrying */
 #define SAW_FAILURE		0x20	/* One stream or hit a permanent failure */
 
+static void netfs_dump_request(const struct netfs_io_request *rreq)
+{
+	pr_err("Request R=%08x r=%d fl=%lx or=%x e=%ld\n",
+	       rreq->debug_id, refcount_read(&rreq->ref), rreq->flags,
+	       rreq->origin, rreq->error);
+	pr_err("  st=%llx tsl=%zx/%llx/%llx\n",
+	       rreq->start, rreq->transferred, rreq->submitted, rreq->len);
+	pr_err("  cci=%llx/%llx/%llx\n",
+	       rreq->cleaned_to, rreq->collected_to, atomic64_read(&rreq->issued_to));
+	pr_err("  iw=%pSR\n", rreq->netfs_ops->issue_write);
+	for (int i = 0; i < NR_IO_STREAMS; i++) {
+		const struct netfs_io_subrequest *sreq;
+		const struct netfs_io_stream *s = &rreq->io_streams[i];
+
+		pr_err("  str[%x] s=%x e=%d acnf=%u,%u,%u,%u\n",
+		       s->stream_nr, s->source, s->error,
+		       s->avail, s->active, s->need_retry, s->failed);
+		pr_err("  str[%x] ct=%llx t=%zx\n",
+		       s->stream_nr, s->collected_to, s->transferred);
+		list_for_each_entry(sreq, &s->subrequests, rreq_link) {
+			pr_err("  sreq[%x:%x] sc=%u s=%llx t=%zx/%zx r=%d f=%lx\n",
+			       sreq->stream_nr, sreq->debug_index, sreq->source,
+			       sreq->start, sreq->transferred, sreq->len,
+			       refcount_read(&sreq->ref), sreq->flags);
+		}
+	}
+}
+
 /*
  * Successful completion of write of a folio to the server and/or cache. Note
  * that we are not allowed to lock the folio here on pain of deadlocking with
@@ -87,6 +115,12 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
 	unsigned long long collected_to = wreq->collected_to;
 	unsigned int slot = wreq->buffer.first_tail_slot;
 
+	if (WARN_ON_ONCE(!folioq)) {
+		pr_err("[!] Writeback unlock found empty rolling buffer!\n");
+		netfs_dump_request(wreq);
+		return;
+	}
+
 	if (wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) {
 		if (netfs_pgpriv2_unlock_copied_folios(wreq))
 			*notes |= MADE_PROGRESS;