
Commit ef5c600

Dylan Yudaken authored and Jens Axboe committed
io_uring: always prep_async for drain requests
Drain requests all go through io_drain_req, which has a quick exit in case there is nothing pending (i.e. the drain is not useful). In that case it can issue the request immediately. However, for safety it queues it through task work.

The problem is that in this case the request is run asynchronously, but the async work has not been prepared through io_req_prep_async. This has not been a problem up to now, as the task work would always run before returning to userspace, so the user would not have a chance to race with it. However, with IORING_SETUP_DEFER_TASKRUN, this is no longer the case and the work might be deferred, giving userspace a chance to change data being referred to in the request.

Instead, _always_ prep_async for drain requests, which is simpler anyway and removes this issue.

Cc: [email protected]
Fixes: c0e0d6b ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
Signed-off-by: Dylan Yudaken <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
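For context, a minimal userspace sketch of the kind of submission the message describes, assuming liburing and a kernel with IORING_SETUP_DEFER_TASKRUN support. The program, the writev/IOSQE_IO_DRAIN choice, and the stack-allocated iovec array are illustrative assumptions, not taken from this commit. The point it tries to show: with deferred task work, the drained request may not have executed by the time submit returns, so per-request data that io_req_prep_async() would normally have copied up front (here, the iovec array) could still be read later, after userspace has modified it.

/* Hypothetical reproducer sketch, not part of this commit.
 * Assumes liburing and IORING_SETUP_DEFER_TASKRUN support (kernel 6.1+). */
#include <liburing.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring_params p = { 0 };
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char a[] = "hello ", b[] = "world\n";
	struct iovec iov[2] = {
		{ .iov_base = a, .iov_len = sizeof(a) - 1 },
		{ .iov_base = b, .iov_len = sizeof(b) - 1 },
	};

	/* DEFER_TASKRUN requires SINGLE_ISSUER */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	if (io_uring_queue_init_params(8, &ring, &p) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_writev(sqe, STDOUT_FILENO, iov, 2, 0);
	sqe->flags |= IOSQE_IO_DRAIN;	/* routes the request through io_drain_req() */
	io_uring_submit(&ring);

	/* With deferred task work the request may not have run yet. Reusing
	 * the iovec array here is the window the commit message describes:
	 * without io_req_prep_async() it was only read at execution time,
	 * which can now happen after control has returned to userspace. */
	memset(iov, 0, sizeof(iov));

	io_uring_wait_cqe(&ring, &cqe);	/* waiting runs the deferred task work */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}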
1 parent b00c51e commit ef5c600

File tree: 1 file changed (+8 lines, -10 lines)

io_uring/io_uring.c

Lines changed: 8 additions & 10 deletions

@@ -1765,17 +1765,12 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	}
 	spin_unlock(&ctx->completion_lock);
 
-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_defer_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_defer_failed(req, ret);
+		return;
 	}
 
 	spin_lock(&ctx->completion_lock);
@@ -2048,13 +2043,16 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_defer_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_defer_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}
