
Commit a8cf95f

isilence authored and axboe committed
io_uring: fix overflow handling regression
Because the single task locking series got reordered ahead of the timeout
and completion lock changes, two hunks inadvertently ended up using
__io_fill_cqe_req() rather than io_fill_cqe_req(). This meant that we
dropped overflow handling in those two spots. Reinstate the correct CQE
filling helper.

Fixes: f66f734 ("io_uring: skip spinlocking for ->task_complete")
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
1 parent e5f30f6 commit a8cf95f
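
For context on the fix: io_fill_cqe_req() is the overflow-aware variant, while __io_fill_cqe_req() only attempts to post the CQE directly into the CQ ring. The sketch below is simplified and inferred from the commit message rather than copied from the tree; the exact body and the io_req_cqe_overflow() fallback are assumptions about how the two helpers relate at this point in the series.

/*
 * Simplified sketch, not the actual kernel source: io_fill_cqe_req()
 * wraps the raw helper and falls back to the overflow path when the
 * CQ ring has no free slot.
 */
static inline bool io_fill_cqe_req(struct io_ring_ctx *ctx, struct io_kiocb *req)
{
	/* Fast path: try to write the CQE straight into the CQ ring. */
	if (likely(__io_fill_cqe_req(ctx, req)))
		return true;

	/* Ring full: stash the CQE on the overflow list so it is not lost. */
	return io_req_cqe_overflow(req);
}

With the raw __io_fill_cqe_req() in the two hunks below, a completion posted while the CQ ring was full would be dropped instead of being queued for overflow, which is the regression this commit reverses.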

File tree

2 files changed (+2, -2 lines)


io_uring/io_uring.c

Lines changed: 1 addition & 1 deletion
@@ -927,7 +927,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
 
 	io_cq_lock(ctx);
 	if (!(req->flags & REQ_F_CQE_SKIP))
-		__io_fill_cqe_req(ctx, req);
+		io_fill_cqe_req(ctx, req);
 
 	/*
 	 * If we're the last reference to this request, add to our locked

io_uring/rw.c

Lines changed: 1 addition & 1 deletion
@@ -1062,7 +1062,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			continue;
 
 		req->cqe.flags = io_put_kbuf(req, 0);
-		__io_fill_cqe_req(req->ctx, req);
+		io_fill_cqe_req(req->ctx, req);
 	}
 
 	if (unlikely(!nr_events))
