
Commit 5da0fb1

yangerkun authored and axboe committed
io_uring: consider the overflow of sequence for timeout req
Now we recalculate the sequence of a timeout request with 'req->sequence = ctx->cached_sq_head + count - 1', and we pick the insertion point in timeout_list by comparing how many completions each request still expects. But this does not consider overflow:

1. ctx->cached_sq_head + count - 1 may overflow, so a timeout req with a larger count can end up with a smaller req->sequence.

2. cached_sq_head may have overflowed since an earlier req was queued, which likewise leaves the new timeout req with a smaller req->sequence.

Either overflow misorders timeout_list, and a misordered timeout_list completes the timeout requests in the wrong order. Fix it by reusing req->submit.sequence to store the count, and by changing the insertion-sort logic in io_timeout().

Signed-off-by: yangerkun <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
1 parent 7a7c5e7 commit 5da0fb1
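To make overflow point 1 concrete: with a 32-bit sq head near the wrap point, unsigned arithmetic gives the request that must wait longer the smaller sequence. The standalone C sketch below is an editorial illustration with hypothetical values near UINT_MAX, not code from the commit:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* Hypothetical sq head just below the 32-bit wrap point. */
	unsigned int cached_sq_head = UINT_MAX - 1;

	/* Old computation: the larger count wraps to a smaller sequence. */
	unsigned int seq_a = cached_sq_head + 1 - 1;	/* count = 1 -> 4294967294 */
	unsigned int seq_b = cached_sq_head + 4 - 1;	/* count = 4 -> wraps to 1 */
	printf("seq_a=%u seq_b=%u misordered=%d\n", seq_a, seq_b, seq_b < seq_a);

	/* Widened computation, as the patch does: no wrap, order preserved. */
	long long tmp_a = (long long)cached_sq_head + 1 - 1;
	long long tmp_b = (long long)cached_sq_head + 4 - 1;
	printf("tmp_a=%lld tmp_b=%lld ordered=%d\n", tmp_a, tmp_b, tmp_a < tmp_b);
	return 0;
}

Compiled with any C compiler, this prints misordered=1 for the unsigned math and ordered=1 for the widened math: a timeout needing four more completions would wrongly sort before one needing only one.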


fs/io_uring.c

Lines changed: 21 additions & 6 deletions
@@ -1884,7 +1884,7 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 
 static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-	unsigned count, req_dist, tail_index;
+	unsigned count;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct list_head *entry;
 	struct timespec64 ts;
@@ -1907,21 +1907,36 @@ static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		count = 1;
 
 	req->sequence = ctx->cached_sq_head + count - 1;
+	/* reuse it to store the count */
+	req->submit.sequence = count;
 	req->flags |= REQ_F_TIMEOUT;
 
 	/*
 	 * Insertion sort, ensuring the first entry in the list is always
 	 * the one we need first.
 	 */
-	tail_index = ctx->cached_cq_tail - ctx->rings->sq_dropped;
-	req_dist = req->sequence - tail_index;
 	spin_lock_irq(&ctx->completion_lock);
 	list_for_each_prev(entry, &ctx->timeout_list) {
 		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
-		unsigned dist;
+		unsigned nxt_sq_head;
+		long long tmp, tmp_nxt;
 
-		dist = nxt->sequence - tail_index;
-		if (req_dist >= dist)
+		/*
+		 * Since cached_sq_head + count - 1 can overflow, use type long
+		 * long to store it.
+		 */
+		tmp = (long long)ctx->cached_sq_head + count - 1;
+		nxt_sq_head = nxt->sequence - nxt->submit.sequence + 1;
+		tmp_nxt = (long long)nxt_sq_head + nxt->submit.sequence - 1;
+
+		/*
+		 * cached_sq_head may overflow, and it will never overflow twice
+		 * once there is some timeout req still be valid.
+		 */
+		if (ctx->cached_sq_head < nxt_sq_head)
+			tmp += UINT_MAX;
+
+		if (tmp >= tmp_nxt)
 			break;
 	}
 	list_add(&req->list, entry);
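The insertion-sort comparison above can be exercised in isolation. The sketch below is a stand-alone model of the same arithmetic, assuming the wrap rebase applies to the new request's tmp as in the loop above; struct fake_timeout and expires_at_or_after are hypothetical stand-ins for the io_kiocb fields and the in-loop test, not kernel API:

#include <stdio.h>
#include <limits.h>

/* Hypothetical stand-in for the two io_kiocb fields the loop reads. */
struct fake_timeout {
	unsigned int sequence;	/* submit-time sq head + count - 1 (may have wrapped) */
	unsigned int count;	/* what the patch stashes in req->submit.sequence */
};

/*
 * Mirrors the in-loop test: nonzero when a request submitted at
 * cur_sq_head that waits for 'count' completions expires at or after 'nxt'.
 */
static int expires_at_or_after(unsigned int cur_sq_head, unsigned int count,
			       const struct fake_timeout *nxt)
{
	unsigned int nxt_sq_head = nxt->sequence - nxt->count + 1;
	long long tmp = (long long)cur_sq_head + count - 1;
	long long tmp_nxt = (long long)nxt_sq_head + nxt->count - 1;

	/* sq head wrapped since nxt was queued: rebase the new request. */
	if (cur_sq_head < nxt_sq_head)
		tmp += UINT_MAX;

	return tmp >= tmp_nxt;
}

int main(void)
{
	/* nxt queued at sq head UINT_MAX - 1 with count 4: sequence wrapped to 1. */
	struct fake_timeout nxt = { .sequence = 1, .count = 4 };

	/* A later request at sq head 5 (post-wrap), count 1: must sort after nxt. */
	printf("insert after nxt: %d\n", expires_at_or_after(5, 1, &nxt));
	return 0;
}

Here the rebase makes tmp 4294967300 against tmp_nxt 4294967297, so the program prints 1 and the post-wrap request correctly sorts after the older one; without the rebase, tmp would be 5 and it would wrongly sort first.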

0 commit comments