Commit c16bda3

io_uring/poll: allow some retries for poll triggering spuriously
If we get woken spuriously when polling and fail the operation with
-EAGAIN again, then we generally only allow polling again if data had
been transferred at some point. This is indicated with REQ_F_PARTIAL_IO.
However, if the spurious poll triggers when the socket was originally
empty, then we haven't transferred data yet and we will fail the poll
re-arm. This either punts the socket to io-wq if it's blocking, or it
fails the request with -EAGAIN if it's not. Neither outcome is
desirable: the former slows things down, while the latter confuses the
application.

We want to ensure that a repeated poll trigger doesn't lead to infinite
work making no progress; that's what the REQ_F_PARTIAL_IO check was
for. But it doesn't protect against a loop after the first receive, and
it's unnecessarily strict if we started out with an empty socket.

Add a somewhat arbitrary retry count, just to put an upper limit on the
potential number of retries that will be done. This should be high
enough that we won't really hit it in practice, unless something needs
to be aborted anyway.

Cc: [email protected] # v5.10+
Link: axboe/liburing#364
Signed-off-by: Jens Axboe <[email protected]>
1 parent 7605c43 commit c16bda3
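
To make the fix concrete before reading the diff: the change is simply a bounded retry budget. Below is a standalone userspace sketch of the same pattern, assuming nothing beyond POSIX poll()/recv(); recv_bounded() and MAX_SPURIOUS_RETRIES are illustrative names, not io_uring API.

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

#define MAX_SPURIOUS_RETRIES	128	/* mirrors APOLL_MAX_RETRY below */

/*
 * Wait for fd to become readable, then receive into buf. A poll()
 * wakeup can be spurious (recv() still returns EAGAIN); instead of
 * giving up immediately, retry up to a fixed bound so a misbehaving
 * socket can't spin us forever.
 */
static ssize_t recv_bounded(int fd, void *buf, size_t len)
{
	int retries = MAX_SPURIOUS_RETRIES;

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLIN };
		ssize_t ret;

		if (poll(&pfd, 1, -1) < 0)
			return -1;
		ret = recv(fd, buf, len, MSG_DONTWAIT);
		if (ret >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
			return ret;
		/* Spurious wakeup: allow it, but only so many times. */
		if (!--retries) {
			errno = EAGAIN;
			return -1;
		}
	}
}

The kernel change applies the same kind of budget to the async poll arming path, as the diff below shows.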

File tree

2 files changed, 13 insertions(+), 2 deletions(-)

io_uring/poll.c

Lines changed: 12 additions & 2 deletions
@@ -650,6 +650,14 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
 	__io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
 }
 
+/*
+ * We can't reliably detect loops in repeated poll triggers and issue
+ * subsequently failing. But rather than fail these immediately, allow a
+ * certain amount of retries before we give up. Given that this condition
+ * should _rarely_ trigger even once, we should be fine with a larger value.
+ */
+#define APOLL_MAX_RETRY	128
+
 static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
 					     unsigned issue_flags)
 {
@@ -665,14 +673,18 @@ static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
 		if (entry == NULL)
 			goto alloc_apoll;
 		apoll = container_of(entry, struct async_poll, cache);
+		apoll->poll.retries = APOLL_MAX_RETRY;
 	} else {
 alloc_apoll:
 		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
 		if (unlikely(!apoll))
 			return NULL;
+		apoll->poll.retries = APOLL_MAX_RETRY;
 	}
 	apoll->double_poll = NULL;
 	req->apoll = apoll;
+	if (unlikely(!--apoll->poll.retries))
+		return NULL;
 	return apoll;
 }
 
@@ -694,8 +706,6 @@ int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
 		return IO_APOLL_ABORTED;
 	if (!file_can_poll(req->file))
 		return IO_APOLL_ABORTED;
-	if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
-		return IO_APOLL_ABORTED;
 	if (!(req->flags & REQ_F_APOLL_MULTISHOT))
 		mask |= EPOLLONESHOT;
 
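For readability, here is roughly how io_req_alloc_apoll() reads with this patch applied, stitched together from the hunks above. The REQ_F_POLLED branch at the top lies outside the changed lines and is reproduced as an assumption about the unchanged surrounding context: it reuses req->apoll without resetting retries, which is what lets the counter tick down across repeated spurious triggers.

static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
					     unsigned issue_flags)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct async_poll *apoll;
	struct io_cache_entry *entry;

	if (req->flags & REQ_F_POLLED) {
		/*
		 * Re-arm of an already-polled request: reuse the existing
		 * apoll and, crucially, keep its current retries count.
		 */
		apoll = req->apoll;
		kfree(apoll->double_poll);
	} else if (!(issue_flags & IO_URING_F_UNLOCKED)) {
		entry = io_alloc_cache_get(&ctx->apoll_cache);
		if (entry == NULL)
			goto alloc_apoll;
		apoll = container_of(entry, struct async_poll, cache);
		apoll->poll.retries = APOLL_MAX_RETRY;
	} else {
alloc_apoll:
		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
		if (unlikely(!apoll))
			return NULL;
		apoll->poll.retries = APOLL_MAX_RETRY;
	}
	apoll->double_poll = NULL;
	req->apoll = apoll;
	/* Budget exhausted: fail the arm; NULL aborts it in the caller. */
	if (unlikely(!--apoll->poll.retries))
		return NULL;
	return apoll;
}

With this in place, the removed REQ_F_PARTIAL_IO check in io_arm_poll_handler() is no longer needed: a NULL return from io_req_alloc_apoll() makes the caller abort the arm (IO_APOLL_ABORTED), so spurious re-arms are capped at APOLL_MAX_RETRY instead of being refused outright whenever no data had been transferred yet.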
io_uring/poll.h

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ struct io_poll {
 	struct file			*file;
 	struct wait_queue_head		*head;
 	__poll_t			events;
+	int				retries;
 	struct wait_queue_entry		wait;
 };
 