This is an invitation to brainstorm. :-)
There is currently a mismatch between io_uring's and libuv's i/o models that would stop libuv from achieving maximal performance if it used io_uring for network i/o, particularly when it comes to receiving incoming connections and packets.
https://github.com/axboe/liburing/wiki/io_uring-and-networking-in-2023 describes best practices that can be summarized as:
- keep multiple i/o requests enqueued (mismatch: libuv's "firehose" approach to incoming connections/data)
- let io_uring manage buffers (mismatch: libuv punts memory management to the user)
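
For context, here is a minimal sketch of the current model using the existing `uv_read_start()` API. Every read goes through the user's allocation callback, one buffer at a time, so libuv never owns the buffers and never has a queue of outstanding requests that io_uring could take over:

```c
#include <stdlib.h>
#include <uv.h>

/* The user, not libuv, owns buffer allocation; called before every read. */
static void alloc_cb(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
  buf->base = malloc(suggested_size);
  buf->len = suggested_size;
}

/* Called whenever data arrives ("firehose"): there is no per-read request
 * object and no way to keep multiple reads enqueued. */
static void read_cb(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
  if (nread > 0) {
    /* process buf->base[0..nread) */
  }
  free(buf->base);
}

/* ... uv_read_start(stream, alloc_cb, read_cb); ... */
```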
I've been thinking about what needs to change and this is what I came up with so far:
- add request-based APIs for reading data and accepting connections, like `int uv_stream_read(uv_read_t* req, uv_stream_t* handle, uv_buf_t* bufs, size_t nbufs, unsigned flags, uv_stream_read_cb cb)` where `uv_stream_read_cb` is `void (*)(uv_read_t* req, ssize_t nread, uv_buf_t* bufs)`
- add a memory pool API that tells libuv it can allocate this much memory for buffers. Strawman proposal: `uv_loop_configure(loop, UV_LOOP_SET_BUFFER_POOL_SIZE, 8<<20)` with 0 meaning "pick a suitable default"
- introduce a flag that tells `uv_stream_read()` to ignore `bufs` and interpret `nbufs` as "bytes to read into buffer pool" (kind of ugly, suggestions welcome)
- introduce a new `int uv_release_buffers(loop, bufs, nbufs)` (maybe s/loop/handle/?) that tells libuv it's okay to reuse those buffer pool slices again. Alternatively: reclaim the memory automatically when `uv_stream_read_cb` returns, but then users may have to copy, or they hold on to the buffers longer than needed (inefficient if they queue up new requests in the callback). A sketch of how these pieces might fit together follows this list.
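
To make the strawman concrete, here is a hedged sketch of what the declarations could look like. None of this exists in libuv today: `uv_stream_read()`, `uv_stream_read_cb`, `uv_release_buffers()`, and `UV_LOOP_SET_BUFFER_POOL_SIZE` come from the proposal above, and `UV_STREAM_READ_POOL` is a hypothetical name for the "read into buffer pool" flag:

```c
/* Illustrative declarations only; nothing here is existing libuv API. */

typedef struct uv_read_s uv_read_t;  /* new request type, like uv_write_t */

typedef void (*uv_stream_read_cb)(uv_read_t* req, ssize_t nread, uv_buf_t* bufs);

/* Hypothetical flag name: with it set, `bufs` is ignored and `nbufs` is
 * reinterpreted as "bytes to read into the loop's buffer pool". */
enum { UV_STREAM_READ_POOL = 1 };

/* Queue a read request against the stream. */
int uv_stream_read(uv_read_t* req, uv_stream_t* handle, uv_buf_t* bufs,
                   size_t nbufs, unsigned flags, uv_stream_read_cb cb);

/* Tell libuv it may reuse these buffer pool slices. */
int uv_release_buffers(uv_loop_t* loop, uv_buf_t* bufs, size_t nbufs);
```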
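And a usage sketch under the same assumptions, showing the intended pattern: several pool-backed reads kept in flight so io_uring always has work, with slices handed back once consumed. The `NUM_INFLIGHT`/`READ_CHUNK` constants and `start_reading()` are illustrative, and it assumes `uv_read_t` is a complete type with a `handle` back-pointer, as on `uv_write_t`:

```c
#include <uv.h>

#define NUM_INFLIGHT 4          /* reads kept enqueued at all times */
#define READ_CHUNK (64 * 1024)  /* bytes per pool-backed read */

static uv_read_t reqs[NUM_INFLIGHT];

static void on_read(uv_read_t* req, ssize_t nread, uv_buf_t* bufs) {
  if (nread <= 0)
    return;  /* EOF or error; real code would tear down the stream */

  /* consume bufs[0].base[0..nread), then hand the pool slice back */
  uv_release_buffers(req->handle->loop, bufs, 1);

  /* immediately re-queue the request so the submission queue never runs dry */
  uv_stream_read(req, req->handle, NULL, READ_CHUNK,
                 UV_STREAM_READ_POOL, on_read);
}

static void start_reading(uv_loop_t* loop, uv_stream_t* stream) {
  /* let libuv allocate up to 8 MiB of receive buffers (0 = pick a default) */
  uv_loop_configure(loop, UV_LOOP_SET_BUFFER_POOL_SIZE, 8 << 20);

  /* keep multiple requests enqueued instead of the firehose approach */
  for (int i = 0; i < NUM_INFLIGHT; i++)
    uv_stream_read(&reqs[i], stream, NULL, READ_CHUNK,
                   UV_STREAM_READ_POOL, on_read);
}
```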
Libuv-on-Windows is internally already request-based, so it should be relatively easy to adapt.
Suggestions for improvements very welcome!