If io_send() is using bundles or ring provided buffers in general, then
it must call io_kbuf_recycle() for the -EAGAIN case that ends up punting
to polling. If it does not, a later send could pick a later buffer yet
complete before the previous send, if space has freed up in the socket
and it races with the poll retry of the previous send.

Link: https://github.com/axboe/liburing/discussions/1528
Cc: stable@vger.kernel.org
Fixes: a05d1f625c7a ("io_uring/net: support bundles for send")
Signed-off-by: Jens Axboe
---
diff --git a/io_uring/net.c b/io_uring/net.c
index 519ea055b761..b1c3b86539ee 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -673,8 +673,10 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	kmsg->msg.msg_flags = flags;
 	ret = sock_sendmsg(sock, &kmsg->msg);
 	if (ret < min_ret) {
-		if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+		if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK)) {
+			io_kbuf_recycle(req, sel.buf_list, issue_flags);
 			return -EAGAIN;
+		}
 
 		if (ret > 0 && io_net_retry(sock, flags)) {
 			sr->len -= ret;
-- 
Jens Axboe
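
For readers following along: the race the patch closes can be sketched in plain userspace C. This is a hypothetical simulation, not kernel code; `buf_acquire()`, `buf_recycle()`, and `next_send_buffer()` are made-up names standing in for ring-buffer handout, the `io_kbuf_recycle()` call the patch adds, and the second send racing in while the first waits on poll.

```c
/* Hypothetical userspace model of ring-provided buffer handout. */
enum { RING_ENTRIES = 4 };
static int ring_head;	/* index of the next buffer to hand out */

static int buf_acquire(void)
{
	return ring_head++ % RING_ENTRIES;
}

static void buf_recycle(void)
{
	/* Put the just-acquired buffer back at the front of the ring,
	 * mirroring what io_kbuf_recycle() does on the -EAGAIN path. */
	ring_head--;
}

/* Send A picks a buffer, sock_sendmsg() returns -EAGAIN, and A punts to
 * poll. Returns the buffer a racing send B then picks. */
static int next_send_buffer(int recycle_on_eagain)
{
	ring_head = 0;
	(void)buf_acquire();		/* send A takes buffer 0 */
	if (recycle_on_eagain)
		buf_recycle();		/* patched path: give buffer 0 back */
	return buf_acquire();		/* send B races in */
}
```

Without the recycle, `next_send_buffer(0)` returns 1: send B writes into the later buffer and can complete before A, so buffer order no longer matches completion order. With the recycle, `next_send_buffer(1)` returns 0: B reuses the buffer A gave back, and ordering is preserved.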