When a socket send and shutdown() happen back-to-back, both fire wake-ups
before the receiver's task_work has a chance to run. The first wake gets
poll ownership (poll_refs=1), and the second bumps it to 2. When
io_poll_check_events() runs, it calls io_poll_issue() which does a recv
that reads the data and returns IOU_RETRY. The loop then drains all
accumulated refs (atomic_sub_return(2) -> 0) and exits, even though only
the first event was consumed. Since the shutdown is a persistent state
change, no further wakeups will happen, and the multishot recv can hang
forever.

Fix this by only draining a single poll ref after io_poll_issue() returns
IOU_RETRY for the APOLL_MULTISHOT path. If additional wakes raced in
(poll_refs was > 1), the loop iterates again and vfs_poll() discovers the
remaining state.

Move the v &= IO_POLL_REF_MASK (drain all refs) into the non-APOLL
multishot poll path, since poll CQEs report the current mask state rather
than consuming individual events.

Cc: stable@vger.kernel.org
Fixes: dbc2564cfe0f ("io_uring: let fast poll support multishot")
Reported-by: francis
Link: https://github.com/axboe/liburing/issues/1549
Signed-off-by: Jens Axboe

---

diff --git a/io_uring/poll.c b/io_uring/poll.c
index b671b84657d9..0f0949d919e9 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -303,6 +303,7 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
 			io_req_set_res(req, mask, 0);
 			return IOU_POLL_REMOVE_POLL_USE_RES;
 		}
+		v &= IO_POLL_REF_MASK;
 	} else {
 		int ret = io_poll_issue(req, tw);
 
@@ -312,6 +313,11 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
 			return IOU_POLL_REQUEUE;
 		if (ret != IOU_RETRY && ret < 0)
 			return ret;
+		/*
+		 * One event consumed, but additional wakes may have
+		 * raced. Only drain a single ref.
+		 */
+		v = 1;
 	}
 
 	/* force the next iteration to vfs_poll() */
@@ -321,7 +327,6 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
 	 * Release all references, retry if someone tried to restart
 	 * task_work while we were executing it.
 	 */
-	v &= IO_POLL_REF_MASK;
 	} while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);
 
 	io_napi_add(req);

-- 
Jens Axboe