From: Chuck Lever

While lock_sock() is held, incoming TCP segments land on sk->sk_backlog
rather than sk->sk_receive_queue. tls_rx_rec_wait() inspects only
sk_receive_queue, so backlog data remains invisible. For non-blocking
callers (read_sock, and recvmsg or splice_read with MSG_DONTWAIT) this
causes a spurious -EAGAIN. For blocking callers it forces an unnecessary
sleep/wakeup cycle.

Flush the backlog inside tls_rx_rec_wait() before checking
sk_receive_queue so the strparser can parse newly-arrived segments
immediately. On the next loop iteration tls_read_flush_backlog() may
redundantly flush, but this path is cold and the cost is negligible.

Suggested-by: Sabrina Dubroca
Reviewed-by: Hannes Reinecke
Signed-off-by: Chuck Lever
---
 net/tls/tls_sw.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 8fb2f2a93846..e15adf250d78 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1372,6 +1372,8 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
 	if (ret < 0)
 		return ret;
 
+	if (sk_flush_backlog(sk))
+		released = true;
 	if (!skb_queue_empty(&sk->sk_receive_queue)) {
 		/* Defer notification to the exit point;
 		 * this thread will consume the record
-- 
2.53.0