From: Chuck Lever

While lock_sock is held during read_sock, incoming TCP segments land on
sk->sk_backlog rather than sk->sk_receive_queue. tls_rx_rec_wait()
inspects only sk_receive_queue, so backlog data remains invisible until
release_sock() drains it, forcing an extra workqueue cycle for records
that arrive during decryption.

Calling sk_flush_backlog() before tls_rx_rec_wait() moves backlog data
into sk_receive_queue, where tls_strp_check_rcv() can parse it
immediately. The existing tls_read_flush_backlog() call after
decryption is retained for TCP window management.

Acked-by: Alistair Francis
Reviewed-by: Hannes Reinecke
Signed-off-by: Chuck Lever
---
 net/tls/tls_sw.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 43d37b0e6d59..7e1560d5ab79 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2387,6 +2387,11 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 	} else {
 		struct tls_decrypt_arg darg;
 
+		/* Drain backlog so segments that arrived while the
+		 * lock was held appear on sk_receive_queue before
+		 * tls_rx_rec_wait waits for a new record.
+		 */
+		sk_flush_backlog(sk);
 		err = tls_rx_rec_wait(sk, NULL, true, released);
 		if (err <= 0)
 			goto read_sock_end;
-- 
2.52.0
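
(Not part of the patch: a minimal sketch of the pattern the change
relies on, for reviewers unfamiliar with the backlog mechanism. It
assumes a kernel context with <net/sock.h>; lock_sock(),
release_sock(), and sk_flush_backlog() are real kernel APIs, while
example_read_loop() and process_one_record() are hypothetical names
used only for illustration.)

	/* Sketch: while the socket lock is owned, softirq input is
	 * queued on sk->sk_backlog instead of sk->sk_receive_queue.
	 * Flushing the backlog inside the loop makes that data
	 * visible immediately, rather than only when release_sock()
	 * drains it at the end.
	 */
	static int example_read_loop(struct sock *sk)
	{
		int err;

		lock_sock(sk);
		for (;;) {
			/* Move any softirq-queued segments from
			 * sk_backlog onto sk_receive_queue before
			 * waiting for the next record.
			 */
			sk_flush_backlog(sk);

			err = process_one_record(sk); /* hypothetical */
			if (err <= 0)
				break;
		}
		release_sock(sk);
		return err;
	}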