Commit f620af11c27b ("xsk: avoid double checking against rx queue being full")
addressed a case in copy mode, when working with a multi-buffer xdp_buff, where
we were peeking into the XSK Rx queue twice to find out whether there is space
to produce descriptors. Adjust the ZC path to follow the same principle.

Signed-off-by: Maciej Fijalkowski
---
 net/xdp/xsk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index ef265e45810c..cdf93728fcba 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -196,13 +196,13 @@ static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
 		goto err;
 	}
 
-	__xsk_rcv_zc(xs, xskb, len, contd);
+	__xsk_rcv_zc_safe(xs, xskb, len, contd);
 	xskb_list = &xskb->pool->xskb_list;
 
 	list_for_each_entry_safe(pos, tmp, xskb_list, list_node) {
 		if (list_is_singular(xskb_list))
 			contd = 0;
 		len = pos->xdp.data_end - pos->xdp.data;
-		__xsk_rcv_zc(xs, pos, len, contd);
+		__xsk_rcv_zc_safe(xs, pos, len, contd);
 		list_del_init(&pos->list_node);
 	}
-- 
2.43.0