From: Jason Xing

Only set xmit.more to false for the last skb.

In theory, making xmit.more false only for the last packet sent in each
round brings a clear benefit: it avoids triggering too many IRQs.
Compared to the earlier numbers for batch mode, a huge improvement (26%)
can be seen on the i40e/ixgbe drivers, since triggering IRQs is
expensive.

Suggested-by: Jesper Dangaard Brouer
Signed-off-by: Jason Xing
---
 net/core/dev.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 32de76c79d29..549a95b9d96f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4797,14 +4797,18 @@ int xsk_direct_xmit_batch(struct xdp_sock *xs, struct net_device *dev)
 {
 	u16 queue_id = xs->queue_id;
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_id);
+	struct sk_buff_head *send_queue = &xs->batch.send_queue;
 	int ret = NETDEV_TX_BUSY;
 	struct sk_buff *skb;
+	bool more = true;
 
 	local_bh_disable();
 	HARD_TX_LOCK(dev, txq, smp_processor_id());
-	while ((skb = __skb_dequeue(&xs->batch.send_queue)) != NULL) {
+	while ((skb = __skb_dequeue(send_queue)) != NULL) {
+		if (!skb_peek(send_queue))
+			more = false;
 		skb_set_queue_mapping(skb, queue_id);
-		ret = netdev_start_xmit(skb, dev, txq, false);
+		ret = netdev_start_xmit(skb, dev, txq, more);
 		if (ret != NETDEV_TX_OK)
 			break;
 	}
-- 
2.41.3
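
For context, here is a minimal sketch of the driver side of this
optimization, not part of the patch itself. netdev_xmit_more(),
netif_xmit_stopped() and writel() are the real kernel helpers; the
foo_* names and the struct layout are invented for illustration. Real
drivers such as i40e follow the same pattern: post a descriptor for
every skb, but only write the doorbell (tail) register once the core
signals that no more skbs follow.

#include <linux/netdevice.h>
#include <linux/io.h>

/* Hypothetical TX ring; fields reduced to what the sketch needs. */
struct foo_tx_ring {
	struct netdev_queue *txq;	/* the stack's view of this queue */
	void __iomem *tail;		/* doorbell/tail register */
	u16 next_to_use;		/* next free descriptor index */
};

static netdev_tx_t foo_xmit(struct sk_buff *skb, struct foo_tx_ring *ring)
{
	/* ... fill and post the TX descriptor for @skb here ... */

	/*
	 * Defer the expensive MMIO doorbell write while the core signals
	 * that more skbs follow (xmit.more == true). The last skb of the
	 * batch arrives with xmit.more == false and flushes the whole
	 * batch with a single write, which is where the gain above comes
	 * from.
	 */
	if (!netdev_xmit_more() || netif_xmit_stopped(ring->txq))
		writel(ring->next_to_use, ring->tail);

	return NETDEV_TX_OK;
}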