From: Jason Xing

After getting the current skb in napi_skb_cache_get(), the next skb in
the cache is highly likely to be used soon, so prefetching it would be
helpful.

Suggested-by: Eric Dumazet
Signed-off-by: Jason Xing
---
 net/core/skbuff.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d81ac78c32ff..5a1d123e7ef7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -299,6 +299,8 @@ static struct sk_buff *napi_skb_cache_get(bool alloc)
 	}
 
 	skb = nc->skb_cache[--nc->skb_count];
+	if (nc->skb_count)
+		prefetch(nc->skb_cache[nc->skb_count - 1]);
 	local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
 	kasan_mempool_unpoison_object(skb, skbuff_cache_size);
 
-- 
2.41.3