From: Jason Xing

When net.core.skb_defer_max is set to zero, napi_consume_skb() should not
go deeper into skb_attempt_defer_free(), because that path adds a
local_bh_enable()/local_bh_disable() pair, as can be seen in
kfree_skb_napi_cache(). Checking the static key earlier saves those extra
cycles and benefits single-flow and few-flow workloads.

Signed-off-by: Jason Xing
---
v3
Link: https://lore.kernel.org/all/20260327153347.98647-1-kerneljasonxing@gmail.com/
1. use a simpler approach to avoid adding a new sysctl.

v2
Link: https://lore.kernel.org/all/20260326144249.97213-1-kerneljasonxing@gmail.com/
1. reuse proc_do_static_key() (Eric)
2. add doc (Stan)
---
 net/core/skbuff.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3d6978dd0aa8..c1562ba6903e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1519,7 +1519,8 @@ void napi_consume_skb(struct sk_buff *skb, int budget)
 
 	DEBUG_NET_WARN_ON_ONCE(!in_softirq());
 
-	if (skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
+	if (!static_branch_unlikely(&skb_defer_disable_key) &&
+	    skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
 		skb_release_head_state(skb);
 		return skb_attempt_defer_free(skb);
 	}
-- 
2.41.3