When a BPF program that supports BPF_F_XDP_HAS_FRAGS issues
bpf_xdp_adjust_tail() and a large packet is injected via /dev/net/tun,
a crash occurs because a bad page state (page_pool page leak) is
detected.

This happens because xdp_buff does not record the memory type and
instead relies on the netdev receive queue xdp info. Since the TUN/TAP
driver uses a MEM_TYPE_PAGE_SHARED memory model for its buffers,
shrinking will eventually call page_frag_free(). But with the current
multi-buff support for BPF_F_XDP_HAS_FRAGS programs, buffers are
allocated via the page pool.

To fix this issue, check that the receive queue memory model is
MEM_TYPE_PAGE_POOL before using multi-buffs.

Reported-by: syzbot+ff145014d6b0ce64a173@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/6756c37b.050a0220.a30f1.019a.GAE@google.com/
Fixes: e6d5dbdd20aa ("xdp: add multi-buff support for xdp running in generic mode")
Signed-off-by: Octavian Purdila
---
 net/core/dev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 8d49b2198d07..b195ee3068c2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5335,13 +5335,18 @@ static int netif_skb_check_for_xdp(struct sk_buff **pskb,
 				   const struct bpf_prog *prog)
 {
 	struct sk_buff *skb = *pskb;
+	struct netdev_rx_queue *rxq;
 	int err, hroom, troom;
 
-	local_lock_nested_bh(&system_page_pool.bh_lock);
-	err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool), pskb, prog);
-	local_unlock_nested_bh(&system_page_pool.bh_lock);
-	if (!err)
-		return 0;
+	rxq = netif_get_rxqueue(skb);
+	if (rxq->xdp_rxq.mem.type == MEM_TYPE_PAGE_POOL) {
+		local_lock_nested_bh(&system_page_pool.bh_lock);
+		err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool),
+					   pskb, prog);
+		local_unlock_nested_bh(&system_page_pool.bh_lock);
+		if (!err)
+			return 0;
+	}
 
 	/* In case we have to go down the path and also linearize,
 	 * then lets do the pskb_expand_head() work just once here.
-- 
2.51.0.534.gc79095c0ca-goog