This patch enhances the GSO segmentation checks by verifying the
presence of a frag_list and protocol consistency, addressing low
throughput on IPv4 servers used as hotspots.

Specifically, it fixes a bug in GSO segmentation when forwarding GRO
packets with a frag_list. skb_segment_list() cannot correctly process
GRO skbs converted by XLAT, because XLAT only converts the header of
the head skb. As a result, skbs on the frag_list may remain
unconverted, leading to protocol inconsistencies and reduced
throughput.

To resolve this, use skb_segment() to handle forwarded packets
converted by XLAT, ensuring that all fragments are properly converted
and segmented.

Signed-off-by: Jibin Zhang
---
v3: Apply the same fix to tcp6_gso_segment(), as suggested.

v2: Apply the added condition to a narrower scope. In this version,
the condition (skb_has_frag_list(gso_skb) &&
(gso_skb->protocol == skb_shinfo(gso_skb)->frag_list->protocol)) is
moved into the inner 'if' statement.

Sending the patch out again for further discussion because:
1. This issue has a significant impact and has occurred in many
   countries and regions.
2. Modifying BPF is currently not a good option, because BPF code
   cannot access the headers of skbs on the frag_list, and the
   required changes would affect a wide range of code.
3. Directly disabling GRO aggregation for XLAT flows is also not a
   good solution: it would disable GRO even when forwarding is not
   needed, and it would require cooperation from all device drivers.
[2]: https://patchwork.kernel.org/patch/14375646
[1]: https://patchwork.kernel.org/patch/14350844
---
 net/ipv4/tcp_offload.c   | 4 +++-
 net/ipv4/udp_offload.c   | 4 +++-
 net/ipv6/tcpv6_offload.c | 4 +++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index fdda18b1abda..6c2c10f37f87 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -107,7 +107,9 @@ static struct sk_buff *tcp4_gso_segment(struct sk_buff *skb,
 	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
 		struct tcphdr *th = tcp_hdr(skb);
 
-		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+		if ((skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) &&
+		    skb_has_frag_list(skb) &&
+		    (skb->protocol == skb_shinfo(skb)->frag_list->protocol))
 			return __tcp4_gso_segment_list(skb, features);
 
 		skb->ip_summed = CHECKSUM_NONE;
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 19d0b5b09ffa..2a99f011793f 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -514,7 +514,9 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
 	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) {
 		/* Detect modified geometry and pass those to skb_segment.
		 */
-		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
+		if ((skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size) &&
+		    skb_has_frag_list(gso_skb) &&
+		    (gso_skb->protocol == skb_shinfo(gso_skb)->frag_list->protocol))
 			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
 
 		ret = __skb_linearize(gso_skb);
diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
index effeba58630b..3c7fd0362475 100644
--- a/net/ipv6/tcpv6_offload.c
+++ b/net/ipv6/tcpv6_offload.c
@@ -170,7 +170,9 @@ static struct sk_buff *tcp6_gso_segment(struct sk_buff *skb,
 	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
 		struct tcphdr *th = tcp_hdr(skb);
 
-		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+		if ((skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) &&
+		    skb_has_frag_list(skb) &&
+		    (skb->protocol == skb_shinfo(skb)->frag_list->protocol))
 			return __tcp6_gso_segment_list(skb, features);
 
 		skb->ip_summed = CHECKSUM_NONE;
-- 
2.45.2