Back in 2024 I reported a 7-12% regression on an iperf3 UDP loopback throughput test, which we traced to the extra overhead of calling compute_score in two places, introduced by commit f0ea27e7bfe1 ("udp: re-score reuseport groups when connected sockets are present"). At the time, I pointed out that the overhead came from the multiple call sites, compounded by CPU-specific mitigations, and merged commit 50aee97d1511 ("udp: Avoid call to compute_score on multiple sites") to jump backwards explicitly, forcing the rescore call into a single place.

Recently though, we got another regression report against a newer distro version, which a team colleague traced back to the same root cause. It turns out that once we updated to gcc-13, the compiler got smart enough to unroll the loop, undoing my previous mitigation.

Let's bite the bullet and mark compute_score __always_inline on both ipv4 and ipv6 to prevent gcc from de-optimizing it again in the future. These functions are only called in two places each, udpX_lib_lookup1 and udpX_lib_lookup2, so the extra size shouldn't be a problem, and they are hot enough to be very visible in profiles. In fact, with gcc-13, forcing the inline prevents gcc from unrolling the fix from commit 50aee97d1511, so we don't end up increasing udpX_lib_lookup2 at all.

I haven't re-collected the results myself, as I don't have access to the machine at the moment, but the same colleague reported a 4.67% improvement with this patch in the loopback benchmark, resolving the regression report within noise margins.

Eric Dumazet reported no size change to vmlinux when built with clang.
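For illustration, the backward-jump workaround from commit 50aee97d1511 referenced above can be sketched in a minimal, self-contained userspace program. Everything here is a toy stand-in, not kernel code: lookup_best(), the plain-int candidates, and the `sk % 10` score replace the kernel's socket list and real compute_score(); the always_inline attribute shown is what the kernel's __always_inline macro expands to under gcc/clang.

```c
/*
 * Toy sketch of the single-call-site rescore pattern: when the selected
 * 'result' must be re-scored, jump backwards to the existing call site
 * instead of calling the (large, force-inlined) scoring function again.
 */
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the kernel's compute_score(); higher score wins. */
static inline __attribute__((always_inline)) int compute_score(int sk)
{
	return sk % 10;
}

static int lookup_best(const int *cand, size_t n)
{
	int result = -1, badness = -1, score;
	bool need_rescore = false;
	size_t i = 0;

	while (i < n) {
		int sk = cand[i];
rescore:
		/* The only call site of compute_score() in this function:
		 * rescoring 'result' jumps backwards here rather than
		 * adding a second call. */
		score = compute_score(need_rescore ? result : sk);
		if (need_rescore) {
			/* Keep the rescored value for 'result', move on. */
			need_rescore = false;
			badness = score;
			i++;
			continue;
		}
		if (score > badness) {
			result = sk;
			need_rescore = true;
			goto rescore;
		}
		i++;
	}
	return result;
}
```

With one call site, the compiler emits (or inlines) the scoring code once per function, which is exactly what the gcc-13 loop unrolling defeated and what __always_inline now makes moot.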
I report the same also with gcc-13:

scripts/bloat-o-meter vmlinux vmlinux-inline
add/remove: 0/2 grow/shrink: 4/0 up/down: 616/-416 (200)
Function                old     new   delta
udp6_lib_lookup2        762     949    +187
__udp6_lib_lookup       810     975    +165
udp4_lib_lookup2        757     906    +149
__udp4_lib_lookup       871     986    +115
__pfx_compute_score      32       -     -32
compute_score           384       -    -384
Total: Before=35011784, After=35011984, chg +0.00%

Fixes: 50aee97d1511 ("udp: Avoid call to compute_score on multiple sites")
Reviewed-by: Eric Dumazet
Acked-by: Willem de Bruijn
Signed-off-by: Gabriel Krisman Bertazi
---
v2:
  - update comment in udpX_lib_lookup2 (Willem)
  - add bloat-o-meter information (Eric)
---
 net/ipv4/udp.c | 12 ++++++------
 net/ipv6/udp.c | 13 +++++++------
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 6c6b68a66dcd..74b621b20e83 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -365,10 +365,10 @@ int udp_v4_get_port(struct sock *sk, unsigned short snum)
 	return udp_lib_get_port(sk, snum, hash2_nulladdr);
 }
 
-static int compute_score(struct sock *sk, const struct net *net,
-			 __be32 saddr, __be16 sport,
-			 __be32 daddr, unsigned short hnum,
-			 int dif, int sdif)
+static __always_inline int
+compute_score(struct sock *sk, const struct net *net,
+	      __be32 saddr, __be16 sport, __be32 daddr,
+	      unsigned short hnum, int dif, int sdif)
 {
 	int score;
 	struct inet_sock *inet;
@@ -508,8 +508,8 @@ static struct sock *udp4_lib_lookup2(const struct net *net,
 			continue;
 
 		/* compute_score is too long of a function to be
-		 * inlined, and calling it again here yields
-		 * measurable overhead for some
+		 * inlined twice here, and calling it uninlined
+		 * here yields measurable overhead for some
 		 * workloads. Work around it by jumping
 		 * backwards to rescore 'result'.
 		 */
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 010b909275dd..301649a63e8a 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -127,10 +127,11 @@ void udp_v6_rehash(struct sock *sk)
 	udp_lib_rehash(sk, new_hash, new_hash4);
 }
 
-static int compute_score(struct sock *sk, const struct net *net,
-			 const struct in6_addr *saddr, __be16 sport,
-			 const struct in6_addr *daddr, unsigned short hnum,
-			 int dif, int sdif)
+static __always_inline int
+compute_score(struct sock *sk, const struct net *net,
+	      const struct in6_addr *saddr, __be16 sport,
+	      const struct in6_addr *daddr, unsigned short hnum,
+	      int dif, int sdif)
 {
 	int bound_dev_if, score;
 	struct inet_sock *inet;
@@ -260,8 +261,8 @@ static struct sock *udp6_lib_lookup2(const struct net *net,
 			continue;
 
 		/* compute_score is too long of a function to be
-		 * inlined, and calling it again here yields
-		 * measurable overhead for some
+		 * inlined twice here, and calling it uninlined
+		 * here yields measurable overhead for some
 		 * workloads. Work around it by jumping
 		 * backwards to rescore 'result'.
 		 */
-- 
2.52.0