From: Håkon Bugge

Change the return code from rds_send_xmit() when it is unable to
acquire the RDS_IN_XMIT lock-bit from -ENOMEM to -EBUSY. This avoids
re-queuing of the rds_send_worker() when someone else is actually
executing rds_send_xmit().

Performance is improved by 2% running rds-stress with the following
parameters: "-t 16 -d 32 -q 64 -a 64 -o". The test was run five times,
each time running for one minute, and the arithmetic average of the tx
IOPS was used as the performance metric. Send lock contention was
reduced by 6.5% and the ib_tx_ring_full count more than doubled,
indicating a better ability to send.

Signed-off-by: Håkon Bugge
Signed-off-by: Allison Henderson
---
 net/rds/send.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 3e3d028bc21e..747e348f48ba 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -158,7 +158,7 @@ int rds_send_xmit(struct rds_conn_path *cp)
 	 */
 	if (!acquire_in_xmit(cp)) {
 		rds_stats_inc(s_send_lock_contention);
-		ret = -ENOMEM;
+		ret = -EBUSY;
 		goto out;
 	}

@@ -1375,7 +1375,7 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)

 	rds_stats_inc(s_send_queued);
 	ret = rds_send_xmit(cpath);
-	if (ret == -ENOMEM || ret == -EAGAIN) {
+	if (ret == -ENOMEM || ret == -EAGAIN || ret == -EBUSY) {
 		ret = 0;
 		rcu_read_lock();
 		if (rds_destroy_pending(cpath->cp_conn))
--
2.43.0