inet_unhash() checks sk_unhashed() twice, at the entry and after locking
the ehash/lhash bucket.  The former was somehow added redundantly by
commit 4f9bf2a2f5aa ("tcp: Don't acquire inet_listen_hashbucket::lock
with disabled BH.").

inet_unhash() is called for the full socket from 4 places, and it is
always called under lock_sock(), or the socket is not yet published to
other threads:

  1. __sk_prot_rehash()
     -> called from inet_sk_reselect_saddr(), which has
        lockdep_sock_is_held()

  2. sk_common_release()
     -> called when inet_create() or inet6_create() fail, then the
        socket is not yet published

  3. tcp_set_state()
     -> calls tcp_call_bpf_2arg(), and tcp_call_bpf() has
        sock_owned_by_me()

  4. inet_ctl_sock_create()
     -> creates a kernel socket and unhashes it immediately, but a TCP
        socket is not hashed in sock_create_kern() (only SOCK_RAW is)

So we do not need to check sk_unhashed() twice before/after the
ehash/lhash lock in inet_unhash().  Let's remove the 2nd check.

Signed-off-by: Kuniyuki Iwashima
---
 net/ipv4/inet_hashtables.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index efa8a615b868..4eb933f56fe6 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -793,11 +793,6 @@ void inet_unhash(struct sock *sk)
 		 * avoid circular locking dependency on PREEMPT_RT.
 		 */
 		spin_lock(&ilb2->lock);
-		if (sk_unhashed(sk)) {
-			spin_unlock(&ilb2->lock);
-			return;
-		}
-
 		if (rcu_access_pointer(sk->sk_reuseport_cb))
 			reuseport_stop_listen_sock(sk);
 
@@ -808,10 +803,6 @@ void inet_unhash(struct sock *sk)
 		spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
 
 		spin_lock_bh(lock);
-		if (sk_unhashed(sk)) {
-			spin_unlock_bh(lock);
-			return;
-		}
 		__sk_nulls_del_node_init_rcu(sk);
 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
 		spin_unlock_bh(lock);
-- 
2.51.0.470.ga7dc726c21-goog
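
For reference, this is roughly how inet_unhash() reads with the patch
applied.  Only the two hunks above come from the diff; the surrounding
lines (the early sk_unhashed() check, tcp_or_dccp_get_hashinfo(), and
inet_lhash2_bucket_sk()) are assumed from the current tree and may not
match every baseline exactly:

void inet_unhash(struct sock *sk)
{
	struct inet_hashinfo *hashinfo = tcp_or_dccp_get_hashinfo(sk);

	/* Callers hold lock_sock() or own a not-yet-published socket,
	 * so this unlocked check cannot race with a concurrent
	 * hash/unhash and a re-check under the bucket lock is not
	 * needed.
	 */
	if (sk_unhashed(sk))
		return;

	if (sk->sk_state == TCP_LISTEN) {
		struct inet_listen_hashbucket *ilb2;

		ilb2 = inet_lhash2_bucket_sk(hashinfo, sk);

		/* Don't disable bottom halves while acquiring the lock to
		 * avoid circular locking dependency on PREEMPT_RT.
		 */
		spin_lock(&ilb2->lock);
		if (rcu_access_pointer(sk->sk_reuseport_cb))
			reuseport_stop_listen_sock(sk);

		__sk_nulls_del_node_init_rcu(sk);
		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
		spin_unlock(&ilb2->lock);
	} else {
		spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);

		spin_lock_bh(lock);
		__sk_nulls_del_node_init_rcu(sk);
		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
		spin_unlock_bh(lock);
	}
}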