======================================================
WARNING: possible circular locking dependency detected
6.16.0-rc6-syzkaller-00411-g95993dc3039e-dirty #0 Not tainted
------------------------------------------------------
kworker/u9:0/26 is trying to acquire lock:
ffff888108458e00 (team->team_lock_key){+.+.}-{4:4}, at: team_device_event+0x544/0xa20

but task is already holding lock:
ffff88811a2acd30 (&dev_instance_lock_key#20){+.+.}-{4:4}, at: __linkwatch_run_queue+0x4a0/0x7e0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&dev_instance_lock_key#20){+.+.}-{4:4}:
       lock_acquire+0x120/0x360
       __mutex_lock+0x182/0xe80
       dev_set_mtu+0x10e/0x260
       team_add_slave+0x8b8/0x2840
       do_set_master+0x533/0x6d0
       do_setlink+0xcf0/0x41c0
       rtnl_newlink+0x160b/0x1c70
       rtnetlink_rcv_msg+0x7cf/0xb70
       netlink_rcv_skb+0x208/0x470
       netlink_unicast+0x75c/0x8e0
       netlink_sendmsg+0x805/0xb30
       __sock_sendmsg+0x21c/0x270
       ____sys_sendmsg+0x505/0x830
       ___sys_sendmsg+0x21f/0x2a0
       __x64_sys_sendmsg+0x19b/0x260
       do_syscall_64+0xfa/0x3b0
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (team->team_lock_key){+.+.}-{4:4}:
       validate_chain+0xb9b/0x2140
       __lock_acquire+0xab9/0xd20
       lock_acquire+0x120/0x360
       __mutex_lock+0x182/0xe80
       team_device_event+0x544/0xa20
       notifier_call_chain+0x1b6/0x3e0
       netif_state_change+0x284/0x3a0
       linkwatch_do_dev+0x117/0x170
       __linkwatch_run_queue+0x56d/0x7e0
       linkwatch_event+0x4c/0x60
       process_scheduled_works+0xae1/0x17b0
       worker_thread+0x8a0/0xda0
       kthread+0x711/0x8a0
       ret_from_fork+0x3fc/0x770
       ret_from_fork_asm+0x1a/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&dev_instance_lock_key#20);
                               lock(team->team_lock_key);
                               lock(&dev_instance_lock_key#20);
  lock(team->team_lock_key);

 *** DEADLOCK ***

4 locks held by kworker/u9:0/26:
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0
 #1: ffffc900001efbc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0
 #2: ffffffff8f51cdc8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60
 #3: ffff88811a2acd30 (&dev_instance_lock_key#20){+.+.}-{4:4}, at: __linkwatch_run_queue+0x4a0/0x7e0

stack backtrace:
CPU: 0 UID: 0 PID: 26 Comm: kworker/u9:0 Not tainted 6.16.0-rc6-syzkaller-00411-g95993dc3039e-dirty #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: events_unbound linkwatch_event
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250
 print_circular_bug+0x2ee/0x310
 check_noncircular+0x134/0x160
 validate_chain+0xb9b/0x2140
 __lock_acquire+0xab9/0xd20
 lock_acquire+0x120/0x360
 __mutex_lock+0x182/0xe80
 team_device_event+0x544/0xa20
 notifier_call_chain+0x1b6/0x3e0
 netif_state_change+0x284/0x3a0
 linkwatch_do_dev+0x117/0x170
 __linkwatch_run_queue+0x56d/0x7e0
 linkwatch_event+0x4c/0x60
 process_scheduled_works+0xae1/0x17b0
 worker_thread+0x8a0/0xda0
 kthread+0x711/0x8a0
 ret_from_fork+0x3fc/0x770
 ret_from_fork_asm+0x1a/0x30
 </TASK>
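The "Possible unsafe locking scenario" above is a classic ABBA inversion: the `team_add_slave()` path takes `team->team_lock_key` and then the slave's `dev_instance_lock_key` (via `dev_set_mtu()`), while the linkwatch worker holds `dev_instance_lock_key` when `team_device_event()` tries to take `team->team_lock_key`. The following is a minimal toy sketch (not kernel code) of the dependency-graph check lockdep performs: each nested acquisition records a directed edge, and a cycle in that graph is exactly the circular dependency reported here. Lock names are copied from the report; the checker itself is an illustrative assumption, far simpler than real lockdep.

```python
def add_edge(graph, held, new):
    """Record that lock `new` was acquired while `held` was held."""
    graph.setdefault(held, set()).add(new)

def has_cycle(graph):
    """Return True if the lock-order graph contains a cycle (DFS, 3-color)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, ()):
            c = color.get(m, WHITE)
            if c == GRAY:          # back edge: circular dependency
                return True
            if c == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(graph))

graph = {}

# Dependency #1 from the report: team_add_slave() holds team->team_lock_key,
# then dev_set_mtu() takes the slave's dev_instance_lock_key.
add_edge(graph, "team->team_lock_key", "dev_instance_lock_key#20")
assert not has_cycle(graph)        # one edge alone is fine

# Dependency #0: the linkwatch worker holds dev_instance_lock_key, then the
# NETDEV_CHANGE notifier (team_device_event) takes team->team_lock_key.
add_edge(graph, "dev_instance_lock_key#20", "team->team_lock_key")
assert has_cycle(graph)            # A->B plus B->A: the reported deadlock
```

Real lockdep operates on lock *classes* and records these edges at first acquisition, which is why it can flag the inversion from the dependency history alone, without the two CPUs ever actually interleaving as shown in the scenario table.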
