Otherwise the lock is susceptible to ever-changing false sharing due to
unrelated changes. This in particular popped up here, where an unrelated
change improved performance:
https://lore.kernel.org/oe-lkp/202511281306.51105b46-lkp@intel.com/

Stabilize it with an explicit annotation, which also has the side effect
of further improving scalability:

> in our original report, 284922f4c5 has a 6.1% performance improvement
> comparing to parent 17d85f33a8.
> we applied your patch directly upon 284922f4c5. as below, now by
> "284922f4c5 + your patch" we observe a 12.8% performance improvement
> (still comparing to 17d85f33a8).

Note nothing was done for the other fields, so some fluctuation is
still possible.

Tested-by: kernel test robot
Signed-off-by: Mateusz Guzik
---
 net/unix/garbage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 78323d43e63e..25f65817faab 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -199,7 +199,7 @@ static void unix_free_vertices(struct scm_fp_list *fpl)
 	}
 }
 
-static DEFINE_SPINLOCK(unix_gc_lock);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(unix_gc_lock);
 
 void unix_add_edges(struct scm_fp_list *fpl, struct unix_sock *receiver)
 {
-- 
2.48.1
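
P.S. For readers unfamiliar with the effect being avoided: below is a
minimal userspace sketch (not part of the patch, and assuming a 64-byte
cacheline) demonstrating false sharing. Two threads hammering adjacent
words ping-pong the cacheline between CPUs; giving each word its own
line, analogous to what __cacheline_aligned_in_smp does for
unix_gc_lock, avoids that. All names here (cnt, bump, run) are
hypothetical.

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define CACHELINE 64
#define ITERS 50000000UL

static struct counters {
	/* "shared" layout: a and b land on the same cacheline */
	unsigned long a;
	unsigned long b;
	/* "aligned" layout: c and d each get a cacheline to themselves */
	unsigned long c __attribute__((aligned(CACHELINE)));
	unsigned long d __attribute__((aligned(CACHELINE)));
} cnt;

/* each thread repeatedly increments its own counter */
static void *bump(void *p)
{
	unsigned long *v = p;

	for (unsigned long i = 0; i < ITERS; i++)
		__atomic_fetch_add(v, 1, __ATOMIC_RELAXED);
	return NULL;
}

/* time two threads updating x and y concurrently */
static double run(unsigned long *x, unsigned long *y)
{
	pthread_t t1, t2;
	struct timespec s, e;

	clock_gettime(CLOCK_MONOTONIC, &s);
	pthread_create(&t1, NULL, bump, x);
	pthread_create(&t2, NULL, bump, y);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	clock_gettime(CLOCK_MONOTONIC, &e);
	return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
}

int main(void)
{
	printf("false sharing: %.2fs\n", run(&cnt.a, &cnt.b));
	printf("aligned:       %.2fs\n", run(&cnt.c, &cnt.d));
	return 0;
}

Build with "gcc -O2 -pthread demo.c" and the shared-line case is
typically several times slower on a multicore box. The annotation in
this patch does the in-kernel equivalent of the aligned layout, so the
lock's placement no longer depends on whatever the linker happens to
put next to it.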