From: Lance Yang

When freeing page tables, we try to batch them. If batch allocation
fails (GFP_NOWAIT), __tlb_remove_table_one() immediately frees the
table without batching. On !CONFIG_PT_RECLAIM, this fallback sends an
IPI to all CPUs via tlb_remove_table_sync_one(), disrupting every CPU
even when only a single process is unmapping memory. The IPI broadcast
was reported to hurt RT workloads[1].

tlb_remove_table_sync_one() synchronizes with lockless page-table
walkers (e.g. GUP-fast) that rely on IRQ disabling. These walkers use
local_irq_disable(), which is also an RCU read-side critical section.
synchronize_rcu() waits for all such sections to complete, providing
the same guarantee as the IPI but without disrupting all CPUs.

Since batch allocation already failed, we are already on a slow path,
so replacing the IPI with synchronize_rcu() is fine. We are in process
context (unmap_region, exit_mmap) with only mmap_lock held, a sleeping
lock; synchronize_rcu() will catch any invalid context via
might_sleep().
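For context, the pairing being relied on looks roughly like this (a
simplified sketch of the two sides, not the actual GUP-fast or
mmu_gather code):

```c
/*
 * Lockless walker side (e.g. GUP-fast): disabling IRQs also enters
 * an RCU read-side critical section on all RCU flavors.
 */
local_irq_save(flags);
/* ... walk page tables; tables cannot be freed under us ... */
local_irq_restore(flags);

/*
 * Freeing side, after batch allocation failed: wait for every such
 * IRQ-disabled section to finish before freeing the table, instead
 * of broadcasting an IPI via tlb_remove_table_sync_one().
 */
synchronize_rcu();	/* may sleep; process context only */
__tlb_remove_table(table);
```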
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/

Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
Suggested-by: Peter Zijlstra
Suggested-by: Dave Hansen
Suggested-by: David Hildenbrand (Arm)
Signed-off-by: Lance Yang
---
 mm/mmu_gather.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fe5b6a031717..df670c219260 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -339,7 +339,8 @@ static inline void __tlb_remove_table_one(void *table)
 #else
 static inline void __tlb_remove_table_one(void *table)
 {
-	tlb_remove_table_sync_one();
+	if (IS_ENABLED(CONFIG_MMU_GATHER_RCU_TABLE_FREE))
+		synchronize_rcu();
 	__tlb_remove_table(table);
 }
 #endif /* CONFIG_PT_RECLAIM */
-- 
2.49.0