From: Lance Yang

Similar to the hugetlb PMD unsharing optimization, skip the second IPI
in collapse_huge_page() when the TLB flush already provides the
necessary synchronization.

Before commit a37259732a7d ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE
unconditional"), bare metal x86 did not enable
MMU_GATHER_RCU_TABLE_FREE. In that configuration,
tlb_remove_table_sync_one() was a NOP, and GUP-fast synchronization
relied on IRQ disabling, which blocks TLB flush IPIs. When Rik made
MMU_GATHER_RCU_TABLE_FREE unconditional to support AMD's INVLPGB, all
x86 systems started sending the second IPI.

However, on native x86 this second IPI is redundant:

- pmdp_collapse_flush() calls flush_tlb_range(), sending IPIs to all
  CPUs to invalidate TLB entries
- GUP-fast runs with IRQs disabled, so once the flush IPI has been
  handled, any concurrent GUP-fast must have finished
- tlb_remove_table_sync_one() therefore provides no additional
  synchronization

On x86, skip the second IPI when running native (without paravirt) and
without INVLPGB. For paravirt with a non-native flush_tlb_multi and for
INVLPGB, conservatively keep both IPIs. Use
tlb_table_flush_implies_ipi_broadcast(), consistent with the hugetlb
optimization.

Suggested-by: David Hildenbrand (Red Hat)
Signed-off-by: Lance Yang
---
 mm/khugepaged.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 97d1b2824386..06ea793a8190 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1178,7 +1178,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_remove_table_sync_one();
+	/*
+	 * Skip the second IPI if the TLB flush above already synchronized
+	 * with concurrent GUP-fast via broadcast IPIs.
+	 */
+	if (!tlb_table_flush_implies_ipi_broadcast())
+		tlb_remove_table_sync_one();
 
 	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
 	if (pte) {
-- 
2.49.0