folio_can_map_prot_numa() checks whether a folio can be mapped
prot_numa, skipping unsuitable folios: zone device folios, shared
folios (KSM, CoW), non-movable DMA-pinned folios, dirty file folios,
and folios that already have the expected node affinity. Although KSM
only applies to small folios, so the KSM check is one extra test for
large folios, the other policies should also be applied to PMD-mapped
folios, which helps avoid unnecessary PMD changes and folio migration
attempts.

Acked-by: David Hildenbrand
Reviewed-by: Sidhartha Kumar
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Kefeng Wang
---
 mm/huge_memory.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2764613a9b3d..eda9316f71b3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2477,8 +2477,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct folio *folio;
-		bool toptier;
+
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -2490,19 +2489,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		folio = pmd_folio(*pmd);
-		toptier = node_is_toptier(folio_nid(folio));
-		/*
-		 * Skip scanning top tier node if normal numa
-		 * balancing is disabled
-		 */
-		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    toptier)
+		if (!folio_can_map_prot_numa(pmd_folio(*pmd), vma,
+					     vma_is_single_threaded_private(vma)))
 			goto unlock;
-
-		if (folio_use_access_time(folio))
-			folio_xchg_access_time(folio,
-					       jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
-- 
2.27.0
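
For readers without the rest of the series at hand, here is a rough
sketch of the policy the new helper encodes, pieced together from the
description above and from the code removed from change_huge_pmd().
The real helper is introduced earlier in the series, so its exact body
is an assumption here; in particular the folio_maybe_mapped_shared()
and folio_is_zone_movable() tests used below for the "shared CoW" and
"non-movable DMA-pinned" cases are guesses at how those policies are
implemented:

/*
 * Hypothetical reconstruction, not the in-tree implementation:
 * decide whether @folio is a sensible prot_numa target.
 */
static bool folio_can_map_prot_numa(struct folio *folio,
				    struct vm_area_struct *vma,
				    bool is_private_single_threaded)
{
	int nid = folio_nid(folio);

	/* Device memory cannot be migrated by NUMA hinting faults. */
	if (folio_is_zone_device(folio))
		return false;

	/* KSM folios are shared; KSM only ever applies to small folios. */
	if (folio_test_ksm(folio))
		return false;

	/* Skip shared copy-on-write folios. */
	if (is_cow_mapping(vma->vm_flags) && folio_maybe_mapped_shared(folio))
		return false;

	/* A DMA-pinned folio outside ZONE_MOVABLE cannot be migrated. */
	if (!folio_is_zone_movable(folio) && folio_maybe_dma_pinned(folio))
		return false;

	/* Dirty file folios often cannot be migrated from this context. */
	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
		return false;

	/*
	 * Private memory of a single-threaded task can only usefully
	 * move to the local node; skip folios already resident there.
	 */
	if (is_private_single_threaded && nid == numa_node_id())
		return false;

	/* Skip scanning top-tier nodes if normal NUMA balancing is off. */
	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
	    node_is_toptier(nid))
		return false;

	/* Record the scan time, as the code removed above used to do. */
	if (folio_use_access_time(folio))
		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));

	return true;
}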