The folio_needs_prot_numa() helper checks whether a folio needs PROT_NUMA
protection and skips unsuitable folios, i.e. zone device folios, shared
folios (KSM, CoW), non-movable DMA-pinned folios, dirty file folios and
folios that already have the desired NUMA affinity. Apply the same policy
to PMD-mapped folios too, which helps to avoid unnecessary PMD changes and
folio migration attempts.

Reviewed-by: Sidhartha Kumar
Signed-off-by: Kefeng Wang
---
 mm/huge_memory.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2764613a9b3d..121c92f5c486 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2477,8 +2477,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct folio *folio;
-		bool toptier;
+		int target_node = NUMA_NO_NODE;
+
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -2490,19 +2490,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		folio = pmd_folio(*pmd);
-		toptier = node_is_toptier(folio_nid(folio));
-		/*
-		 * Skip scanning top tier node if normal numa
-		 * balancing is disabled
-		 */
-		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    toptier)
-			goto unlock;
+		/* Get target node for single threaded private VMAs */
+		if (!(vma->vm_flags & VM_SHARED) &&
+		    atomic_read(&vma->vm_mm->mm_users) == 1)
+			target_node = numa_node_id();
 
-		if (folio_use_access_time(folio))
-			folio_xchg_access_time(folio,
-					       jiffies_to_msecs(jiffies));
+		if (!folio_needs_prot_numa(pmd_folio(*pmd), vma, target_node))
+			goto unlock;
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
-- 
2.27.0
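
For readers without the rest of the series at hand, below is a minimal
sketch of the kind of checks the changelog attributes to
folio_needs_prot_numa(). It is an illustration only, not the helper added
by this series: the exact set, order and naming of the checks (e.g. the
shared-mapping and CoW tests) are assumptions, and the usual mm headers
are taken as given.

/* Illustrative sketch only; approximates the checks described above. */
static bool folio_needs_prot_numa(struct folio *folio,
				  struct vm_area_struct *vma, int target_node)
{
	/* Device memory and KSM folios are not candidates for migration. */
	if (folio_is_zone_device(folio) || folio_test_ksm(folio))
		return false;

	/* Skip folios shared across CoW mappings. */
	if (is_cow_mapping(vma->vm_flags) && folio_maybe_mapped_shared(folio))
		return false;

	/* DMA-pinned folios cannot be migrated. */
	if (folio_maybe_dma_pinned(folio))
		return false;

	/* Migrating a dirty file folio would require expensive writeback. */
	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
		return false;

	/* Already on the node we would migrate to: nothing to gain. */
	if (target_node != NUMA_NO_NODE && folio_nid(folio) == target_node)
		return false;

	/* Skip top tier nodes if normal numa balancing is disabled. */
	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
	    node_is_toptier(folio_nid(folio)))
		return false;

	if (folio_use_access_time(folio))
		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));

	return true;
}

With target_node only set for single-threaded private VMAs, such a helper
can also filter out folios that are already on the local node, which is
what lets change_huge_pmd() drop its open-coded toptier and access-time
handling in the hunk above.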