From: xu xin

Problem
=======
When available memory is extremely tight, causing KSM pages to be
swapped out, or when heavy memory fragmentation causes THP to trigger
memory compaction, the system invokes rmap_walk_ksm() to perform
reverse mapping. We observed that this function becomes particularly
time-consuming when a large number of VMAs (e.g., 20,000) share the
same anon_vma. Debug trace analysis showed that most of the latency
occurs within anon_vma_interval_tree_foreach, leading to an
excessively long hold time on the anon_vma lock (500ms or more),
which in turn blocks upper-layer applications waiting for the
anon_vma lock for extended periods.

Root Reason
===========
Further investigation revealed that 99.9% of the iterations inside
the anon_vma_interval_tree_foreach loop are skipped by the first
check, "if (addr < vma->vm_start || addr >= vma->vm_end)", meaning
that the vast majority of loop iterations do no useful work. This
inefficiency arises because the pgoff_start and pgoff_end parameters
passed to anon_vma_interval_tree_foreach span the entire address
space, from 0 to ULONG_MAX, so the interval tree cannot prune any
VMA and every node is visited.

Solution
========
We can significantly improve performance by passing a precise range
derived from the given addr. Since the original pages merged by KSM
belong to anonymous VMAs, the page offset can be calculated as
pgoff = address >> PAGE_SHIFT. Therefore we can define:

	pgoff_start = rmap_item->address >> PAGE_SHIFT;

and, because KSM folios are always order-0 (folio_nr_pages() of a
KSM folio is always 1), the line

	pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;

reduces directly to

	pgoff_end = pgoff_start;
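For illustration, here is a minimal user-space sketch (not kernel
code) of why the narrowed range prunes the walk. The struct vma below
is a hypothetical stand-in for the vm_area_struct fields that the
interval-tree overlap test consults, and PAGE_SHIFT is assumed to be
12: with the range [0, ULONG_MAX] every VMA overlaps the query, while
the single-page range [pgoff, pgoff] overlaps only the VMA that can
actually map the page.

	#include <stdio.h>
	#include <limits.h>

	#define PAGE_SHIFT 12	/* assumed 4K pages */

	struct vma {			/* stand-in for vm_area_struct */
		unsigned long vm_start;	/* first byte mapped */
		unsigned long vm_end;	/* one past the last byte mapped */
		unsigned long vm_pgoff;	/* page offset of vm_start */
	};

	/* Interval-tree style overlap test on page offsets. */
	static int overlaps(const struct vma *v,
			    unsigned long start, unsigned long end)
	{
		unsigned long first = v->vm_pgoff;
		unsigned long last = first +
			((v->vm_end - v->vm_start) >> PAGE_SHIFT) - 1;

		return start <= last && end >= first;
	}

	int main(void)
	{
		/* For anonymous VMAs, vm_pgoff == vm_start >> PAGE_SHIFT. */
		struct vma vmas[] = {
			{ 0x10000000, 0x10010000, 0x10000000 >> PAGE_SHIFT },
			{ 0x20000000, 0x20004000, 0x20000000 >> PAGE_SHIFT },
			{ 0x30000000, 0x30100000, 0x30000000 >> PAGE_SHIFT },
		};
		unsigned long addr = 0x20001000;	/* rmap_item->address */
		unsigned long pgoff = addr >> PAGE_SHIFT;

		for (int i = 0; i < 3; i++)
			printf("vma %d: wide [0, MAX]: %d  narrow [pgoff, pgoff]: %d\n",
			       i, overlaps(&vmas[i], 0, ULONG_MAX),
			       overlaps(&vmas[i], pgoff, pgoff));
		return 0;
	}

With the wide range all three VMAs are visited and two are then
rejected only by the in-loop address check; with the narrow range,
only the one VMA containing addr is visited at all.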
Performance
===========
In our real embedded Linux environment, the measured metrics were as
follows:

1) Time_ms: the max time holding the anon_vma lock in a single
   rmap_walk_ksm().
2) Nr_iteration_total: the max number of iterations of the
   anon_vma_interval_tree_foreach loop.
3) Skip_addr_out_of_range: the max number of iterations skipped by
   the first check (vma->vm_start and vma->vm_end) in the
   anon_vma_interval_tree_foreach loop.
4) Skip_mm_mismatch: the max number of iterations skipped by the
   second check (rmap_item->mm == vma->vm_mm) in the
   anon_vma_interval_tree_foreach loop.

The result is as follows:

                Time_ms  Nr_iteration_total  Skip_addr_out_of_range  Skip_mm_mismatch
Before patched: 228.65   22169               22168                   0
After patched:  0.396    3                   0                       2

The referenced reproducer of rmap_walk_ksm can be found at:
https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/

Signed-off-by: xu xin
---
 mm/ksm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 950e122bcbf4..54f72e92b7f3 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3170,6 +3170,9 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
 		/* Ignore the stable/unstable/sqnr flags */
 		const unsigned long addr = rmap_item->address & PAGE_MASK;
+		const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
+		/* KSM folios are always order-0 normal pages */
+		const pgoff_t pgoff_end = pgoff_start;
 		struct anon_vma *anon_vma = rmap_item->anon_vma;
 		struct anon_vma_chain *vmac;
 		struct vm_area_struct *vma;
@@ -3184,7 +3187,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 		}
 
 		anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
-					       0, ULONG_MAX) {
+					       pgoff_start, pgoff_end) {
 			cond_resched();
 
 			vma = vmac->vma;
-- 
2.25.1