For example, create three tasks: hot1 -> cold -> hot2. After all three
tasks are created, each allocates 128 MB of memory. The hot1/hot2 tasks
continuously access their 128 MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_COLD). However,
khugepaged still prioritizes scanning the cold task and only scans the
hot2 task after completing the scan of the cold task.

So if the user has explicitly informed us via MADV_COLD/FREE that this
memory is cold or will be freed, it is appropriate for khugepaged to
scan it only at the latest possible moment, thereby avoiding unnecessary
scan and collapse operations and reducing CPU waste.

Here are the performance test results (Throughput: bigger is better,
everything else: smaller is better):

Testing on x86_64 machine:

| task hot2            | without patch | with patch    | delta   |
|----------------------|---------------|---------------|---------|
| total accesses time  | 3.14 sec      | 2.92 sec      | -7.01%  |
| cycles per access    | 4.91          | 2.07          | -57.84% |
| Throughput           | 104.38 M/sec  | 112.12 M/sec  | +7.42%  |
| dTLB-load-misses     | 288966432     | 1292908       | -99.55% |

Testing on qemu-system-x86_64 -enable-kvm:

| task hot2            | without patch | with patch    | delta   |
|----------------------|---------------|---------------|---------|
| total accesses time  | 3.35 sec      | 2.96 sec      | -11.64% |
| cycles per access    | 7.23          | 2.12          | -70.68% |
| Throughput           | 97.88 M/sec   | 110.76 M/sec  | +13.16% |
| dTLB-load-misses     | 237406497     | 3189194       | -98.66% |

Signed-off-by: Vernon Yang
---
 include/linux/khugepaged.h |  1 +
 mm/khugepaged.c            | 14 ++++++++++++++
 mm/madvise.c               |  3 +++
 3 files changed, 18 insertions(+)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index eb1946a70cff..726e99de84e9 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -15,6 +15,7 @@ extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
 extern void khugepaged_enter_vma(struct vm_area_struct *vma,
 				 vm_flags_t vm_flags);
+void khugepaged_move_tail(struct mm_struct *mm);
 extern void khugepaged_min_free_kbytes_update(void);
 extern bool current_is_khugepaged(void);
 extern int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1ec1af5be3c8..91836dda2015 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -468,6 +468,20 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 	}
 }
 
+void khugepaged_move_tail(struct mm_struct *mm)
+{
+	struct mm_slot *slot;
+
+	if (!mm_flags_test(MMF_VM_HUGEPAGE, mm))
+		return;
+
+	spin_lock(&khugepaged_mm_lock);
+	slot = mm_slot_lookup(mm_slots_hash, mm);
+	if (slot && khugepaged_scan.mm_slot != slot)
+		list_move_tail(&slot->mm_node, &khugepaged_scan.mm_head);
+	spin_unlock(&khugepaged_mm_lock);
+}
+
 void __khugepaged_exit(struct mm_struct *mm)
 {
 	struct mm_slot *slot;
diff --git a/mm/madvise.c b/mm/madvise.c
index fb1c86e630b6..3f9ca7af2c82 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -608,6 +608,8 @@ static long madvise_cold(struct madvise_behavior *madv_behavior)
 	madvise_cold_page_range(&tlb, madv_behavior);
 	tlb_finish_mmu(&tlb);
 
+	khugepaged_move_tail(vma->vm_mm);
+
 	return 0;
 }
 
@@ -835,6 +837,7 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior)
 			&walk_ops, tlb);
 	tlb_end_vma(tlb, vma);
 	mmu_notifier_invalidate_range_end(&range);
+	khugepaged_move_tail(mm);
 
 	return 0;
 }
-- 
2.51.0
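
For reference, below is a minimal userspace sketch of the hot1 -> cold ->
hot2 scenario described in the changelog. It is not the harness that
produced the numbers above: the 128 MB allocation size, MADV_COLD call and
task creation order come from the changelog, but TASK_MEM_SIZE, the 4 KB
access stride, the initial sleep(1) and the pause() in the cold task are
illustrative assumptions.

/*
 * Illustrative reproducer sketch (not part of the patch): three tasks,
 * each with 128 MB of anonymous memory.  The cold task touches its
 * memory once, calls madvise(MADV_COLD) and then idles; the hot tasks
 * keep accessing their memory in a loop.
 */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define TASK_MEM_SIZE	(128UL << 20)	/* 128 MB per task */

static void run_task(int cold)
{
	char *buf;
	size_t i;

	sleep(1);	/* crude way to let all three tasks get created first */

	buf = mmap(NULL, TASK_MEM_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		exit(1);

	memset(buf, 1, TASK_MEM_SIZE);	/* fault everything in once */

	if (cold) {
		/* user explicitly marks the memory as cold, then idles */
		madvise(buf, TASK_MEM_SIZE, MADV_COLD);
		pause();
	}

	for (;;)	/* hot task: access the memory continuously */
		for (i = 0; i < TASK_MEM_SIZE; i += 4096)
			buf[i]++;
}

int main(void)
{
	/* creation order hot1 -> cold -> hot2, as in the changelog */
	if (fork() == 0)
		run_task(0);	/* hot1 */
	if (fork() == 0)
		run_task(1);	/* cold */
	if (fork() == 0)
		run_task(0);	/* hot2 */

	wait(NULL);	/* children run until interrupted */
	return 0;
}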