Whenever vmalloc allocates high order pages (e.g. for a huge mapping),
it must immediately split them to order-0 with split_page() so that
they remain compatible with users that want to access the underlying
struct pages. Commit a06157804399 ("mm/vmalloc: request large order
pages from buddy allocator") recently made it much more likely for
vmalloc to allocate high order pages, which are subsequently split to
order-0.

Unfortunately this had the side effect of causing performance
regressions for tight vmalloc/vfree loops (e.g. test_vmalloc.ko
benchmarks). See Closes: tag.

This happens because the high order pages must be allocated from the
buddy, but, since they are split to order-0, they end up being freed
back to the order-0 pcp lists. Previously the allocations were
order-0, so the pages could be recycled straight from the pcp lists.
If vmalloc instead freed an (e.g.) order-3 allocation back to the
order-3 pcp, the pages could once again be recycled there and the
regression would go away.

So let's do exactly that: use the new __free_contig_range() API to
batch-free contiguous ranges of pfns. This not only removes the
regression, it significantly improves vfree() performance beyond the
baseline.

Below is a selection of test_vmalloc benchmarks run on an AWS
m7g.metal (arm64) system. v6.18 is the baseline. Commit a06157804399
("mm/vmalloc: request large order pages from buddy allocator") landed
in v6.19-rc1, which is where the regressions appear. With this change
performance is much better (>0 is faster, <0 is slower, (R)/(I) =
statistically significant Regression/Improvement):

+----------------------------------------------------------+-------------+-------------+
| test_vmalloc benchmark                                    | v6.19-rc1   | v6.19-rc1   |
|                                                           |             | + change    |
+==========================================================+=============+=============+
| fix_align_alloc_test: p:1, h:0, l:500000 (usec)          | (R) -40.69% |   (I) 4.85% |
| fix_size_alloc_test: p:1, h:0, l:500000 (usec)           |       0.10% |      -1.04% |
| fix_size_alloc_test: p:4, h:0, l:500000 (usec)           | (R) -22.74% |  (I) 14.12% |
| fix_size_alloc_test: p:16, h:0, l:500000 (usec)          | (R) -23.63% |  (I) 43.81% |
| fix_size_alloc_test: p:16, h:1, l:500000 (usec)          |      -1.58% | (I) 102.28% |
| fix_size_alloc_test: p:64, h:0, l:100000 (usec)          | (R) -24.39% |  (I) 89.64% |
| fix_size_alloc_test: p:64, h:1, l:100000 (usec)          |   (I) 2.34% | (I) 181.42% |
| fix_size_alloc_test: p:256, h:0, l:100000 (usec)         | (R) -23.29% | (I) 111.05% |
| fix_size_alloc_test: p:256, h:1, l:100000 (usec)         |   (I) 3.74% | (I) 213.52% |
| fix_size_alloc_test: p:512, h:0, l:100000 (usec)         | (R) -23.80% | (I) 118.28% |
| fix_size_alloc_test: p:512, h:1, l:100000 (usec)         |  (R) -2.84% | (I) 427.65% |
| full_fit_alloc_test: p:1, h:0, l:500000 (usec)           |       2.74% |      -1.12% |
| kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |       0.58% |      -0.79% |
| kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |      -0.66% |      -0.91% |
| long_busy_list_alloc_test: p:1, h:0, l:500000 (usec)     | (R) -25.24% |  (I) 70.62% |
| pcpu_alloc_test: p:1, h:0, l:500000 (usec)               |      -0.58% |      -1.27% |
| random_size_align_alloc_test: p:1, h:0, l:500000 (usec)  | (R) -45.75% |  (I) 11.11% |
| random_size_alloc_test: p:1, h:0, l:500000 (usec)        | (R) -28.16% |  (I) 59.47% |
| vm_map_ram_test: p:1, h:0, l:500000 (usec)               |      -0.54% |      -0.85% |
+----------------------------------------------------------+-------------+-------------+

Fixes: a06157804399 ("mm/vmalloc: request large order pages from buddy allocator")
Closes: https://lore.kernel.org/all/66919a28-bc81-49c9-b68f-dd7c73395a0d@arm.com/
Signed-off-by: Ryan Roberts
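
(Not part of the patch, purely to illustrate the reasoning above. The
helper name show_contig_run_blocks() and the exact decomposition are
made up here and are NOT the real __free_contig_range()
implementation, which is introduced by a separate patch. The sketch
only shows why handing a physically contiguous run back to the
allocator in one call lets it be returned as naturally aligned
power-of-two blocks, so a run that came from a split order-3
allocation maps to a single order-3 free and can be recycled via the
order-3 pcp list instead of eight order-0 frees.)

  #include <linux/bitops.h>
  #include <linux/minmax.h>
  #include <linux/mm.h>
  #include <linux/printk.h>

  /*
   * Illustrative only: print how a contiguous run of pfns decomposes
   * into naturally aligned power-of-two blocks. Assumes start_pfn != 0,
   * which holds for any real allocation.
   */
  static void show_contig_run_blocks(unsigned long start_pfn, unsigned long nr)
  {
  	while (nr) {
  		/* Largest order allowed by both alignment and remaining length. */
  		unsigned long order = min(__ffs(start_pfn), __fls(nr));

  		order = min_t(unsigned long, order, MAX_PAGE_ORDER);
  		pr_info("block: pfn 0x%lx, order %lu\n", start_pfn, order);
  		start_pfn += 1UL << order;
  		nr -= 1UL << order;
  	}
  }

For an 8-aligned, 8-page run (the split order-3 case discussed above)
this reports a single order-3 block, which is exactly the case the
vfree() change below batches up.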
---
 mm/vmalloc.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 32d6ee92d4ff..86407178b6d1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3434,7 +3434,8 @@ void vfree_atomic(const void *addr)
 void vfree(const void *addr)
 {
 	struct vm_struct *vm;
-	int i;
+	unsigned long start_pfn;
+	int i, nr;
 
 	if (unlikely(in_interrupt())) {
 		vfree_atomic(addr);
@@ -3460,17 +3461,25 @@ void vfree(const void *addr)
 	/* All pages of vm should be charged to same memcg, so use first one. */
 	if (vm->nr_pages && !(vm->flags & VM_MAP_PUT_PAGES))
 		mod_memcg_page_state(vm->pages[0], MEMCG_VMALLOC,
				     -vm->nr_pages);
-	for (i = 0; i < vm->nr_pages; i++) {
-		struct page *page = vm->pages[i];
-		BUG_ON(!page);
-		/*
-		 * High-order allocs for huge vmallocs are split, so
-		 * can be freed as an array of order-0 allocations
-		 */
-		__free_page(page);
-		cond_resched();
+	if (vm->nr_pages) {
+		start_pfn = page_to_pfn(vm->pages[0]);
+		nr = 1;
+		for (i = 1; i < vm->nr_pages; i++) {
+			unsigned long pfn = page_to_pfn(vm->pages[i]);
+
+			if (start_pfn + nr != pfn) {
+				__free_contig_range(start_pfn, nr);
+				start_pfn = pfn;
+				nr = 1;
+				cond_resched();
+			} else {
+				nr++;
+			}
+		}
+		__free_contig_range(start_pfn, nr);
 	}
+
 	if (!(vm->flags & VM_MAP_PUT_PAGES))
 		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
 	kvfree(vm->pages);
-- 
2.43.0