We iterate pfn from order 0 up to MAX_PAGE_ORDER alignment to find a
large buddy. But for every order smaller than start_pfn's own alignment
order, aligning pfn down yields the same pfn, so the same check is
repeated. Start the iteration at start_pfn's alignment order to avoid
this duplicated work.

Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang
Cc: Johannes Weiner
Cc: Zi Yan
Cc: Vlastimil Babka
Cc: David Hildenbrand
Signed-off-by: Andrew Morton
Reviewed-by: Zi Yan
---
v2: add comment on assignment of order
---
 mm/page_alloc.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 07d79ae557f8..5d9ceca869e5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
 /* Look for a buddy that straddles start_pfn */
 static unsigned long find_large_buddy(unsigned long start_pfn)
 {
-	int order = 0;
+	/*
+	 * If start_pfn is not an order-0 PageBuddy, next PageBuddy containing
+	 * start_pfn has minimal order of __ffs(start_pfn) + 1. Start checking
+	 * the order with __ffs(start_pfn). If start_pfn is order-0 PageBuddy,
+	 * the starting order does not matter.
+	 */
+	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;
 	struct page *page;
 	unsigned long pfn = start_pfn;
 
-- 
2.34.1
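
For illustration, a minimal standalone userspace sketch (not kernel code)
that only mimics how pfn evolves in find_large_buddy()'s loop, showing why
orders below __ffs(start_pfn) re-check the same pfn. The MAX_PAGE_ORDER
value and the example start_pfn are assumptions, and __builtin_ctzl()
stands in for the kernel's __ffs(); the PageBuddy test itself is elided.

/*
 * Sketch only: walks the same pfn alignment sequence as the old loop
 * and marks the iterations that recheck start_pfn.
 */
#include <stdio.h>

#define MAX_PAGE_ORDER 10	/* assumed value; depends on kernel config */

int main(void)
{
	unsigned long start_pfn = 0x2600;	/* example pfn, low 9 bits clear */
	unsigned long pfn = start_pfn;
	int order = 0;

	/* Old behaviour: start at order 0, align pfn down each round. */
	while (order <= MAX_PAGE_ORDER) {
		printf("order %2d checks pfn %#lx%s\n", order, pfn,
		       (order && pfn == start_pfn) ? "  <- duplicate check" : "");
		order++;
		pfn &= ~0UL << order;
	}

	/* The first order that can produce a different pfn is __ffs(start_pfn),
	 * so the patched code starts the loop there instead of at 0. */
	if (start_pfn)
		printf("start at order %d instead\n", __builtin_ctzl(start_pfn));
	return 0;
}

With start_pfn = 0x2600, orders 1 through 9 all recheck pfn 0x2600; only at
order 10 does the aligned pfn change, which is why starting at
__ffs(start_pfn) = 9 skips the redundant iterations.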