We presently skip regions with hugepages entirely when trying to do
contiguous page allocation. Instead, if hugepage migration is enabled,
consider regions with hugepages smaller than the target contiguous
allocation request as valid targets for allocation.

isolate_migratepages_block() already expects requests with hugepages to
originate from alloc_contig, and hugetlb code also does a migratable
check when isolating in folio_isolate_hugetlb().

Suggested-by: David Hildenbrand
Signed-off-by: Gregory Price
Reviewed-by: Zi Yan
Reviewed-by: Wei Yang
---
 mm/page_alloc.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23..23866d4c26ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7048,8 +7048,19 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 		if (PageReserved(page))
 			return false;
 
-		if (PageHuge(page))
-			return false;
+		if (PageHuge(page)) {
+			unsigned int order;
+
+			if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
+				return false;
+
+			/* Don't consider moving same size/larger pages */
+			page = compound_head(page);
+			order = compound_order(page);
+			if ((order >= MAX_FOLIO_ORDER) ||
+			    (nr_pages <= (1 << order)))
+				return false;
+		}
 	}
 	return true;
 }
-- 
2.51.0
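
Note (illustration only, not part of the patch): a minimal userspace sketch
of the size check added above. A hugepage only counts as a migratable
obstacle if the contiguous request is strictly larger than the hugepage;
same-size or larger hugepages still cause the range to be skipped. The
helper name hugepage_blocks_contig_request() and the MAX_FOLIO_ORDER_DEMO
cap are hypothetical stand-ins chosen for this sketch.

	#include <stdbool.h>
	#include <stdio.h>

	/* hypothetical stand-in for MAX_FOLIO_ORDER; arch dependent */
	#define MAX_FOLIO_ORDER_DEMO	18

	static bool hugepage_blocks_contig_request(unsigned int order,
						   unsigned long nr_pages)
	{
		/* mirrors the check in pfn_range_valid_contig() above */
		return (order >= MAX_FOLIO_ORDER_DEMO) ||
		       (nr_pages <= (1UL << order));
	}

	int main(void)
	{
		/* 2MB hugepage (order 9) vs. a 1024-page (4MB) request */
		printf("order 9 vs 1024 pages: %s\n",
		       hugepage_blocks_contig_request(9, 1024) ?
		       "skip range" : "treat as migratable");

		/* order-18 hugepage vs. the same request: range is skipped */
		printf("order 18 vs 1024 pages: %s\n",
		       hugepage_blocks_contig_request(18, 1024) ?
		       "skip range" : "treat as migratable");
		return 0;
	}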