We presently skip regions with hugepages entirely when trying to do a
contiguous page allocation. Instead, if hugepage migration is enabled,
consider regions with hugepages smaller than the target contiguous
allocation request as valid targets for allocation.

Compaction's `isolate_migratepages_block()` already expects requests with
hugepages to originate from alloc_contig, and hugetlb code also does a
migratable check when isolating in `folio_isolate_hugetlb()`. We add the
migration check here to avoid calling compaction on a region if we know
migration is not possible at all.

Suggested-by: David Hildenbrand
Signed-off-by: Gregory Price
---
 mm/page_alloc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23..e0760eafe032 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7048,8 +7048,14 @@ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
 		if (PageReserved(page))
 			return false;
 
-		if (PageHuge(page))
-			return false;
+		if (PageHuge(page)) {
+			struct folio *folio = page_folio(page);
+
+			/* Don't consider moving same size/larger pages */
+			if (!folio_test_hugetlb_migratable(folio) ||
+			    (1 << folio_order(folio) >= nr_pages))
+				return false;
+		}
 	}
 	return true;
 }
-- 
2.51.0
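
[Editor's note, illustrative only and not part of the patch: the acceptance
rule the hunk adds, pulled out into a hypothetical helper for clarity. The
helper name hugetlb_page_allows_contig() is invented here; page_folio(),
folio_test_hugetlb_migratable() and folio_order() are the same calls the
hunk itself uses.]

#include <linux/mm.h>
#include <linux/hugetlb.h>

/*
 * Hypothetical helper (the patch open-codes this in pfn_range_valid_contig()):
 * a hugetlb page in a candidate PFN range only disqualifies the range if it
 * is not migratable, or if it is at least as large as the requested
 * allocation, since moving a same-size or larger folio cannot make room.
 */
static bool hugetlb_page_allows_contig(struct page *page, unsigned long nr_pages)
{
	struct folio *folio = page_folio(page);

	if (!folio_test_hugetlb_migratable(folio))
		return false;	/* migration impossible, compaction would be futile */

	/* only hugetlb folios strictly smaller than the request are worth moving */
	return (1UL << folio_order(folio)) < nr_pages;
}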