Function pageblock_pfn_to_page() was introduced by commit 7d49d8868336 ("mm,
compaction: reduce zone checking frequency in the migration scanner"), which
placed no requirement on start_pfn/end_pfn other than that they lie in the
same pageblock. So at that time, pageblock_pfn_to_page() could be passed pfns
that had not been compared against the zone boundary.

But after commit 7cf91a98e607 ("mm/compaction: speed up
pageblock_pfn_to_page() when zone is contiguous"), pageblock_pfn_to_page()
assumes the range is valid and within the zone whenever zone->contiguous is
set, even if the range does not actually belong to this zone.

For example, in fast_isolate_freepages(), min_pfn is set to
pageblock_start_pfn() and passed to pageblock_pfn_to_page() without being
checked against zone_start_pfn. And in most callers, end_pfn is not checked
against zone_end_pfn() before use.

To make this function robust, check that the range lies within the zone
first.

Fixes: 7cf91a98e607 ("mm/compaction: speed up pageblock_pfn_to_page() when zone is contiguous")
Signed-off-by: Wei Yang
Cc: Vlastimil Babka
Cc: Joonsoo Kim
---
 mm/internal.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 38607b2821d9..8e1a3819c9f1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -724,6 +724,9 @@ extern struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
 
 static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 				unsigned long end_pfn, struct zone *zone)
 {
+	if (start_pfn < zone->zone_start_pfn || end_pfn > zone_end_pfn(zone))
+		return NULL;
+
 	if (zone->contiguous)
 		return pfn_to_page(start_pfn);
-- 
2.34.1
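
[Note: the following is a minimal userspace sketch of the boundary guard
added above, not kernel code. The struct zone fields, zone_end_pfn() helper,
and the name pageblock_range_in_zone() are simplified stand-ins used only to
illustrate how a pageblock range that starts below zone_start_pfn (as the
min_pfn computed in fast_isolate_freepages() can) is now rejected before the
zone->contiguous fast path would be taken.]

#include <stdio.h>
#include <stdbool.h>

/* Simplified stand-in for the real struct zone. */
struct zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
	bool contiguous;
};

/* Mirrors the kernel helper: first pfn past the zone span. */
static unsigned long zone_end_pfn(const struct zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

/* Returns true only when [start_pfn, end_pfn) lies inside the zone span;
 * the patched pageblock_pfn_to_page() returns NULL at this point instead. */
static bool pageblock_range_in_zone(unsigned long start_pfn,
				    unsigned long end_pfn,
				    const struct zone *zone)
{
	if (start_pfn < zone->zone_start_pfn || end_pfn > zone_end_pfn(zone))
		return false;
	return true;
}

int main(void)
{
	struct zone z = {
		.zone_start_pfn = 0x1000,
		.spanned_pages  = 0x1000,
		.contiguous     = true,
	};

	/* In-zone pageblock: the contiguous fast path may safely be taken. */
	printf("in-zone range accepted: %d\n",
	       pageblock_range_in_zone(0x1400, 0x1600, &z));

	/* Pageblock start rounded down below zone_start_pfn: now rejected
	 * instead of being treated as valid just because zone->contiguous. */
	printf("out-of-zone range accepted: %d\n",
	       pageblock_range_in_zone(0x0c00, 0x0e00, &z));

	return 0;
}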