When freeing a high-order folio that contains HWPoison pages, we need to
ensure the HWPoison pages are not added to the buddy allocator. One way to
do that is to first uniformly split the free and unmapped high-order folio
into 0-order folios, then add only the non-HWPoison folios to the buddy
allocator and exclude the HWPoison ones.

Introduce uniform_split_unmapped_folio_to_zero_order(), a wrapper around
the existing __split_unmapped_folio(). Callers can use it to uniformly
split an unmapped high-order folio into 0-order folios. No functional
change; it will be used in a subsequent commit.

Signed-off-by: Jiaqi Yan
---
 include/linux/huge_mm.h | 6 ++++++
 mm/huge_memory.c        | 8 ++++++++
 2 files changed, 14 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71ac78b9f834f..ef6a84973e157 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -365,6 +365,7 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
 		vm_flags_t vm_flags);
 
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
+int uniform_split_unmapped_folio_to_zero_order(struct folio *folio);
 int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
 int min_order_for_split(struct folio *folio);
@@ -569,6 +570,11 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 {
 	return false;
 }
+static inline int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
+{
+	VM_WARN_ON_ONCE_FOLIO(1, folio);
+	return -EINVAL;
+}
 static inline int split_huge_page_to_list_to_order(struct page *page,
 		struct list_head *list, unsigned int new_order)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 323654fb4f8cf..c7b6c1c75a18e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3515,6 +3515,14 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
+int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
+{
+	return __split_unmapped_folio(folio, /*new_order=*/0,
+				      /*split_at=*/&folio->page,
+				      /*xas=*/NULL, /*mapping=*/NULL,
+				      /*uniform_split=*/true);
+}
+
 bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns)
 {
-- 
2.52.0.rc1.455.g30608eb744-goog

At the end of dissolve_free_hugetlb_folio(), when a free HugeTLB folio
becomes non-HugeTLB, it is released to the buddy allocator as a high-order
folio, e.g. a folio that contains 262144 pages if it was a 1G HugeTLB
hugepage. This is problematic if the HugeTLB hugepage contained HWPoison
subpages: since the buddy allocator does not check HWPoison for
non-zero-order folios, the raw HWPoison page can be handed out together
with its buddy pages and be re-used by either the kernel or userspace.

Memory failure recovery (MFR) in the kernel does attempt to take the raw
HWPoison page off the buddy allocator after dissolve_free_hugetlb_folio().
However, there is always a time window between the page being freed to the
buddy allocator and being taken off it.

One obvious way to avoid this problem is to add page sanity checks to the
page allocation or free path, but that runs counter to past efforts to
reduce sanity-check overhead [1,2,3].

Introduce hugetlb_free_hwpoison_folio() to solve this problem. The idea is:
when a HugeTLB folio is known to contain HWPoison page(s), first split the
(now non-HugeTLB) high-order folio uniformly into 0-order folios, then let
the healthy pages join the buddy allocator while rejecting the HWPoison
ones.
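For illustration, here is a condensed sketch of that flow. It is not part
of this patch: the sketch_free_hwpoison_folio() name is hypothetical, and
the refcount bookkeeping and VM_WARN_ON checks of the real
mm/memory-failure.c hunk below are omitted.

static void sketch_free_hwpoison_folio(struct folio *folio)
{
	struct folio *curr, *next;
	struct folio *end_folio = folio_next(folio);

	/* Step 1: break the unmapped high-order folio into 0-order folios. */
	if (uniform_split_unmapped_folio_to_zero_order(folio))
		return;	/* on split failure, keep every page away from buddy */

	/* Step 2: free healthy 0-order folios; never free HWPoison ones. */
	for (curr = folio; curr != end_folio; curr = next) {
		next = folio_next(curr);
		if (PageHWPoison(&curr->page))
			continue;	/* excluded from the buddy allocator */
		free_frozen_pages(&curr->page, 0);
	}
}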
[1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net/
[2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net/
[3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz

Signed-off-by: Jiaqi Yan
---
 include/linux/hugetlb.h |  4 ++++
 mm/hugetlb.c            |  8 ++++++--
 mm/memory-failure.c     | 43 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8e63e46b8e1f0..e1c334a7db2fe 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -870,8 +870,12 @@ int dissolve_free_hugetlb_folios(unsigned long start_pfn,
 				unsigned long end_pfn);
 
 #ifdef CONFIG_MEMORY_FAILURE
+extern void hugetlb_free_hwpoison_folio(struct folio *folio);
 extern void folio_clear_hugetlb_hwpoison(struct folio *folio);
 #else
+static inline void hugetlb_free_hwpoison_folio(struct folio *folio)
+{
+}
 static inline void folio_clear_hugetlb_hwpoison(struct folio *folio)
 {
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0455119716ec0..801ca1a14c0f0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1596,6 +1596,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 						struct folio *folio)
 {
 	bool clear_flag = folio_test_hugetlb_vmemmap_optimized(folio);
+	bool has_hwpoison = folio_test_hwpoison(folio);
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
@@ -1638,12 +1639,15 @@
 	 * Move PageHWPoison flag from head page to the raw error pages,
 	 * which makes any healthy subpages reusable.
 	 */
-	if (unlikely(folio_test_hwpoison(folio)))
+	if (unlikely(has_hwpoison))
 		folio_clear_hugetlb_hwpoison(folio);
 
 	folio_ref_unfreeze(folio, 1);
 
-	hugetlb_free_folio(folio);
+	if (unlikely(has_hwpoison))
+		hugetlb_free_hwpoison_folio(folio);
+	else
+		hugetlb_free_folio(folio);
 }
 
 /*
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3edebb0cda30b..e6a9deba6292a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2002,6 +2002,49 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 	return ret;
 }
 
+void hugetlb_free_hwpoison_folio(struct folio *folio)
+{
+	struct folio *curr, *next;
+	struct folio *end_folio = folio_next(folio);
+	int ret;
+
+	VM_WARN_ON_FOLIO(folio_ref_count(folio) != 1, folio);
+
+	ret = uniform_split_unmapped_folio_to_zero_order(folio);
+	if (ret) {
+		/*
+		 * In case of split failure, none of the pages in folio
+		 * will be freed to buddy allocator.
+		 */
+		pr_err("%#lx: failed to split free %d-order folio with HWPoison page(s): %d\n",
+		       folio_pfn(folio), folio_order(folio), ret);
+		return;
+	}
+
+	/* Expect 1st folio's refcount==1, and other's refcount==0. */
+	for (curr = folio; curr != end_folio; curr = next) {
+		next = folio_next(curr);
+
+		VM_WARN_ON_FOLIO(folio_order(curr), curr);
+
+		if (PageHWPoison(&curr->page)) {
+			if (curr != folio)
+				folio_ref_inc(curr);
+
+			VM_WARN_ON_FOLIO(folio_ref_count(curr) != 1, curr);
+			pr_warn("%#lx: prevented freeing HWPoison page\n",
+				folio_pfn(curr));
+			continue;
+		}
+
+		if (curr == folio)
+			folio_ref_dec(curr);
+
+		VM_WARN_ON_FOLIO(folio_ref_count(curr), curr);
+		free_frozen_pages(&curr->page, folio_order(curr));
+	}
+}
+
 /*
  * Taking refcount of hugetlb pages needs extra care about race conditions
  * with basic operations like hugepage allocation/free/demotion.
-- 
2.52.0.rc1.455.g30608eb744-goog
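For reviewers who want to poke at this path, below is a hypothetical
userspace sketch (not part of this series) of one way to exercise it: map
a 1G HugeTLB page, record its head PFN via /proc/self/pagemap, return it
to the free pool, then inject hwpoison into one of its subpages through
the hwpoison-inject debugfs interface, which should drive memory_failure()
and the dissolve of the free hugepage. It assumes root, debugfs mounted at
/sys/kernel/debug, CONFIG_HWPOISON_INJECT (or the hwpoison-inject module
loaded), a persistent 1G hugepage pool (e.g. hugepagesz=1G hugepages=1 on
the kernel command line), and ignores races with other hugepage users; the
chosen subpage index (42) is arbitrary. With the series applied, dmesg
should then show the "prevented freeing HWPoison page" warning when the
dissolved hugepage is freed.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#endif

#define GB (1024UL * 1024 * 1024)

/* Translate a virtual address to its PFN via /proc/self/pagemap (needs root). */
static uint64_t vaddr_to_pfn(void *vaddr)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	off_t offset = (off_t)((uintptr_t)vaddr / pagesize) * sizeof(uint64_t);
	uint64_t entry;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0 || pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
		perror("pagemap");
		exit(1);
	}
	close(fd);
	if (!(entry & (1ULL << 63))) {		/* bit 63: page present */
		fprintf(stderr, "page not present\n");
		exit(1);
	}
	return entry & ((1ULL << 55) - 1);	/* bits 0-54: PFN */
}

int main(void)
{
	/* Map and touch a 1G hugepage so it has a known head PFN. */
	void *map = mmap(NULL, GB, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
			 -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	*(volatile char *)map = 0;

	uint64_t head_pfn = vaddr_to_pfn(map);

	/* Return the hugepage to the free HugeTLB pool before injecting. */
	munmap(map, GB);

	/*
	 * Poison an arbitrary subpage of the (now free) hugepage. This drives
	 * memory_failure(), which should dissolve the free hugepage; with this
	 * series, the healthy 0-order pages go to buddy and the poisoned one
	 * is kept out.
	 */
	FILE *f = fopen("/sys/kernel/debug/hwpoison/corrupt-pfn", "w");
	if (!f) {
		perror("corrupt-pfn");
		return 1;
	}
	fprintf(f, "%llu\n", (unsigned long long)(head_pfn + 42));
	fclose(f);

	puts("injected; check dmesg for the memory failure / dissolve messages");
	return 0;
}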