Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.

Proof: Both loops in hpage_collapse_scan_file() and collapse_file(),
which iterate on the xarray, maintain the invariant

	start <= folio->index < start + HPAGE_PMD_NR		... (i)

A folio is always naturally aligned in the pagecache, therefore

	folio_order(folio) == HPAGE_PMD_ORDER =>
		IS_ALIGNED(folio->index, HPAGE_PMD_NR)		... (ii)

thp_vma_allowable_order() -> thp_vma_suitable_order() requires that the
virtual offsets in the VMA are aligned to the order, therefore

	IS_ALIGNED(start, HPAGE_PMD_NR)				... (iii)

Combining (i), (ii) and (iii): start is the only HPAGE_PMD_NR-aligned
index in [start, start + HPAGE_PMD_NR), so folio->index == start, and
the claim is proven. Therefore, the folio->index == start check is
redundant; convert it to a VM_WARN_ON().

Signed-off-by: Dev Jain
---
Based on mm-unstable (d9982f38eb6e). mm-selftests pass.

 mm/khugepaged.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fa1e57fd2c469..f27cbb4d1f62c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * we locked the first folio, then a THP might be there already.
 		 * This will be discovered on the first iteration.
 		 */
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
+			VM_WARN_ON(folio->index != start);
+
 			/* Maybe PMD-mapped */
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			goto out_unlock;
@@ -2329,8 +2330,9 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
 			continue;
 		}
 
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
+			VM_WARN_ON(folio->index != start);
+
 			/* Maybe PMD-mapped */
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			/*
-- 
2.34.1
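
Not part of the patch: below is a minimal userspace sketch of the proof
above, for anyone who wants to check the arithmetic. It assumes
HPAGE_PMD_ORDER == 9 (4K base pages, 2M PMD, as on x86_64) and redefines
IS_ALIGNED() locally; the names mirror the kernel ones but nothing here
comes from kernel headers.

	#include <assert.h>
	#include <stdio.h>

	/* Assumed value for x86_64 with 4K pages; not from kernel headers. */
	#define HPAGE_PMD_ORDER	9
	#define HPAGE_PMD_NR	(1UL << HPAGE_PMD_ORDER)
	#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

	int main(void)
	{
		unsigned long start, index;

		/* (iii): start is HPAGE_PMD_NR-aligned by construction. */
		for (start = 0; start < (1UL << 20); start += HPAGE_PMD_NR) {
			assert(IS_ALIGNED(start, HPAGE_PMD_NR));
			/* (i): the loop invariant start <= index < start + HPAGE_PMD_NR. */
			for (index = start; index < start + HPAGE_PMD_NR; index++) {
				/*
				 * (ii): a PMD-order folio's index is
				 * HPAGE_PMD_NR-aligned. The only aligned
				 * index inside the window is start itself.
				 */
				if (IS_ALIGNED(index, HPAGE_PMD_NR))
					assert(index == start);
			}
		}
		printf("alignment argument holds\n");
		return 0;
	}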