When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
text sections), the pages may still be dirty from recent writes, causing
the collapse to fail with -EINVAL. This is particularly problematic for
freshly copied executables, where page cache folios remain dirty until
background writeback completes.

The current code in collapse_file() triggers async writeback via
filemap_flush() and expects khugepaged to revisit the page later.
However, MADV_COLLAPSE is a synchronous operation where userspace
expects immediate results.

Perform synchronous writeback in madvise_collapse() before attempting
the collapse, so the operation does not fail on the first attempt.

Reported-by: Branden Moore
Closes: https://lore.kernel.org/all/4e26fe5e-7374-467c-a333-9dd48f85d7cc@amd.com
Fixes: 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE")
Suggested-by: David Hildenbrand
Signed-off-by: Shivank Garg
---
 mm/khugepaged.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 97d1b2824386..066a332c76ad 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include

 #include
 #include "internal.h"
@@ -2784,6 +2785,31 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
 	hend = end & HPAGE_PMD_MASK;

+	/*
+	 * For file-backed VMAs, perform synchronous writeback to ensure
+	 * dirty folios are flushed before attempting collapse. This avoids
+	 * failing on the first attempt when freshly-written executable text
+	 * is still dirty in the page cache.
+	 */
+	if (!vma_is_anonymous(vma) && vma->vm_file) {
+		struct address_space *mapping = vma->vm_file->f_mapping;
+
+		if (mapping_can_writeback(mapping)) {
+			pgoff_t pgoff_start = linear_page_index(vma, hstart);
+			pgoff_t pgoff_end = linear_page_index(vma, hend);
+			loff_t lstart = (loff_t)pgoff_start << PAGE_SHIFT;
+			loff_t lend = ((loff_t)pgoff_end << PAGE_SHIFT) - 1;
+
+			mmap_read_unlock(mm);
+			mmap_locked = false;
+
+			if (filemap_write_and_wait_range(mapping, lstart, lend)) {
+				last_fail = SCAN_FAIL;
+				goto out_maybelock;
+			}
+		}
+	}
+
 	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
 		int result = SCAN_FAIL;

-- 
2.43.0

When collapse_file() encounters dirty or writeback pages in file-backed
mappings, it currently returns SCAN_FAIL, which maps to -EINVAL. This is
misleading, as -EINVAL suggests invalid arguments, whereas dirty or
writeback pages represent transient conditions that may resolve on retry.

Introduce SCAN_PAGE_NOT_CLEAN to cover both the dirty and writeback
states, and map it to -EAGAIN. For MADV_COLLAPSE, this provides userspace
with a clear signal that a retry may succeed after writeback completes,
making -EAGAIN semantically correct. For khugepaged, this is harmless, as
it will naturally revisit the range during periodic scans after async
writeback completes.
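(Not part of this patch, just to illustrate the intended userspace
contract: the helper below is hypothetical, the retry count and delay are
arbitrary, and MADV_COLLAPSE requires Linux 6.1+ and a libc that exposes
it through <sys/mman.h>, otherwise pull the definition from
<linux/mman.h>.)

#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Hypothetical helper: retry MADV_COLLAPSE while the kernel reports a
 * transient condition. With this change, -EAGAIN also covers
 * dirty/writeback folios (SCAN_PAGE_NOT_CLEAN), so backing off briefly
 * and retrying is a reasonable userspace policy.
 */
static int collapse_with_retry(void *addr, size_t len, int max_tries)
{
	while (max_tries--) {
		if (!madvise(addr, len, MADV_COLLAPSE))
			return 0;		/* range is now PMD-mapped */
		if (errno != EAGAIN)
			return -1;		/* hard failure, don't retry */
		usleep(10 * 1000);		/* let writeback make progress */
	}
	return -1;
}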
Signed-off-by: Shivank Garg
---
 include/trace/events/huge_memory.h | 3 ++-
 mm/khugepaged.c                    | 8 +++++---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 4cde53b45a85..1caf24b951e1 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -37,7 +37,8 @@
 	EM( SCAN_PAGE_HAS_PRIVATE,	"page_has_private")	\
 	EM( SCAN_STORE_FAILED,		"store_failed")		\
 	EM( SCAN_COPY_MC,		"copy_poisoned_page")	\
-	EMe(SCAN_PAGE_FILLED,		"page_filled")
+	EM( SCAN_PAGE_FILLED,		"page_filled")		\
+	EMe(SCAN_PAGE_NOT_CLEAN,	"page_not_clean")

 #undef EM
 #undef EMe
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 066a332c76ad..282b413d17e8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -59,6 +59,7 @@ enum scan_result {
 	SCAN_STORE_FAILED,
 	SCAN_COPY_MC,
 	SCAN_PAGE_FILLED,
+	SCAN_PAGE_NOT_CLEAN,
 };

 #define CREATE_TRACE_POINTS
@@ -1968,11 +1969,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 */
 			xas_unlock_irq(&xas);
 			filemap_flush(mapping);
-			result = SCAN_FAIL;
+			result = SCAN_PAGE_NOT_CLEAN;
 			goto xa_unlocked;
 		} else if (folio_test_writeback(folio)) {
 			xas_unlock_irq(&xas);
-			result = SCAN_FAIL;
+			result = SCAN_PAGE_NOT_CLEAN;
 			goto xa_unlocked;
 		} else if (folio_trylock(folio)) {
 			folio_get(folio);
@@ -2019,7 +2020,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * folio is dirty because it hasn't been flushed
 		 * since first write.
 		 */
-		result = SCAN_FAIL;
+		result = SCAN_PAGE_NOT_CLEAN;
 		goto out_unlock;
 	}

@@ -2748,6 +2749,7 @@ static int madvise_collapse_errno(enum scan_result r)
 	case SCAN_PAGE_LRU:
 	case SCAN_DEL_PAGE_LRU:
 	case SCAN_PAGE_FILLED:
+	case SCAN_PAGE_NOT_CLEAN:
 		return -EAGAIN;
 	/*
 	 * Other: Trying again likely not to succeed / error intrinsic to
-- 
2.43.0
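For reference, a rough sketch of the scenario the first patch addresses:
a freshly written file whose page cache is still dirty is mapped like
program text and collapsed. This is illustrative only, not the reporter's
reproducer; the path, sizes and alignment handling are made up,
MADV_COLLAPSE needs the same userspace bits noted above, and collapsing
regular-file (non-shmem) text additionally assumes a kernel built with
CONFIG_READ_ONLY_THP_WITH_FS.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_LEN	(2UL << 20)		/* 2 MiB, PMD size on x86-64 */
#define LEN	(2 * PMD_LEN)

int main(void)
{
	/*
	 * Write a fresh file and leave its page cache dirty, the way a
	 * just-copied executable would be. The path is illustrative.
	 */
	int fd = open("/tmp/collapse-demo", O_RDWR | O_CREAT | O_TRUNC, 0755);
	char *buf = calloc(1, LEN);

	if (fd < 0 || !buf || pwrite(fd, buf, LEN, 0) != (ssize_t)LEN) {
		perror("setup");
		return 1;
	}

	/*
	 * File-backed collapse wants the text mapping PMD-aligned (file
	 * offset 0 here), so reserve a slot and place the mapping at an
	 * aligned address inside it.
	 */
	char *slot = mmap(NULL, LEN + PMD_LEN, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (slot == MAP_FAILED) {
		perror("mmap reserve");
		return 1;
	}
	char *aligned = (char *)(((uintptr_t)slot + PMD_LEN - 1) & ~(PMD_LEN - 1));

	/* Map it like program text: read-only, executable, private. */
	void *text = mmap(aligned, LEN, PROT_READ | PROT_EXEC,
			  MAP_PRIVATE | MAP_FIXED, fd, 0);
	if (text == MAP_FAILED) {
		perror("mmap text");
		return 1;
	}

	/*
	 * Without the first patch this typically fails while the folios
	 * are still dirty; with it, madvise_collapse() writes them back
	 * synchronously before attempting the collapse.
	 */
	if (madvise(text, LEN, MADV_COLLAPSE))
		perror("madvise(MADV_COLLAPSE)");
	else
		puts("collapsed");

	return 0;
}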