Initialize nr_pages to 1 at the start of each loop iteration, like
folio_referenced_one() does. Without this, the nr_pages computed by a
previous folio_unmap_pte_batch() call can be reused on a later iteration
that does not run folio_unmap_pte_batch() again.

I don't think this is causing a bug today, but it is fragile. A real bug
would require this sequence within the same try_to_unmap_one() call:

1. Hit the pte_present(pteval) branch and set nr_pages > 1.
2. Later hit the else branch, do pte_clear() for a device-exclusive PTE,
   and execute the rest of the code with nr_pages > 1.

Executing the above would imply a lazyfree folio mapped by a mix of
present PTEs and device-exclusive PTEs. In practice, device-exclusive
PTEs imply a GUP pin on the folio, and lazyfree unmapping aborts
try_to_unmap_one() when it detects that condition. So today this likely
does not manifest, but initializing nr_pages per iteration is still the
correct and safer behavior.

Signed-off-by: Dev Jain
---
 mm/rmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 78b7fb5f367ce..62a8c912fd788 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1991,7 +1991,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	struct page *subpage;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
-	unsigned long nr_pages = 1, end_addr;
+	unsigned long nr_pages;
+	unsigned long end_addr;
 	unsigned long pfn;
 	unsigned long hsz = 0;
 	int ptes = 0;
@@ -2030,6 +2031,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
+		nr_pages = 1;
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
-- 
2.34.1