From: Barry Song

If do_swap_page() took the per-VMA lock and dropped it only to wait for
I/O completion (e.g., via folio_wait_locked()), then when do_swap_page()
is retried after the I/O completes, it should still qualify for the
per-VMA-lock path.

Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: Liam R. Howlett
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Alexander Gordeev
Cc: Gerald Schaefer
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: H. Peter Anvin
Cc: Matthew Wilcox
Cc: Pedro Falcato
Cc: Jarkko Sakkinen
Cc: Oscar Salvador
Cc: Kuninori Morimoto
Cc: Oven Liyang
Cc: Mark Rutland
Cc: Ada Couprie Diaz
Cc: Robin Murphy
Cc: Kristina Martšenko
Cc: Kevin Brodsky
Cc: Yeoreum Yun
Cc: Wentao Guan
Cc: Thorsten Blum
Cc: Steven Rostedt
Cc: Yunhui Cui
Cc: Nam Cao
Cc: Chris Li
Cc: Kairui Song
Cc: Kemeng Shi
Cc: Nhat Pham
Cc: Baoquan He
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: loongarch@lists.linux.dev
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-riscv@lists.infradead.org
Cc: linux-s390@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Barry Song
---
 mm/memory.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4f933fedd33e..7f70f0324dcf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4654,6 +4654,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	unsigned long page_idx;
 	unsigned long address;
 	pte_t *ptep;
+	bool retry_by_vma_lock = false;
 
 	if (!pte_unmap_same(vmf))
 		goto out;
@@ -4758,8 +4759,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swapcache = folio;
 
 	ret |= folio_lock_or_retry(folio, vmf);
-	if (ret & VM_FAULT_RETRY)
+	if (ret & VM_FAULT_RETRY) {
+		if (fault_flag_allow_retry_first(vmf->flags) &&
+		    !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT) &&
+		    (vmf->flags & FAULT_FLAG_VMA_LOCK))
+			retry_by_vma_lock = true;
 		goto out_release;
+	}
 
 	page = folio_file_page(folio, swp_offset(entry));
 	/*
@@ -5044,7 +5050,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 	if (si)
 		put_swap_device(si);
-	return ret;
+	return ret | (retry_by_vma_lock ? VM_FAULT_RETRY_VMA : 0);
 }
 
 static bool pte_range_none(pte_t *pte, int nr_pages)
-- 
2.39.3 (Apple Git-146)
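
For context on how the new return value is meant to be consumed, below is a
minimal, hypothetical sketch (not part of this patch) of an architecture's
per-VMA-lock fault path reacting to VM_FAULT_RETRY_VMA. The helper name
fault_via_vma_lock() and its overall shape are assumptions made purely for
illustration; lock_vma_under_rcu(), handle_mm_fault(), vma_end_read() and the
pre-existing FAULT_FLAG_* / VM_FAULT_* values are real kernel interfaces, and
signal handling, permission checks and accounting are omitted.

/*
 * Illustrative sketch only: how a per-VMA-lock fault path could react to
 * the VM_FAULT_RETRY_VMA bit that do_swap_page() now sets when the retry
 * was taken purely to wait for swap I/O. Hypothetical helper, simplified.
 */
static vm_fault_t fault_via_vma_lock(struct mm_struct *mm, unsigned long addr,
				     unsigned int flags, struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	vm_fault_t fault;

retry:
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		return VM_FAULT_RETRY;	/* caller falls back to mmap_lock */

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	/* On retry/completion the VMA read lock was already dropped inside. */
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

	if ((fault & VM_FAULT_RETRY) && (fault & VM_FAULT_RETRY_VMA)) {
		/*
		 * The retry was only to wait for swap I/O completion; the
		 * fault still qualifies for the per-VMA-lock path, so retry
		 * here instead of dropping to the mmap_lock slow path.
		 */
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

	return fault;
}

Because FAULT_FLAG_TRIED is set before the retry, fault_flag_allow_retry_first()
fails on the second pass, so a repeated VM_FAULT_RETRY will not carry
VM_FAULT_RETRY_VMA again and such a caller would naturally fall back to the
mmap_lock path.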