If two or more threads of an application fault on the same folio, the
mmap_miss counter can be decreased multiple times. This breaks the
mmap_miss heuristic and keeps readahead enabled even under extreme
levels of memory pressure. It happens often when file folios backing a
multi-threaded application are evicted and re-faulted.

Fix it by skipping the mmap_miss decrease if the folio is locked.

This change was evaluated on several hundred thousand hosts in Google's
production over a couple of weeks. The number of containers stuck in a
vicious reclaim cycle for long periods was reduced several fold
(~10-20x), and the overall fleet-wide CPU time spent in direct memory
reclaim was meaningfully reduced. No regressions were observed.

Signed-off-by: Roman Gushchin
Cc: Matthew Wilcox (Oracle)
Cc: Jan Kara
Cc: linux-mm@kvack.org
---
 mm/filemap.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index c21e98657e0b..983ba1019674 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3324,9 +3324,17 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 	if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages)
 		return fpin;
 
-	mmap_miss = READ_ONCE(ra->mmap_miss);
-	if (mmap_miss)
-		WRITE_ONCE(ra->mmap_miss, --mmap_miss);
+	/*
+	 * If the folio is locked, we're likely racing against another fault.
+	 * Don't touch the mmap_miss counter to avoid decreasing it multiple
+	 * times for a single folio and break the balance with mmap_miss
+	 * increase in do_sync_mmap_readahead().
+	 */
+	if (likely(!folio_test_locked(folio))) {
+		mmap_miss = READ_ONCE(ra->mmap_miss);
+		if (mmap_miss)
+			WRITE_ONCE(ra->mmap_miss, --mmap_miss);
+	}
 
 	if (folio_test_readahead(folio)) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-- 
2.50.1
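
For reference, a simplified sketch (not the actual kernel code) of how the
mmap_miss heuristic is meant to stay balanced: the synchronous fault path in
do_sync_mmap_readahead() bumps the counter on a page-cache miss and compares
it against MMAP_LOTSAMISS, while the async path decrements it on a hit. The
struct and function names below are stand-ins for illustration only.

	/*
	 * Illustrative sketch only -- not the kernel implementation. The
	 * struct and helpers are stand-ins; MMAP_LOTSAMISS and the
	 * mmap_miss field mirror names used in mm/filemap.c.
	 */
	#include <stdbool.h>

	#define MMAP_LOTSAMISS 100

	struct file_ra_state_stub {
		unsigned int mmap_miss;	/* recent cache misses seen on faults */
	};

	/*
	 * Sync fault path (folio not in the page cache): record a miss and
	 * decide whether readahead is still worth doing.
	 */
	static bool sync_fault_should_read_ahead(struct file_ra_state_stub *ra)
	{
		unsigned int mmap_miss = ra->mmap_miss;

		if (mmap_miss < MMAP_LOTSAMISS * 10)
			ra->mmap_miss = ++mmap_miss;

		/* Too many misses: readahead is likely wasted work, skip it. */
		return mmap_miss <= MMAP_LOTSAMISS;
	}

	/*
	 * Async path (folio found in the page cache): "forgive" one miss,
	 * unless the folio is locked, which, per this patch, suggests another
	 * thread is concurrently faulting on the same folio and the hit must
	 * not be counted more than once.
	 */
	static void async_fault_count_hit(struct file_ra_state_stub *ra,
					  bool folio_locked)
	{
		if (folio_locked)
			return;

		if (ra->mmap_miss)
			ra->mmap_miss--;
	}

The real code accesses ra->mmap_miss through READ_ONCE()/WRITE_ONCE() because
the counter is intentionally racy; the sketch omits that detail.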