From: Lance Yang

When a new THP is faulted in or collapsed, it is unconditionally added
to the deferred split queue. If this THP is subsequently mlocked, it
remains on the queue but is removed from the LRU and marked unevictable.

During memory reclaim, deferred_split_scan() will still pick up this
large folio. Because it's not partially mapped, it will proceed to call
thp_underused() and then attempt to split_folio() to free all
zero-filled subpages.

This is a pointless waste of CPU cycles. The folio is mlocked and
unevictable, so any attempt to reclaim memory from it via splitting is
doomed to fail.

So, let's add an early folio_test_mlocked() check to skip this case.

Signed-off-by: Lance Yang
---
 mm/huge_memory.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 77f0c3417973..d2e84015d6b4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4183,6 +4183,9 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 		bool underused = false;
 
 		if (!folio_test_partially_mapped(folio)) {
+			/* An mlocked folio is not a candidate for the shrinker. */
+			if (folio_test_mlocked(folio))
+				goto next;
 			underused = thp_underused(folio);
 			if (!underused)
 				goto next;
-- 
2.49.0