Commit 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()") return false unconditionally after split_huge_pmd_locked() which may fail early during try_to_migrate() for shared thp. This will lead to unexpected folio split failure. One way to reproduce: Create an anonymous thp range and fork 512 children, so we have a thp shared mapped in 513 processes. Then trigger folio split with /sys/kernel/debug/split_huge_pages debugfs to split the thp folio to order 0. Without the above commit, we can successfully split to order 0. With the above commit, the folio is still a large folio. The reason is the above commit return false after split pmd unconditionally in the first process and break try_to_migrate(). The tricky thing in above reproduce method is current debugfs interface leverage function split_huge_pages_pid(), which will iterate the whole pmd range and do folio split on each base page address. This means it will try 512 times, and each time split one pmd from pmd mapped to pte mapped thp. If there are less than 512 shared mapped process, the folio is still split successfully at last. But in real world, we usually try it for once. This patch fixes this by removing the unconditional false return after split_huge_pmd_locked(). Later, we may introduce a true fail early if split_huge_pmd_locked() does fail. Signed-off-by: Wei Yang Fixes: 60fbb14396d5 ("mm/huge_memory: adjust try_to_migrate_one() and split_huge_pmd_locked()") Cc: Gavin Guo Cc: "David Hildenbrand (Red Hat)" Cc: Zi Yan Cc: Baolin Wang Cc: --- mm/rmap.c | 1 - 1 file changed, 1 deletion(-) diff --git a/mm/rmap.c b/mm/rmap.c index 618df3385c8b..eed971568d65 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2448,7 +2448,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, if (flags & TTU_SPLIT_HUGE_PMD) { split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, true); - ret = false; page_vma_mapped_walk_done(&pvmw); break; } -- 2.34.1