In alloc_demote_folio(), mtc->nmask is set to NULL for the first
allocation attempt. If that attempt succeeds, the function returns
without restoring mtc->nmask to allowed_mask. For subsequent
allocations in the same migrate_pages() batch, mtc->nmask therefore
remains NULL. If the target node then becomes full, the fallback
allocation runs with nmask == NULL and can allocate from any node
allowed by the task's cpuset, which for kswapd is all nodes.

Fix this by restoring mtc->nmask to the original allowed nodemask
immediately after the first allocation attempt, before the early
return.

Signed-off-by: Bing Jiao
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index cbffc0a27824..b42abd17aee7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -985,11 +985,11 @@ static struct folio *alloc_demote_folio(struct folio *src,
 	mtc->nmask = NULL;
 	mtc->gfp_mask |= __GFP_THISNODE;
 	dst = alloc_migration_target(src, (unsigned long)mtc);
+	mtc->nmask = allowed_mask;
 	if (dst)
 		return dst;
 
 	mtc->gfp_mask &= ~__GFP_THISNODE;
-	mtc->nmask = allowed_mask;
 
 	return alloc_migration_target(src, (unsigned long)mtc);
 }
-- 
2.53.0.473.g4a7958ca14-goog