From: Muchun Song

In the near future, a folio will no longer pin its corresponding
memory cgroup. To ensure safety, it will only be appropriate to hold
the rcu read lock or acquire a reference to the memory cgroup returned
by folio_memcg(), thereby preventing it from being released.

In the current patch, the rcu read lock is employed to safeguard
against the release of the memory cgroup in lru_gen_eviction(). This
serves as a preparatory measure for the reparenting of the LRU pages.

Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
---
 mm/workingset.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 8cad8ee6dec6a..c4d21c15bad51 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -241,11 +241,14 @@ static void *lru_gen_eviction(struct folio *folio)
 	int refs = folio_lru_refs(folio);
 	bool workingset = folio_test_workingset(folio);
 	int tier = lru_tier_from_refs(refs, workingset);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat = folio_pgdat(folio);
+	unsigned short memcg_id;
 
 	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
 
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
 	min_seq = READ_ONCE(lrugen->min_seq[type]);
@@ -253,8 +256,10 @@ static void *lru_gen_eviction(struct folio *folio)
 
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
+	memcg_id = mem_cgroup_id(memcg);
+	rcu_read_unlock();
 
-	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, workingset);
+	return pack_shadow(memcg_id, pgdat, token, workingset);
 }
 
 /*
-- 
2.20.1
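
For reference, a minimal sketch (not part of the patch) of the two patterns the
commit message describes for callers of folio_memcg() once a folio no longer
pins its memcg: either consume the memcg entirely inside an RCU read-side
section, as lru_gen_eviction() does above, or take a reference so the memcg can
be used after rcu_read_unlock(). The helper names folio_memcg_id_stable() and
folio_memcg_get() are made up for illustration only; mem_cgroup_tryget() and
mem_cgroup_put() are assumed to be the existing <linux/memcontrol.h> helpers.

/* Pattern 1: consume the memcg entirely under the RCU read lock. */
static unsigned short folio_memcg_id_stable(struct folio *folio)
{
	struct mem_cgroup *memcg;
	unsigned short id;

	rcu_read_lock();
	memcg = folio_memcg(folio);	/* only stable until rcu_read_unlock() */
	id = mem_cgroup_id(memcg);
	rcu_read_unlock();

	return id;
}

/* Pattern 2: pin the memcg if it must be used outside the RCU section. */
static struct mem_cgroup *folio_memcg_get(struct folio *folio)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	if (!mem_cgroup_tryget(memcg))	/* cgroup is already being freed */
		memcg = NULL;
	rcu_read_unlock();

	return memcg;			/* caller drops it with mem_cgroup_put() */
}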