Btrfs currently tracks its metadata pages in the page cache, using a fake inode (fs_info->btree_inode) with offsets corresponding to where the metadata is stored in the filesystem's full logical address space. A consequence of this is that when btrfs uses filemap_add_folio(), the usage is charged to the cgroup of whichever task happens to be running at the time. These folios don't belong to any particular user cgroup, so I don't think it makes much sense for them to be charged in that way. Some negative consequences as a result:

- A task can be holding some important btrfs locks, then need to look up
  some metadata and go into reclaim, extending the duration it holds
  that lock for, and unfairly pushing its own reclaim pain onto other
  cgroups.
- If that cgroup goes into reclaim, it might reclaim these folios that a
  different, non-reclaiming cgroup might need soon. This is naturally
  offset by LRU reclaim, but still.

A very similar proposal to use the root cgroup was previously made by Qu, where he eventually proposed the idea of setting it per address_space. This makes good sense for the btrfs use case, as the uncharged behavior should apply to all use of the address_space, not select allocations. I.e., if someone adds another filemap_add_folio() call using btrfs's btree_inode, we would almost certainly want the uncharged behavior.
Link: https://lore.kernel.org/linux-mm/b5fef5372ae454a7b6da4f2f75c427aeab6a07d6.1727498749.git.wqu@suse.com/
Suggested-by: Qu Wenruo
Signed-off-by: Boris Burkov
---
 include/linux/pagemap.h |  1 +
 mm/filemap.c            | 12 ++++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 12a12dae727d..1ca63761a13a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -211,6 +211,7 @@ enum mapping_flags {
 			   folio contents */
 	AS_INACCESSIBLE = 8,	/* Do not attempt direct R/W access to the mapping */
 	AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM = 9,
+	AS_UNCHARGED = 10,	/* Do not charge usage to a cgroup */
 	/* Bits 16-25 are used for FOLIO_ORDER */
 	AS_FOLIO_ORDER_BITS = 5,
 	AS_FOLIO_ORDER_MIN = 16,
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..6046e7f27709 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -960,15 +960,19 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 {
 	void *shadow = NULL;
 	int ret;
+	bool charge_mem_cgroup = !test_bit(AS_UNCHARGED, &mapping->flags);

-	ret = mem_cgroup_charge(folio, NULL, gfp);
-	if (ret)
-		return ret;
+	if (charge_mem_cgroup) {
+		ret = mem_cgroup_charge(folio, NULL, gfp);
+		if (ret)
+			return ret;
+	}

 	__folio_set_locked(folio);
 	ret = __filemap_add_folio(mapping, folio, index, gfp, &shadow);
 	if (unlikely(ret)) {
-		mem_cgroup_uncharge(folio);
+		if (charge_mem_cgroup)
+			mem_cgroup_uncharge(folio);
 		__folio_clear_locked(folio);
 	} else {
 		/*
-- 
2.50.1

Uncharged pages are tricky to track by their essential "uncharged" nature. To maintain good accounting, introduce a vmstat counter tracking all uncharged pages. Since this is only meaningful when cgroups are configured, only expose the counter when CONFIG_MEMCG is set.
Confirmed that these work as expected at a high level by mounting a btrfs using AS_UNCHARGED for metadata pages, and seeing the counter rise with fs usage, then go back to a minimal level after drop_caches, and finally down to 0 after unmounting the fs.

Suggested-by: Shakeel Butt
Signed-off-by: Boris Burkov
---
 include/linux/mmzone.h |  3 +++
 mm/filemap.c           | 17 +++++++++++++++++
 mm/vmstat.c            |  3 +++
 3 files changed, 23 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..f6d885c97e99 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -245,6 +245,9 @@ enum node_stat_item {
 	NR_HUGETLB,
 #endif
 	NR_BALLOON_PAGES,
+#ifdef CONFIG_MEMCG
+	NR_UNCHARGED_FILE_PAGES,
+#endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/mm/filemap.c b/mm/filemap.c
index 6046e7f27709..cd5af44a838c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -146,6 +146,19 @@ static void page_cache_delete(struct address_space *mapping,
 	mapping->nrpages -= nr;
 }

+#ifdef CONFIG_MEMCG
+static void filemap_mod_uncharged_vmstat(struct folio *folio, int sign)
+{
+	long nr = folio_nr_pages(folio) * sign;
+
+	lruvec_stat_mod_folio(folio, NR_UNCHARGED_FILE_PAGES, nr);
+}
+#else
+static void filemap_mod_uncharged_vmstat(struct folio *folio, int sign)
+{
+}
+#endif
+
 static void filemap_unaccount_folio(struct address_space *mapping,
 		struct folio *folio)
 {
@@ -190,6 +203,8 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
 		filemap_nr_thps_dec(mapping);
 	}
+	if (test_bit(AS_UNCHARGED, &folio->mapping->flags))
+		filemap_mod_uncharged_vmstat(folio, -1);

 	/*
 	 * At this point folio must be either written or cleaned by
@@ -987,6 +1002,8 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 		if (!(gfp & __GFP_WRITE) && shadow)
 			workingset_refault(folio, shadow);
 		folio_add_lru(folio);
+		if (!charge_mem_cgroup)
+			filemap_mod_uncharged_vmstat(folio, 1);
 	}
 	return ret;
 }
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 71cd1ceba191..c8c8b5831f2d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1289,6 +1289,9 @@ const char * const vmstat_text[] = {
 	[I(NR_HUGETLB)] = "nr_hugetlb",
 #endif
 	[I(NR_BALLOON_PAGES)] = "nr_balloon_pages",
+#ifdef CONFIG_MEMCG
+	[I(NR_UNCHARGED_FILE_PAGES)] = "nr_uncharged_file_pages",
+#endif
 #undef I

 	/* system-wide enum vm_stat_item counters */
-- 
2.50.1

extent_buffers are global and shared, so their pages should not belong to any particular cgroup (currently, whichever cgroup happens to allocate the extent_buffer). Btrfs tree operations should not arbitrarily block on cgroup reclaim or have the shared extent_buffer pages on a cgroup's reclaim lists.

Signed-off-by: Boris Burkov
---
 fs/btrfs/disk-io.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 123c397ca8f8..6a6ed8c1389c 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1930,6 +1930,7 @@ static int btrfs_init_btree_inode(struct super_block *sb)
 	BTRFS_I(inode)->root = btrfs_grab_root(fs_info->tree_root);
 	set_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags);
 	__insert_inode_hash(inode, hash);
+	set_bit(AS_UNCHARGED, &inode->i_mapping->flags);
 	fs_info->btree_inode = inode;

 	return 0;
-- 
2.50.1