Add kernel command line option "count_zero_page" to track anonymous pages
that have been allocated and mapped to userspace but are still zero-filled.
This feature is mainly used to debug the large folio mechanism, which
pre-allocates and maps more pages than actually needed, leading to memory
waste from unaccessed pages. Export the result in /proc/pid/smaps as the
"AnonZero" field.

Link: https://lore.kernel.org/linux-mm/20260210043456.2137482-1-haowenchao22@gmail.com/
Signed-off-by: Wenchao Hao
---
 Documentation/filesystems/proc.rst |  5 +++++
 fs/proc/task_mmu.c                 | 10 ++++++++++
 2 files changed, 15 insertions(+)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index b0c0d1b45b99..573c8b015e39 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -545,6 +545,11 @@ replaced by copy-on-write) part of the underlying shmem object out on swap.
 does not take into account swapped out page of underlying shmem objects.
 "Locked" indicates whether the mapping is locked in memory or not.
 
+"AnonZero" shows the size of anonymous pages that have never been accessed
+after mapping; it can reflect the memory waste caused by huge pages. It is
+implemented by scanning the VMA for zero-filled pages. The feature is
+disabled by default; enable it with the cmdline param "count_zero_page=true".
+
 "THPeligible" indicates whether the mapping is eligible for allocating
 naturally aligned THP pages of any currently enabled size. 1 if true, 0
 otherwise.
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dd3b5cf9f0b7..c39ebd015724 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -880,6 +880,7 @@ struct mem_size_stats {
 	u64 pss_dirty;
 	u64 pss_locked;
 	u64 swap_pss;
+	u64 anon_zero;
 };
 
 static void smaps_page_accumulate(struct mem_size_stats *mss,
@@ -912,6 +913,10 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
 	}
 }
 
+/* Whether to scan for and count zero-filled pages */
+static bool count_zero_page;
+core_param(count_zero_page, count_zero_page, bool, 0644);
+
 static void smaps_account(struct mem_size_stats *mss, struct page *page,
 		bool compound, bool young, bool dirty, bool locked,
 		bool present)
@@ -931,6 +936,9 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
 		if (!folio_test_swapbacked(folio) && !dirty &&
 		    !folio_test_dirty(folio))
 			mss->lazyfree += size;
+
+		if (count_zero_page && pages_identical(page, ZERO_PAGE(0)))
+			mss->anon_zero += PAGE_SIZE;
 	}
 
 	if (folio_test_ksm(folio))
@@ -1363,6 +1371,8 @@ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss,
 			    mss->swap_pss >> PSS_SHIFT);
 	SEQ_PUT_DEC(" kB\nLocked: ",
 		    mss->pss_locked >> PSS_SHIFT);
+	if (count_zero_page)
+		SEQ_PUT_DEC(" kB\nAnonZero: ", mss->anon_zero);
 	seq_puts(m, " kB\n");
 }
 
-- 
2.45.0
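
With the patch applied and count_zero_page=true on the kernel command line, each VMA in /proc/pid/smaps gains an "AnonZero:" line. A minimal userspace sketch of how a debugging tool might total the field across a process's mappings (the parser and function name are illustrative assumptions, not part of the patch; only the "AnonZero:" field name comes from it):

```python
def total_anon_zero_kb(smaps_text: str) -> int:
    """Sum all "AnonZero:" values (in kB) across the VMAs of an smaps dump.

    Hypothetical helper for illustration; smaps lines look like
    "AnonZero:           2048 kB", one per VMA when the feature is enabled.
    """
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("AnonZero:"):
            total += int(line.split()[1])
    return total

# Sample smaps excerpt with two VMAs reporting zero-filled anonymous pages
sample = """\
Rss:                4096 kB
AnonZero:           2048 kB
Locked:                0 kB
AnonZero:            512 kB
"""
print(total_anon_zero_kb(sample))  # -> 2560
```

In a real tool the text would come from open(f"/proc/{pid}/smaps").read(); a large AnonZero total relative to Rss would suggest over-allocation by large folios.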