There was recently some confusion around THPs and the interaction with
KernelPageSize / MMUPageSize. Historically, these entries always
correspond to the smallest size we could encounter, not any current
usage of transparent huge pages or larger sizes used by the MMU.

Ever since we added THP support many, many years ago, these entries
would keep reporting the smallest (fallback) granularity in a VMA. For
this reason, they default to PAGE_SIZE for all VMAs except for VMAs
where we have the guarantee that the system and the MMU will always use
larger page sizes. hugetlb, for example, exposes a custom
vm_ops->pagesize callback to handle that. Similarly, dax/device exposes
a custom vm_ops->pagesize callback and provides similar guarantees.

Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
regarding PMD entries.

While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
and "ShmemPmdMapped" entries really mean, and make it clear that there
are no other entries for other THP/folio sizes or mappings.

Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@linux.intel.com/
Cc: Andrew Morton
Cc: Lorenzo Stoakes
Cc: Zi Yan
Cc: Baolin Wang
Cc: Liam R. Howlett
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Barry Song
Cc: Lance Yang
Cc: Jonathan Corbet
Cc: Shuah Khan
Cc: Usama Arif
Cc: Andi Kleen
Signed-off-by: David Hildenbrand (Arm)
---
 Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index b0c0d1b45b99..0f67e47528fc 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -464,6 +464,7 @@ Memory Area, or VMA) there is a series of lines such as the following::
     KSM:                   0 kB
     LazyFree:              0 kB
     AnonHugePages:         0 kB
+    FilePmdMapped:         0 kB
     ShmemPmdMapped:        0 kB
     Shared_Hugetlb:        0 kB
     Private_Hugetlb:       0 kB
@@ -477,13 +478,25 @@ Memory Area, or VMA) there is a series of lines such as the following::
 
 The first of these lines shows the same information as is displayed for the
 mapping in /proc/PID/maps.  Following lines show the size of the
-mapping (size); the size of each page allocated when backing a VMA
-(KernelPageSize), which is usually the same as the size in the page table
-entries; the page size used by the MMU when backing a VMA (in most cases,
-the same as KernelPageSize); the amount of the mapping that is currently
-resident in RAM (RSS); the process's proportional share of this mapping
-(PSS); and the number of clean and dirty shared and private pages in the
-mapping.
+mapping (size); the smallest possible page size allocated when
+backing a VMA (KernelPageSize), which is the granularity in which VMA
+modifications can be performed; the smallest possible page size that could
+be used by the MMU (MMUPageSize) when backing a VMA; the amount of the
+mapping that is currently resident in RAM (RSS); the process's proportional
+share of this mapping (PSS); and the number of clean and dirty shared and
+private pages in the mapping.
+
+Historically, the "KernelPageSize" always corresponds to the "MMUPageSize",
+except when a larger kernel page size is emulated on a system with a smaller
+page size used by the MMU, which was the case for PPC64 in the past.
+Further, "KernelPageSize" and "MMUPageSize" always correspond to the
+smallest possible granularity (fallback) that could be encountered in a
+VMA throughout its lifetime. These values are not affected by any current
+transparent grouping of pages by Linux (Transparent Huge Pages) or any
+current usage of larger MMU page sizes (either through architectural
+huge-page mappings or other transparent groupings done by the MMU).
+"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
+the usage of some architectural huge-page mappings.
 
 The "proportional set size" (PSS) of a process is the count of pages it has
 in memory, where each page is divided by the number of processes sharing it.
@@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
 be lower than the real value due to optimizations used in the current
 implementation. If this is not desirable please file a bug report.
 
-"AnonHugePages" shows the amount of memory backed by transparent hugepage.
+"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
+memory backed by transparent hugepages that are currently mapped through
+architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory
+that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
+and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
 
-"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
-huge pages.
+There are no dedicated entries for transparent huge pages (or similar concepts)
+that are not mapped through architectural huge-page mappings (PMD).
 
 "Shared_Hugetlb" and "Private_Hugetlb" show the amounts of memory backed by
 hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
-- 
2.43.0
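The distinction documented above can be sketched by parsing a smaps-style entry: even when an anonymous region is currently fully PMD-mapped by THP, KernelPageSize/MMUPageSize keep reporting the smallest (fallback) granularity, and only "AnonHugePages" reveals the PMD mappings. The sample entry and the `parse_smaps_entry` helper below are illustrative inventions for this sketch, not part of the patch or of any kernel API:

```python
# Hypothetical /proc/PID/smaps entry for a 4 MiB anonymous region that is
# currently fully PMD-mapped by THP (values illustrative; field names as
# documented in proc.rst).
SAMPLE = """\
7f1c00000000-7f1c00400000 rw-p 00000000 00:00 0
Size:               4096 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                4096 kB
AnonHugePages:      4096 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
"""

def parse_smaps_entry(text):
    """Turn 'Key:  <value> kB' lines into a dict of ints (values in kB)."""
    fields = {}
    for line in text.splitlines():
        if line.endswith(" kB") and ":" in line:
            key, rest = line.split(":", 1)
            fields[key.strip()] = int(rest.split()[0])
    return fields

fields = parse_smaps_entry(SAMPLE)

# KernelPageSize/MMUPageSize stay at the smallest possible granularity
# (4 kB base pages here) regardless of current THP usage...
assert fields["KernelPageSize"] == fields["MMUPageSize"] == 4
# ...while the PMD-mapping counter shows the whole region is PMD-mapped:
assert fields["AnonHugePages"] == fields["Rss"] == 4096
```

On a live Linux system the same parsing could be applied per-entry to the real /proc/PID/smaps, keeping in mind that there are no equivalent counters for non-PMD (e.g. mTHP) mappings.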