From: Hao Jia

Currently, zswap writeback can be triggered either by the pool limit
being hit or by the proactive writeback mechanism. However, the
existing 'zswpwb' metric in memory.stat and /proc/vmstat counts all
written-back pages, making it difficult to distinguish between pages
written back due to the pool limit and those written back proactively.

Add a new statistic 'zswpwb_proactive' to memory.stat and /proc/vmstat.
This counter tracks the number of pages written back due to proactive
writeback. This allows users to better monitor and tune the proactive
writeback mechanism.

Signed-off-by: Hao Jia
---
 Documentation/admin-guide/cgroup-v2.rst |  4 ++++
 include/linux/vm_event_item.h           |  1 +
 mm/memcontrol.c                         |  1 +
 mm/vmstat.c                             |  1 +
 mm/zswap.c                              | 11 +++++++++--
 5 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 05b664b3b3e8..29a189b18efc 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1734,6 +1734,10 @@ The following nested keys are defined.
 	  zswpwb
 		Number of pages written from zswap to swap.
 
+	  zswpwb_proactive
+		Number of pages written from zswap to swap by proactive
+		writeback. This is a subset of zswpwb.
+
 	  zswap_incomp
 		Number of incompressible pages currently stored in zswap
 		without compression. These pages could not be compressed to
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 03fe95f5a020..7a5bee0a20b6 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -138,6 +138,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		ZSWPIN,
 		ZSWPOUT,
 		ZSWPWB,
+		ZSWPWB_PROACTIVE,
 #endif
 #ifdef CONFIG_X86
 		DIRECT_MAP_LEVEL2_SPLIT,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ba7f7b1954a8..830d895e77c3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -572,6 +572,7 @@ static const unsigned int memcg_vm_event_stat[] = {
 	ZSWPIN,
 	ZSWPOUT,
 	ZSWPWB,
+	ZSWPWB_PROACTIVE,
 #endif
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	THP_FAULT_ALLOC,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index f534972f517d..66fd06d1bb01 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1452,6 +1452,7 @@ const char * const vmstat_text[] = {
 	[I(ZSWPIN)] = "zswpin",
 	[I(ZSWPOUT)] = "zswpout",
 	[I(ZSWPWB)] = "zswpwb",
+	[I(ZSWPWB_PROACTIVE)] = "zswpwb_proactive",
 #endif
 #ifdef CONFIG_X86
 	[I(DIRECT_MAP_LEVEL2_SPLIT)] = "direct_map_level2_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index 1173ac6836fa..bf23c46e838e 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1048,7 +1048,8 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
  * freed.
  */
 static int zswap_writeback_entry(struct zswap_entry *entry,
-				 swp_entry_t swpentry)
+				 swp_entry_t swpentry,
+				 bool proactive)
 {
 	struct xarray *tree;
 	pgoff_t offset = swp_offset(swpentry);
@@ -1108,6 +1109,12 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	if (entry->objcg)
 		count_objcg_events(entry->objcg, ZSWPWB, 1);
 
+	if (proactive) {
+		count_vm_event(ZSWPWB_PROACTIVE);
+		if (entry->objcg)
+			count_objcg_events(entry->objcg, ZSWPWB_PROACTIVE, 1);
+	}
+
 	zswap_entry_free(entry);
 
 	/* folio is up to date */
@@ -1223,7 +1230,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	 */
 	spin_unlock(&l->lock);
 
-	writeback_result = zswap_writeback_entry(entry, swpentry);
+	writeback_result = zswap_writeback_entry(entry, swpentry, proactive_wb);
 
 	if (writeback_result) {
 		zswap_reject_reclaim_fail++;
-- 
2.34.1
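
[Editorial note, not part of the patch: a minimal userspace sketch of the
monitoring use case the commit message describes. It assumes a kernel with
this patch applied, so that both "zswpwb" and "zswpwb_proactive" appear in
/proc/vmstat in the standard one-"name value"-pair-per-line format; the
program and any numbers it prints are purely illustrative.]

/*
 * Illustrative only: report what fraction of zswap writeback was
 * triggered by proactive writeback, by parsing /proc/vmstat.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, wb = 0, wb_proactive = 0;

	if (!f) {
		perror("fopen /proc/vmstat");
		return 1;
	}

	/* Each line is "<counter_name> <value>". */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "zswpwb"))
			wb = val;
		else if (!strcmp(name, "zswpwb_proactive"))
			wb_proactive = val;
	}
	fclose(f);

	/* zswpwb_proactive is a subset of zswpwb, so the ratio is <= 100%. */
	printf("zswpwb=%llu zswpwb_proactive=%llu (%.1f%% proactive)\n",
	       wb, wb_proactive,
	       wb ? 100.0 * wb_proactive / wb : 0.0);
	return 0;
}

The same two counters are also exposed per-cgroup in memory.stat, so the
ratio can be computed per workload in the same way.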