The incompressible pages handling logic in zswap_compress() sets
'dlen' to PAGE_SIZE twice: once before deciding whether to save the
content as is, and once again after it is decided to save it as is.
But the value of 'dlen' is used only if it is decided to save the
content as is, so the first write is unnecessary.  It is not causing
real user issues, but it makes the code confusing to read.  Remove the
unnecessary write operation.

Signed-off-by: SeongJae Park
---
 mm/zswap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index c1af782e54ec..80619c8589a7 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -894,7 +894,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	 * to the active LRU list in the case.
 	 */
 	if (comp_ret || !dlen || dlen >= PAGE_SIZE) {
-		dlen = PAGE_SIZE;
 		if (!mem_cgroup_zswap_writeback_enabled(
 					folio_memcg(page_folio(page)))) {
 			comp_ret = comp_ret ? comp_ret : -EINVAL;
--
2.39.5

As the subject says.

Signed-off-by: SeongJae Park
---
 mm/memcontrol.c | 2 +-
 mm/zswap.c      | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69c970554e85..74b1bc2252b6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5421,7 +5421,7 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
  * @size: size of compressed object
  *
  * This forces the charge after obj_cgroup_may_zswap() allowed
- * compression and storage in zwap for this cgroup to go ahead.
+ * compression and storage in zswap for this cgroup to go ahead.
  */
 void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size)
 {
diff --git a/mm/zswap.c b/mm/zswap.c
index 80619c8589a7..f6b1c8832a4f 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -879,7 +879,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
  * acomp instance, then get those requests done simultaneously. but in this
  * case, zswap actually does store and load page by page, there is no
  * existing method to send the second page before the first page is done
- * in one thread doing zwap.
+ * in one thread doing zswap.
  * but in different threads running on different cpu, we have different
  * acomp instance, so multiple threads can do (de)compression in parallel.
  */
@@ -1128,7 +1128,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
  *
  * 1. We extract the swp_entry_t to the stack, allowing
  *    zswap_writeback_entry() to pin the swap entry and
- *    then validate the zwap entry against that swap entry's
+ *    then validate the zswap entry against that swap entry's
  *    tree using pointer value comparison. Only when that
  *    is successful can the entry be dereferenced.
  *
--
2.39.5

The changes made by commit 796c2c23e14e ("zswap: replace RB tree with
xarray") are not reflected in a comment.  Update the comment.

Signed-off-by: SeongJae Park
---
 mm/zswap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index f6b1c8832a4f..5d0f8b13a958 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -175,7 +175,7 @@ static struct shrinker *zswap_shrinker;
  * This structure contains the metadata for tracking a single compressed
  * page within zswap.
  *
- * swpentry - associated swap entry, the offset indexes into the red-black tree
+ * swpentry - associated swap entry, the offset indexes into the xarray
  * length - the length in bytes of the compressed page data. Needed during
  *          decompression.
  * referenced - true if the entry recently entered the zswap pool. Unset by the
--
2.39.5

The change from commit 796c2c23e14e ("zswap: replace RB tree with
xarray") is not reflected in the document.  Update the document.

Signed-off-by: SeongJae Park
---
 Documentation/admin-guide/mm/zswap.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 283d77217c6f..2464425c783d 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -59,11 +59,11 @@ returned by the allocation routine and that handle must be mapped before being
 accessed.  The compressed memory pool grows on demand and shrinks as
 compressed pages are freed.  The pool is not preallocated.
 
-When a swap page is passed from swapout to zswap, zswap maintains a mapping
-of the swap entry, a combination of the swap type and swap offset, to the
-zsmalloc handle that references that compressed swap page.  This mapping is
-achieved with a red-black tree per swap type.  The swap offset is the search
-key for the tree nodes.
+When a swap page is passed from swapout to zswap, zswap maintains a mapping of
+the swap entry, a combination of the swap type and swap offset, to the zsmalloc
+handle that references that compressed swap page.  This mapping is achieved
+with an xarray per swap type.  The swap offset is the search key for the xarray
+nodes.
 
 During a page fault on a PTE that is a swap entry, the swapin code calls the
 zswap load function to decompress the page into the page allocated by the page
--
2.39.5