Reduce the compression buffer size from 2 * PAGE_SIZE to a single page,
since the compression output (in the success case) should not exceed the
length of the input.

In the past, Chengming tried to reduce the compression buffer size (see
[1]), but ran into issues with the LZO algorithm (see [2]). Herbert Xu
has since reported that the issue is fixed (see [3]). We now have the
guarantee that a compressor's output does not exceed one page in the
success case, and that the algorithm simply reports failure otherwise.

With this patch, we save one page per CPU (per compression algorithm).

[1]: https://lore.kernel.org/linux-mm/20231213-zswap-dstmem-v4-1-f228b059dd89@bytedance.com/
[2]: https://lore.kernel.org/lkml/0000000000000b05cd060d6b5511@google.com/
[3]: https://lore.kernel.org/linux-mm/aKUmyl5gUFCdXGn-@gondor.apana.org.au/

Co-developed-by: Chengming Zhou
Signed-off-by: Chengming Zhou
Signed-off-by: Nhat Pham
---
 mm/zswap.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 1f1ac043a2d9..5dd282c5b626 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -833,7 +833,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 	u8 *buffer = NULL;
 	int ret;
 
-	buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
+	buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
 	if (!buffer) {
 		ret = -ENOMEM;
 		goto fail;
@@ -960,12 +960,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
-	/*
-	 * We need PAGE_SIZE * 2 here since there maybe over-compression case,
-	 * and hardware-accelerators may won't check the dst buffer size, so
-	 * giving the dst buffer with enough length to avoid buffer overflow.
-	 */
-	sg_init_one(&output, dst, PAGE_SIZE * 2);
+	sg_init_one(&output, dst, PAGE_SIZE);
 	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
 	/*

base-commit: c0e3b3f33ba7b767368de4afabaf7c1ddfdc3872
-- 
2.47.3