riscv64-gcc-linux-gnu (v8.5) reports a compile-time assertion failure in:

	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where it decides that pg.start > pg.end in:

	clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where pg comes from:

	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);

That does not seem like it could be true. Even for pg.start == pg.end,
we would need folio_test_large() to evaluate to false at compile time:

	static inline unsigned long folio_nr_pages(const struct folio *folio)
	{
		if (!folio_test_large(folio))
			return 1;
		return folio_large_nr_pages(folio);
	}

Work around this by open coding the range computation. Also, simplify
the type declarations of the relevant variables.

Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
Signed-off-by: Ankur Arora
---
Hi Andrew,

I'm not certain about the linux-next rebasing protocol, but I'm guessing
this patch will be squashed into patch 8 ("mm: folio_zero_user: cache
neighbouring pages"). If so, the commit message doesn't contain anything
that needs preserving.

Thanks,
Ankur

 mm/memory.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ce933ee4a3dd..e49340f51fa9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7282,30 +7282,29 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
 void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
 	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
-	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
 	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
-	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
+	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
 	struct range r[3];
 	int i;
 
 	/*
-	 * Faulting page and its immediate neighbourhood. Will be cleared at the
-	 * end to keep its cachelines hot.
+	 * Faulting page and its immediate neighbourhood. Cleared at the end to
+	 * keep its cachelines hot.
 	 */
-	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
-			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
+			    fault_idx + radius > (long)pg.end ? pg.end : fault_idx + radius);
 
-	/* Region to the left of the fault */
-	r[1] = DEFINE_RANGE(pg.start,
-			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+
+	/* Region to the left of the fault. */
+	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
 
 	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
-	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
-			    pg.end);
+	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
 
 	for (i = 0; i < ARRAY_SIZE(r); i++) {
 		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
-		const unsigned int nr_pages = range_len(&r[i]);
+		const long nr_pages = (long)range_len(&r[i]);
 		struct page *page = folio_page(folio, r[i].start);
 
 		if (nr_pages > 0)
-- 
2.31.1
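
For reference, the open-coded ternaries in the patch behave like the
following clamp helper. This is a minimal userspace sketch, not kernel
code: clamp_long() and the values in main() are made up for
illustration. The point is that plain comparisons carry no compile-time
lo <= hi assertion for GCC 8.5's range analysis to trip over, which is
what the clamp_t() expansion did in the reported build:

	#include <stdio.h>

	/* Confine idx to [lo, hi] using plain runtime comparisons. */
	static inline long clamp_long(long idx, long lo, long hi)
	{
		if (idx < lo)
			return lo;
		if (idx > hi)
			return hi;
		return idx;
	}

	int main(void)
	{
		const long radius = 2;
		const long fault_idx = 0;		/* fault on page 0 */
		const long pg_start = 0, pg_end = 7;	/* an 8-page folio */

		/* Mirrors r[2]: the faulting page and its neighbourhood. */
		long start = clamp_long(fault_idx - radius, pg_start, pg_end);
		long end = clamp_long(fault_idx + radius, pg_start, pg_end);

		printf("r[2] = [%ld, %ld]\n", start, end);	/* prints [0, 2] */
		return 0;
	}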