With Large Block Size (LBS) support, ext4 can now use block sizes larger
than PAGE_SIZE. The mballoc kunit tests previously skipped three test
cases (test_mb_mark_used, test_mb_free_blocks, test_mb_mark_used_cost)
under this configuration because the buddy cache inode's folio mapping
order was never initialized in the test harness.

The real mount path configures s_min_folio_order and s_max_folio_order
in ext4_fill_super(), which allows ext4_set_inode_mapping_order() to set
up the correct folio order for the buddy cache inode. The kunit test
bypasses ext4_fill_super(), so the mapping order stayed at zero and
__filemap_get_folio() allocated order-0 folios too small for LBS.

Initialize s_min_folio_order and s_max_folio_order in
mbt_init_sb_layout() to mirror ext4_fill_super() behavior, enabling
properly sized folio allocations and removing the three
blocksize > PAGE_SIZE skips.

Signed-off-by: Baokun Li
---
 fs/ext4/mballoc-test.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/mballoc-test.c b/fs/ext4/mballoc-test.c
index 90ed505fa4b1..04bc9f773d63 100644
--- a/fs/ext4/mballoc-test.c
+++ b/fs/ext4/mballoc-test.c
@@ -206,6 +206,8 @@ static void mbt_init_sb_layout(struct super_block *sb,
 	sbi->s_desc_per_block_bits =
 		sb->s_blocksize_bits - (fls(layout->desc_size) - 1);
 	sbi->s_desc_per_block = 1 << sbi->s_desc_per_block_bits;
+	sbi->s_min_folio_order = get_order(sb->s_blocksize);
+	sbi->s_max_folio_order = sbi->s_min_folio_order;
 
 	es->s_first_data_block = cpu_to_le32(0);
 	es->s_blocks_count_lo = cpu_to_le32(layout->blocks_per_group *
@@ -791,10 +793,6 @@ static void test_mb_mark_used(struct kunit *test)
 	struct test_range ranges[TEST_RANGE_COUNT];
 	int i;
 
-	/* buddy cache assumes that each page contains at least one block */
-	if (sb->s_blocksize > PAGE_SIZE)
-		kunit_skip(test, "blocksize exceeds pagesize");
-
 	bitmap = kunit_kzalloc(test, sb->s_blocksize, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bitmap);
 	buddy = kunit_kzalloc(test, sb->s_blocksize, GFP_KERNEL);
@@ -858,10 +856,6 @@ static void test_mb_free_blocks(struct kunit *test)
 	int i;
 	struct test_range ranges[TEST_RANGE_COUNT];
 
-	/* buddy cache assumes that each page contains at least one block */
-	if (sb->s_blocksize > PAGE_SIZE)
-		kunit_skip(test, "blocksize exceeds pagesize");
-
 	bitmap = kunit_kzalloc(test, sb->s_blocksize, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bitmap);
 	buddy = kunit_kzalloc(test, sb->s_blocksize, GFP_KERNEL);
@@ -905,10 +899,6 @@ static void test_mb_mark_used_cost(struct kunit *test)
 	int i, j;
 	unsigned long start, end, all = 0;
 
-	/* buddy cache assumes that each page contains at least one block */
-	if (sb->s_blocksize > PAGE_SIZE)
-		kunit_skip(test, "blocksize exceeds pagesize");
-
 	ret = ext4_mb_load_buddy_test(sb, TEST_GOAL_GROUP, &e4b);
 	KUNIT_ASSERT_EQ(test, ret, 0);
 
-- 
2.43.7