This driver assumes that bio vectors are memory aligned to the
logical block size, so set the queue limit to reflect that. Unless
we set up the limit based on the logical block size, we will go out
of page bounds in copy_to_nullb / copy_from_nullb.

Apparently this wasn't noticed so far because none of the tests
generate such buffers, but since commit 851c4c96db00 ("xfs: implement
XFS_IOC_DIOINFO in terms of vfs_getattr") xfstests generates
unaligned I/O, which now leads to memory corruption when using
null_blk devices with a 4k block size.

Fixes: bf8d08532bc1 ("iomap: add support for dma aligned direct-io")
Fixes: b1a000d3b8ec ("block: relax direct io memory alignment")
Reviewed-by: Christoph Hellwig
Reviewed-by: Keith Busch
Signed-off-by: Hans Holmberg
---
Pardon the churn.

Changes in v4:
* Added reviewed-bys from Keith and Christoph

Changes in v3:
* Improved the commit message, providing more background and
  highlighting the severity of the issue (as suggested by Christoph,
  thanks!)

Changes in v2:
* Added fixes tags from Christoph

v1: https://lore.kernel.org/all/20251029133956.19554-1-hans.holmberg@wdc.com/

 drivers/block/null_blk/main.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index f982027e8c85..0ee55f889cfd 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1949,6 +1949,7 @@ static int null_add_dev(struct nullb_device *dev)
 		.logical_block_size	= dev->blocksize,
 		.physical_block_size	= dev->blocksize,
 		.max_hw_sectors		= dev->max_sectors,
+		.dma_alignment		= dev->blocksize - 1,
 	};
 	struct nullb *nullb;
 
-- 
2.34.1
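
For reference, a minimal userspace reproducer sketch (not part of the
patch; it assumes a null_blk device at /dev/nullb0 configured with a
4k logical block size):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* 8k allocation so buf + 512 leaves room for a full 4k read */
	if (posix_memalign(&buf, 4096, 8192))
		return 1;

	/*
	 * The 512-byte offset satisfies the default dma_alignment mask
	 * (511) but not the 4k logical block size, so without this fix
	 * the direct read is let through and copy_to_nullb() can run
	 * past a page boundary.
	 */
	if (pread(fd, (char *)buf + 512, 4096, 0) < 0)
		perror("pread");

	free(buf);
	close(fd);
	return 0;
}

With dma_alignment set to blocksize - 1, the misaligned pread() should
now be rejected with EINVAL instead of corrupting memory.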