iomap zero range has a wart in that it also flushes dirty pagecache
over hole mappings (rather than only unwritten mappings). This was
included to accommodate a quirk in XFS where COW fork preallocation
can exist over a hole in the data fork, and the associated range is
reported as a hole. This is because the range actually is a hole, but
XFS also has an optimization where if COW fork blocks exist for a
range being written to, those blocks are used regardless of whether
the data fork blocks are shared or not. For zeroing, COW fork blocks
over a data fork hole are only relevant if the range is dirty in
pagecache, otherwise the range is already considered zeroed. The
easiest way to deal with this corner case is to flush the pagecache
to trigger COW remapping into the data fork, and then operate on the
updated on-disk state.

The problem is that ext4 cannot accommodate a flush from this context
due to being a transaction deadlock vector. Outside of the hole
quirk, ext4 can avoid the flush for zero range by using the recently
introduced folio batch lookup mechanism for unwritten mappings.
Therefore, take the next logical step and lift the hole handling
logic into the XFS iomap_begin handler. iomap will still flush on
unwritten mappings without a folio batch, and XFS will flush and
retry mapping lookups in the case where it would otherwise report a
hole with dirty pagecache during a zero range.

Note that this is intended to be a fairly straightforward lift and
otherwise not change behavior. Now that the flush exists within XFS,
follow-on patches can further optimize it.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/iomap/buffered-io.c |  2 +-
 fs/xfs/xfs_iomap.c     | 25 ++++++++++++++++++++++---
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 6beb876658c0..807384d72311 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1620,7 +1620,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 		    srcmap->type == IOMAP_UNWRITTEN)) {
 			s64 status;
 
-			if (range_dirty) {
+			if (range_dirty && srcmap->type == IOMAP_UNWRITTEN) {
 				range_dirty = false;
 				status = iomap_zero_iter_flush_and_stale(&iter);
 			} else {
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 37a1b33e9045..896d0dd07613 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1790,6 +1790,7 @@ xfs_buffered_write_iomap_begin(
 	if (error)
 		return error;
 
+restart:
 	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
 	if (error)
 		return error;
@@ -1817,9 +1818,27 @@ xfs_buffered_write_iomap_begin(
 	if (eof)
 		imap.br_startoff = end_fsb;	/* fake hole until the end */
 
-	/* We never need to allocate blocks for zeroing or unsharing a hole. */
-	if ((flags & (IOMAP_UNSHARE | IOMAP_ZERO)) &&
-	    imap.br_startoff > offset_fsb) {
+	/* We never need to allocate blocks for unsharing a hole. */
+	if ((flags & IOMAP_UNSHARE) && imap.br_startoff > offset_fsb) {
+		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
+		goto out_unlock;
+	}
+
+	/*
+	 * We may need to zero over a hole in the data fork if it's fronted by
+	 * COW blocks and dirty pagecache. To make sure zeroing occurs, force
+	 * writeback to remap pending blocks and restart the lookup.
+	 */
+	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
+		if (filemap_range_needs_writeback(inode->i_mapping, offset,
+						  offset + count - 1)) {
+			xfs_iunlock(ip, lockmode);
+			error = filemap_write_and_wait_range(inode->i_mapping,
+					offset, offset + count - 1);
+			if (error)
+				return error;
+			goto restart;
+		}
 		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
 		goto out_unlock;
 	}
-- 
2.52.0
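For readers less familiar with the flush-and-retry pattern the patch
introduces, below is a minimal userspace C model of the resulting
control flow. It is purely illustrative and not kernel code: the names
(lookup_mapping, flush_range, map_type, the two flags) are invented
stand-ins for xfs_buffered_write_iomap_begin(),
filemap_write_and_wait_range() and the iomap mapping types, and none of
the locking or iteration machinery is modeled. The point it shows is
the ordering: a zero range lookup that would otherwise report a hole
under dirty pagecache first "flushes" (remapping COW fork blocks into
the data fork), then restarts the lookup against the updated state, so
the caller only ever sees a clean hole or a mapped range.

	#include <stdbool.h>
	#include <stdio.h>

	enum map_type { MAP_HOLE, MAP_UNWRITTEN, MAP_MAPPED };

	/* Pretend the zero range target is dirty in pagecache. */
	static bool range_dirty = true;
	/* Set once "writeback" remaps COW fork blocks into the data fork. */
	static bool cow_remapped;

	/* Stand-in for filemap_write_and_wait_range() triggering COW remap. */
	static void flush_range(void)
	{
		range_dirty = false;
		cow_remapped = true;
	}

	/*
	 * Stand-in for the IOMAP_ZERO path of the iomap_begin handler: a
	 * data fork hole with dirty pagecache forces a flush and a retried
	 * lookup instead of being reported to the caller as-is.
	 */
	static enum map_type lookup_mapping(void)
	{
	restart:
		if (range_dirty) {
			flush_range();
			goto restart;	/* redo the lookup on updated state */
		}
		return cow_remapped ? MAP_MAPPED : MAP_HOLE;
	}

	int main(void)
	{
		enum map_type t = lookup_mapping();

		if (t == MAP_MAPPED)
			puts("zeroing mapped blocks (COW remapped by flush)");
		else
			puts("hole with clean pagecache: already zeroed, skip");
		return 0;
	}

In the toy version the retry trivially terminates because the flush
clears the dirty state; the real code relies on the same property, as
the writeback completed by filemap_write_and_wait_range() leaves the
range clean before the lookup is restarted.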