When dma_iova_link() fails partway through mapping a request's bvec
list, blk_rq_dma_map_iova() breaks out of the loop without cleaning up
the segments that were already linked. Similarly, if dma_iova_sync()
fails after all segments have been linked, no cleanup is performed.
This leaves partial IOVA mappings in place. The completion path then
attempts to unmap the full expected size via dma_iova_destroy() or
nvme_unmap_data(), but only a partial size was actually mapped, leading
to incorrect unmap operations.

Add an out_unlink error path that calls dma_iova_destroy() to clean up
partial mappings before returning failure. dma_iova_destroy() handles
both the partial unlink and the freeing of the IOVA space, and it
correctly handles the mapped_len == 0 case (a failure on the first
dma_iova_link() call) by only freeing the IOVA allocation without
attempting to unmap.

Signed-off-by: Chaitanya Kulkarni
---
 block/blk-mq-dma.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 3c87779cdc19..bfdb9ed70741 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -121,17 +121,20 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
 				vec->len, dir, attrs);
 		if (error)
-			break;
+			goto out_unlink;
 		mapped += vec->len;
 	} while (blk_map_iter_next(req, &iter->iter, vec));
 
 	error = dma_iova_sync(dma_dev, state, 0, mapped);
-	if (error) {
-		iter->status = errno_to_blk_status(error);
-		return false;
-	}
+	if (error)
+		goto out_unlink;
 
 	return true;
+
+out_unlink:
+	dma_iova_destroy(dma_dev, state, mapped, dir, attrs);
+	iter->status = errno_to_blk_status(error);
+	return false;
 }
 
 static inline void blk_rq_map_iter_init(struct request *rq,
--
2.39.5
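
As an illustration only (not part of the patch), here is a minimal,
self-contained userspace sketch of the same unwind pattern. The
fake_link()/fake_sync()/fake_destroy() helpers are hypothetical
stand-ins for dma_iova_link()/dma_iova_sync()/dma_iova_destroy(); they
are not the kernel APIs, and the segment count and size are made up.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for dma_iova_link(); fails on the third segment. */
static int fake_link(unsigned int offset, unsigned int len)
{
	return offset >= 2 * len ? -1 : 0;
}

/* Hypothetical stand-in for dma_iova_sync(); always succeeds here. */
static int fake_sync(unsigned int mapped)
{
	return 0;
}

/*
 * Hypothetical stand-in for dma_iova_destroy(): unlink only the
 * 'mapped' bytes that were actually linked (a no-op when mapped == 0),
 * then free the IOVA space.
 */
static void fake_destroy(unsigned int mapped)
{
	printf("unlinking %u bytes, freeing IOVA space\n", mapped);
}

static bool map_segments(unsigned int nr_segs, unsigned int seg_len)
{
	unsigned int mapped = 0;
	unsigned int i;
	int error;

	for (i = 0; i < nr_segs; i++) {
		error = fake_link(mapped, seg_len);
		if (error)
			goto out_unlink;
		mapped += seg_len;
	}

	error = fake_sync(mapped);
	if (error)
		goto out_unlink;

	return true;

out_unlink:
	/* Unwind exactly what was mapped, never the full expected size. */
	fake_destroy(mapped);
	return false;
}

int main(void)
{
	if (!map_segments(4, 512))
		printf("mapping failed, partial state cleaned up\n");
	return 0;
}

The goto-based unwind keeps a single cleanup point and makes the
mapped-so-far accounting explicit, which is the same property the
patch relies on: the error path tears down exactly 'mapped' bytes
rather than the full expected size.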