hugetlb_vmdelete_list() uses trylock to acquire VMA locks during truncate
operations. As per the original design in commit 40549ba8f8e0 ("hugetlb:
use new vma_lock for pmd sharing synchronization"), if the trylock fails
or the VMA has no lock, that VMA should be skipped. Any remaining mapped
pages are handled by remove_inode_hugepages(), which is called after
hugetlb_vmdelete_list() and uses proper lock ordering to guarantee
unmapping success.

Currently, when hugetlb_vma_trylock_write() returns success (1) for VMAs
without shareable locks, the code proceeds to call
unmap_hugepage_range(). This causes assertion failures in
huge_pmd_unshare() -> hugetlb_vma_assert_locked() because no lock is
actually held:

  WARNING: CPU: 1 PID: 6594 Comm: syz.0.28 Not tainted
  Call Trace:
   hugetlb_vma_assert_locked+0x1dd/0x250
   huge_pmd_unshare+0x2c8/0x540
   __unmap_hugepage_range+0x6e3/0x1aa0
   unmap_hugepage_range+0x32e/0x410
   hugetlb_vmdelete_list+0x189/0x1f0

Fix this by using a goto to ensure that locks acquired by trylock are
always released, even when skipping VMAs without shareable locks.

Reported-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=f26d7c75c26ec19790e7
Fixes: 40549ba8f8e0 ("hugetlb: use new vma_lock for pmd sharing synchronization")
Suggested-by: Andrew Morton
Signed-off-by: Deepanshu Kartikey
---
Changes in v2:
- Use goto to unlock after trylock, avoiding lock leaks (Andrew Morton)
- Add comment explaining why non-shareable VMAs are skipped (Andrew Morton)
---
 fs/hugetlbfs/inode.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9e0625167517..9fa7c72ac1a6 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -488,6 +488,14 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		if (!hugetlb_vma_trylock_write(vma))
 			continue;
 
+		/*
+		 * Skip VMAs without shareable locks. Per the design in commit
+		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
+		 * called after this function with proper locking.
+		 */
+		if (!__vma_shareable_lock(vma))
+			goto skip;
+
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
@@ -498,7 +506,8 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		 * vmas. Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
-		hugetlb_vma_unlock_write(vma);
+skip:
+		hugetlb_vma_unlock_write(vma);
 	}
 }
-- 
2.43.0