From: Lance Yang

The hugetlb VMA unmap path contains a potential deadlock, as reported
by syzbot.

In __hugetlb_zap_begin(), vma_lock is acquired before i_mmap_lock. This
lock ordering conflicts with the page fault path in hugetlb_fault(),
which acquires i_mmap_lock first, establishing the correct dependency
as i_mmap_lock -> vma_lock.

Chain exists of:
  &hugetlbfs_i_mmap_rwsem_key --> &hugetlb_fault_mutex_table[i]
                              --> &vma_lock->rw_sema

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&vma_lock->rw_sema);
                               lock(&hugetlb_fault_mutex_table[i]);
                               lock(&vma_lock->rw_sema);
  lock(&hugetlbfs_i_mmap_rwsem_key);

Resolve the deadlock by reordering the locks in __hugetlb_zap_begin()
to follow the established i_mmap_lock -> vma_lock order.

Reported-by: syzbot+3f5f9a0d292454409ca6@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/69113a97.a70a0220.22f260.00ca.GAE@google.com/
Signed-off-by: Lance Yang
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b1f47b87ae65..2719995af18e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5327,9 +5327,9 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
 		return;
 
 	adjust_range_if_pmd_sharing_possible(vma, start, end);
-	hugetlb_vma_lock_write(vma);
 	if (vma->vm_file)
 		i_mmap_lock_write(vma->vm_file->f_mapping);
+	hugetlb_vma_lock_write(vma);
 }
 
 void __hugetlb_zap_end(struct vm_area_struct *vma,
-- 
2.49.0