If we get a signal, we need to restore the vm_refcnt.  The wrinkle in
that is that we might be the last reference.  If that happens, fix the
refcount to look like we weren't interrupted by a fatal signal.

Reported-by: syzbot+5b19bad23ac7f44bf8b8@syzkaller.appspotmail.com
Fixes: 2197bb60f890 ("mm: add vma_start_write_killable()")
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Suren Baghdasaryan
Cc: Liam R. Howlett
Cc: Vlastimil Babka
Cc: Lorenzo Stoakes
---
Andrew, since the vma_start_write_killable() patch is in mm-stable,
I don't think you can put this in as a fixup, right?

Suren, Liam, Vlastimil, Lorenzo ... none of you spotted this bug.
Any other stupid thing I've done?  And am I doing the right thing
with refcount_set()?

 mm/mmap_lock.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index e6e5570d1ec7..71af7f0a5fe1 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -74,9 +74,18 @@ static inline int __vma_enter_locked(struct vm_area_struct *vma,
 			refcount_read(&vma->vm_refcnt) == tgt_refcnt,
 			state);
 	if (err) {
+		if (refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt)) {
+			/* Oh cobblers.  While we got a fatal signal, we
+			 * raced with the last user.  Pretend we didn't
+			 * notice the signal.
+			 */
+			refcount_set(&vma->vm_refcnt, VMA_LOCK_OFFSET);
+			goto acquired;
+		}
 		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
 		return err;
 	}
+acquired:
 	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
 	return 1;
-- 
2.47.2