We introduce vma_is_read_locked(), which must handle the case in which a VMA write lock has set the refcount to VMA_LOCK_OFFSET or VMA_LOCK_OFFSET + 1. Luckily, is_vma_writer_only() already exists, which we can use to check this.

We then make vma_assert_locked() use lockdep as far as we can. Unfortunately, the VMA lock implementation does not even try to track VMA write locks using lockdep, so we cannot track the lock this way. This is less egregious than it might seem, as VMA write locks are predicated on the mmap write lock, which we DO lockdep-assert: vma_assert_write_locked() already asserts that the mmap write lock is held, so we get that checked implicitly.

For read locks, however, we do indeed use lockdep, via rwsem_acquire_read() called in vma_start_read() and rwsem_release() called in vma_refcount_put(), invoked in turn by vma_end_read().

Therefore we perform a lockdep assertion if the VMA is known to be read-locked. If it is write-locked, we assert the mmap lock instead, with a lockdep check if lockdep is enabled. If lockdep is not enabled, we simply check that the locks are in place.
Signed-off-by: Lorenzo Stoakes
---
 include/linux/mmap_lock.h | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index b50416fbba20..6979222882f1 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -236,6 +236,13 @@ int vma_start_write_killable(struct vm_area_struct *vma)
 	return __vma_start_write(vma, mm_lock_seq, TASK_KILLABLE);
 }
 
+static inline bool vma_is_read_locked(const struct vm_area_struct *vma)
+{
+	const unsigned int refcnt = refcount_read(&vma->vm_refcnt);
+
+	return refcnt > 1 && !is_vma_writer_only(refcnt);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	unsigned int mm_lock_seq;
@@ -243,12 +250,31 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
+/**
+ * vma_assert_locked() - Assert that @vma is either read or write locked and
+ * that we have ownership of that lock (if lockdep is enabled).
+ * @vma: The VMA we assert.
+ *
+ * If lockdep is enabled, we ensure ownership of the VMA lock. Otherwise we
+ * assert that we are VMA write-locked, which implicitly asserts that we hold
+ * the mmap write lock.
+ */
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	unsigned int mm_lock_seq;
-
-	VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
-		      !__is_vma_write_locked(vma, &mm_lock_seq), vma);
+	/*
+	 * VMA locks currently only utilise lockdep for read locks, as
+	 * vma_end_write_all() releases an unknown number of VMA write locks and
+	 * we don't currently walk the maple tree to identify which locks are
+	 * released even under CONFIG_LOCKDEP.
+	 *
+	 * However, VMA write locks are predicated on an mmap write lock, which
+	 * we DO track under lockdep, and which vma_assert_write_locked()
+	 * asserts.
+	 */
+	if (vma_is_read_locked(vma))
+		lockdep_assert(lock_is_held(&vma->vmlock_dep_map));
+	else
+		vma_assert_write_locked(vma);
 }
 
 static inline bool vma_is_attached(struct vm_area_struct *vma)
-- 
2.52.0