While fuzzing KVM on RISC-V, we observed a use-after-free in
kvm_riscv_gstage_get_leaf(): ptep_get() dereferences a freed gstage page
table page during gfn unmap. The crash manifests as:

  use-after-free in ptep_get include/linux/pgtable.h:340 [inline]
  use-after-free in kvm_riscv_gstage_get_leaf arch/riscv/kvm/gstage.c:89
  Call Trace:
   ptep_get include/linux/pgtable.h:340 [inline]
   kvm_riscv_gstage_get_leaf+0x2ea/0x358 arch/riscv/kvm/gstage.c:89
   kvm_riscv_gstage_unmap_range+0xf0/0x308 arch/riscv/kvm/gstage.c:265
   kvm_unmap_gfn_range+0x168/0x1fc arch/riscv/kvm/mmu.c:256
   kvm_mmu_unmap_gfn_range virt/kvm/kvm_main.c:724 [inline]

  page last free pid 808 tgid 808 stack trace:
   kvm_riscv_mmu_free_pgd+0x1b6/0x26a arch/riscv/kvm/mmu.c:457
   kvm_arch_flush_shadow_all+0x1a/0x24 arch/riscv/kvm/mmu.c:134
   kvm_flush_shadow_all virt/kvm/kvm_main.c:344 [inline]

The UAF is caused by gstage page table walks running concurrently with
gstage pgd teardown: kvm_unmap_gfn_range() can traverse the gstage page
tables while kvm_arch_flush_shadow_all() frees the pgd, so the walk
dereferences already-freed page table pages.

Fix the issue by serializing gstage unmap and pgd teardown with
kvm->mmu_lock. Holding mmu_lock ensures that the gstage page tables
remain valid for the duration of an unmap operation and prevents
concurrent frees. This matches the existing RISC-V KVM use of mmu_lock
to protect gstage map/unmap paths, e.g. kvm_riscv_mmu_iounmap().

Fixes: dd82e35638d67f ("RISC-V: KVM: Factor-out g-stage page table management")
Signed-off-by: Jiakai Xu
---
 arch/riscv/kvm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index a1c3b2ec1dde5..08316e433d729 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -128,7 +128,9 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
+	spin_lock(&kvm->mmu_lock);
 	kvm_riscv_mmu_free_pgd(kvm);
+	spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
@@ -268,9 +270,11 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	gstage.flags = 0;
 	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
 	gstage.pgd = kvm->arch.pgd;
+	spin_lock(&kvm->mmu_lock);
 	kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,
 				     (range->end - range->start) << PAGE_SHIFT,
 				     range->may_block);
+	spin_unlock(&kvm->mmu_lock);
 
 	return false;
 }
-- 
2.34.1
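
Reviewer note (not part of the patch): a minimal userspace sketch of the
race pattern this change closes, with pthreads standing in for KVM. The
names fake_pgd, walker() and teardown() are invented for illustration;
only the pattern, taking a single lock around both the table walk and the
free, mirrors the actual kvm->mmu_lock fix.

/*
 * Illustrative sketch only: one thread walks a table while another thread
 * frees it; a shared lock serializes the two paths so the walk can never
 * touch freed memory. Build with: cc -pthread race_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER; /* plays kvm->mmu_lock */
static unsigned long *fake_pgd;                              /* plays kvm->arch.pgd */

/* Plays kvm_unmap_gfn_range(): walk the table only while holding the lock. */
static void *walker(void *arg)
{
	for (int i = 0; i < 1000; i++) {
		pthread_mutex_lock(&mmu_lock);
		if (fake_pgd) {                    /* table may already be torn down */
			for (int j = 0; j < 512; j++)
				(void)fake_pgd[j]; /* safe: free cannot run concurrently */
		}
		pthread_mutex_unlock(&mmu_lock);
	}
	return NULL;
}

/* Plays kvm_arch_flush_shadow_all(): free the table under the same lock. */
static void *teardown(void *arg)
{
	pthread_mutex_lock(&mmu_lock);
	free(fake_pgd);
	fake_pgd = NULL;                           /* walkers observe a consistent state */
	pthread_mutex_unlock(&mmu_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	fake_pgd = calloc(512, sizeof(*fake_pgd));
	pthread_create(&t1, NULL, walker, NULL);
	pthread_create(&t2, NULL, teardown, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("walk and teardown are serialized; no use-after-free\n");
	return 0;
}

Without the lock (or with the walk taken outside it), the inner loop can
dereference fake_pgd after teardown() has freed it, which is the same
shape as the KASAN report above.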