Assert that slots_lock is held when the TDX code accesses the number of
premapped pfns, as KVM relies on calls to tdx_vcpu_init_mem_region() being
serialized to prevent double-population of gmem and false negatives on the
consumption of a "premapped" pfn.

In addition to helping document how the TDX code works, this will allow
converting "nr_premapped" to a non-atomic variable, as all usage asserts
that slots_lock is held.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/tdx.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e4b70c0dbda3..27941defb62e 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1634,6 +1634,8 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * to prevent running the TD with uninitialized memory.
 	 */
 	if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE)) {
+		lockdep_assert_held(&kvm->slots_lock);
+
 		if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
 			return -EIO;
 
@@ -1767,6 +1769,8 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 		tdx_no_vcpus_enter_stop(kvm);
 	}
 	if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level)) {
+		lockdep_assert_held(&kvm->slots_lock);
+
 		if (KVM_BUG_ON(atomic64_dec_return(&kvm_tdx->nr_premapped) < 0, kvm))
 			return -EIO;
 
@@ -3132,6 +3136,8 @@ static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 	struct page *src_page;
 	int ret, i;
 
+	lockdep_assert_held(&kvm->slots_lock);
+
 	/*
 	 * Get the source page if it has been faulted in. Return failure if the
 	 * source page has been swapped out or unmapped in primary memory.
-- 
2.51.0.268.g9569e192d0-goog
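
For reference, a minimal sketch of the follow-up conversion the changelog
alludes to, i.e. turning "nr_premapped" into a plain counter once every
access is serialized by slots_lock. The field placement and the
tdx_dec_nr_premapped() helper are illustrative assumptions, not the actual
follow-up patch:

	/*
	 * Illustrative sketch only: with all readers and writers serialized
	 * by slots_lock, the atomic64_t can become a plain counter.
	 */
	struct kvm_tdx {
		/* ... other fields ... */
		unsigned long nr_premapped;	/* protected by kvm->slots_lock */
	};

	static void tdx_dec_nr_premapped(struct kvm *kvm)
	{
		struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);

		lockdep_assert_held(&kvm->slots_lock);

		/* Underflow means a "premapped" pfn was consumed twice. */
		KVM_BUG_ON(!kvm_tdx->nr_premapped, kvm);
		kvm_tdx->nr_premapped--;
	}

With slots_lock held across all three paths touched above, the decrement
needs no atomicity of its own; the lockdep assertion then documents and
enforces the locking rule that makes the plain counter safe.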