KVM flushes the APF queue completely when the asynchronous page fault
mechanism is disabled, therefore this case should not occur.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b8138bd4857..22024de00cbd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13896,7 +13896,7 @@ void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
 
 bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
 {
-	if (!kvm_pv_async_pf_enabled(vcpu))
+	if (WARN_ON_ONCE(!kvm_pv_async_pf_enabled(vcpu)))
 		return true;
 	else
 		return kvm_lapic_enabled(vcpu) && apf_pageready_slot_free(vcpu);
-- 
2.49.0

Fix a semi-theoretical race condition related to a lack of memory
barriers when dealing with vcpu->arch.apf.pageready_pending.

We have the following lockless code implementing the sleep/wake
pattern.

kvm_arch_async_page_present_queued(), running in workqueue context:

	kvm_make_request(KVM_REQ_APF_READY, vcpu);
	/* memory barrier is missing here */
	if (!vcpu->arch.apf.pageready_pending)
		kvm_vcpu_kick(vcpu);

And, running on the vCPU thread, kvm_set_msr_common():

	vcpu->arch.apf.pageready_pending = false;
	/* memory barrier is missing here */

And later, in vcpu_enter_guest():

	if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
		kvm_check_async_pf_completion(vcpu);

Add the missing full memory barriers in both places to avoid the
theoretical case of not kicking the vCPU thread.

Note that the bug is mostly theoretical because kvm_make_request()
already uses an atomic bit operation, which is serializing on x86, so
the smp_mb__after_atomic() after it is a NOP there and is needed only
for documentation purposes.

The second missing barrier, between kvm_set_msr_common() and
vcpu_enter_guest(), isn't strictly needed either, because KVM executes
several barriers in between calling these functions; however, it still
makes sense to add an explicit barrier to be on the safe side.

Finally, also use READ_ONCE()/WRITE_ONCE().

Thanks a lot to Paolo for the help with this patch.
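To make the ordering argument easier to follow, here is a condensed
sketch of the two paths as they look with this patch applied
(pseudocode assembled from the hunks below, not literal source layout):

	workqueue context:

		kvm_make_request(KVM_REQ_APF_READY, vcpu);
		smp_mb__after_atomic();	/* order the request vs. the load below */
		if (!READ_ONCE(vcpu->arch.apf.pageready_pending))
			kvm_vcpu_kick(vcpu);

	vCPU thread:

		/* in kvm_set_msr_common(): store plus full barrier */
		smp_store_mb(vcpu->arch.apf.pageready_pending, false);
		...
		/* later, in vcpu_enter_guest(): */
		if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
			kvm_check_async_pf_completion(vcpu);

With a full barrier on each side, at least one of the two loads must
observe the other side's store: either the workqueue sees
pageready_pending == false and kicks the vCPU, or the vCPU sees
KVM_REQ_APF_READY and re-checks the completion queue, so the wakeup
can no longer be lost.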
Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/x86.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 22024de00cbd..0fc7171ced26 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4184,7 +4184,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
 			return 1;
 		if (data & 0x1) {
-			vcpu->arch.apf.pageready_pending = false;
+
+			/* Pairs with a memory barrier in kvm_arch_async_page_present_queued */
+			smp_store_mb(vcpu->arch.apf.pageready_pending, false);
+
 			kvm_check_async_pf_completion(vcpu);
 		}
 		break;
@@ -13879,7 +13882,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 	if ((work->wakeup_all || work->notpresent_injected) &&
 	    kvm_pv_async_pf_enabled(vcpu) &&
 	    !apf_put_user_ready(vcpu, work->arch.token)) {
-		vcpu->arch.apf.pageready_pending = true;
+		WRITE_ONCE(vcpu->arch.apf.pageready_pending, true);
 		kvm_apic_set_irq(vcpu, &irq, NULL);
 	}
 
@@ -13890,7 +13893,11 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
 {
 	kvm_make_request(KVM_REQ_APF_READY, vcpu);
-	if (!vcpu->arch.apf.pageready_pending)
+
+	/* Pairs with smp_store_mb in kvm_set_msr_common */
+	smp_mb__after_atomic();
+
+	if (!READ_ONCE(vcpu->arch.apf.pageready_pending))
 		kvm_vcpu_kick(vcpu);
 }
-- 
2.49.0

Currently a #SMI can cause KVM to drop an #APF ready event and
subsequently cause the guest to never resume the task that is waiting
for it. This can result in tasks becoming permanently stuck within the
guest.

This happens because KVM flushes the APF queue, without notifying the
guest of completed APF requests, when the guest transitions to real
mode: the SMM exit code calls kvm_set_cr0() with CR0.PE == 0, which
triggers this flush.

It must be noted that while this flush is reasonable to do for an
actual real mode entry, here it achieves nothing because it is too
late to flush this queue on SMM exit.

To fix this, avoid doing this flush altogether, and handle real mode
entry/exit the same way KVM already handles APIC enable/disable
events: #APF ready events are not injected while the APIC is disabled,
and once the APIC is re-enabled, KVM raises the KVM_REQ_APF_READY
request, which causes the first pending #APF ready event to be
injected prior to entering guest mode.

This change also has the side benefit of preserving #APF events if the
guest temporarily enters real mode - for example, to call firmware -
although such usage should be extremely rare in modern operating
systems.
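For reference, the KVM_REQ_APF_READY request raised when the guest
re-enables paging is consumed by the existing code on the next guest
entry, as already quoted in the previous patch (simplified sketch):

	/* vcpu_enter_guest(): */
	if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
		kvm_check_async_pf_completion(vcpu);

So instead of being dropped when CR0.PG is cleared, queued #APF ready
events simply remain pending and are delivered once paging is enabled
again.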
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/x86.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0fc7171ced26..ec96328634ed 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1046,6 +1046,13 @@ bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
 }
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_require_dr);
 
+static bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
+{
+	u64 mask = KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
+
+	return (vcpu->arch.apf.msr_en_val & mask) == mask;
+}
+
 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.reserved_gpa_bits | rsvd_bits(5, 8) | rsvd_bits(1, 2);
@@ -1138,15 +1145,17 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
 	}
 
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
-		kvm_clear_async_pf_completion_queue(vcpu);
-		kvm_async_pf_hash_reset(vcpu);
-
 		/*
 		 * Clearing CR0.PG is defined to flush the TLB from the guest's
 		 * perspective.
 		 */
 		if (!(cr0 & X86_CR0_PG))
 			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
+		/*
+		 * Re-check APF completion events, when the guest re-enables paging.
+		 */
+		else if (kvm_pv_async_pf_enabled(vcpu))
+			kvm_make_request(KVM_REQ_APF_READY, vcpu);
 	}
 
 	if ((cr0 ^ old_cr0) & KVM_MMU_CR0_ROLE_BITS)
@@ -3651,13 +3660,6 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 0;
 }
 
-static inline bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
-{
-	u64 mask = KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
-
-	return (vcpu->arch.apf.msr_en_val & mask) == mask;
-}
-
 static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
 {
 	gpa_t gpa = data & ~0x3f;
-- 
2.49.0