Clearing the IRQ window inhibit today relies on interrupt window interception, but that is not always reachable when nested guests are involved. If L1 is intercepting IRQs, then interrupt_window_interception() will never be reached while L2 is active, because the only reason KVM would set the V_IRQ intercept in vmcb02 would be on behalf of L1, i.e. because of vmcb12. svm_clear_vintr() always operates on (at least) vmcb01, and VMRUN unconditionally sets GIF=1, which means that enter_svm_guest_mode() will always do svm_clear_vintr() via svm_set_gif(svm, true). I.e. KVM will keep the VM-wide inhibit set until control transfers back to L1 *and* an interrupt window is triggered.

If L1 is not intercepting IRQs, KVM may immediately inject L1's ExtINT into L2 (if IRQs are enabled in L2) without ever taking an interrupt window interception, again leaving the VM-wide inhibit in place.

Address this by clearing the IRQ window inhibit when KVM actually injects an interrupt and there are no further injectable interrupts. That way, if L1 isn't intercepting IRQs, KVM will drop the inhibit as soon as an interrupt is injected into L2. And if L1 is intercepting IRQs, KVM will keep the inhibit until the IRQ is injected into L2. Either way, AVIC won't be left inhibited.

Note, somewhat blindly invoking kvm_clear_apicv_inhibit() is both wrong and suboptimal. If the IRQWIN inhibit isn't set, then the vCPU will unnecessarily take apicv_update_lock for write. And if a _different_ vCPU has an injectable IRQ, clearing IRQWIN may block that vCPU's ability to inject its IRQ. Defer fixing both issues to a future commit, as fixing one problem without also fixing the other would leave KVM in a temporarily bad state, as would fixing both issues without fixing _this_ bug. I.e. it's not feasible to fix each bug independently without there being some remaining flaw in KVM.
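For reference, a condensed, non-verbatim sketch of the flow in question (the body of svm_set_gif() is heavily elided); it shows why any VINTR setup done for L1's sake is torn down on every nested VMRUN, and thus why interrupt_window_interception() can't fire while L2 is active:

	/* Paraphrased from svm_set_gif(); unrelated handling elided. */
	void svm_set_gif(struct vcpu_svm *svm, bool value)
	{
		if (value) {
			/*
			 * Enabling GIF (as VMRUN does architecturally) drops
			 * the dummy V_IRQ, so the VINTR intercept armed on
			 * L1's behalf never fires while L2 is active.
			 */
			if (svm_is_intercept(svm, INTERCEPT_VINTR))
				svm_clear_vintr(svm);
		}
	}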
Co-developed-by: Naveen N Rao (AMD)
Signed-off-by: Naveen N Rao (AMD)
Tested-by: Naveen N Rao (AMD)
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7803d2781144..24b9c2275821 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3130,20 +3130,6 @@ static int interrupt_window_interception(struct kvm_vcpu *vcpu)
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	svm_clear_vintr(to_svm(vcpu));
 
-	/*
-	 * If not running nested, for AVIC, the only reason to end up here is ExtINTs.
-	 * In this case AVIC was temporarily disabled for
-	 * requesting the IRQ window and we have to re-enable it.
-	 *
-	 * If running nested, still remove the VM wide AVIC inhibit to
-	 * support case in which the interrupt window was requested when the
-	 * vCPU was not running nested.
-
-	 * All vCPUs which run still run nested, will remain to have their
-	 * AVIC still inhibited due to per-cpu AVIC inhibition.
-	 */
-	kvm_clear_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN);
-
 	++vcpu->stat.irq_window_exits;
 	return 1;
 }
@@ -3732,6 +3718,20 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 		type = SVM_EVTINJ_TYPE_INTR;
 	}
 
+	/*
+	 * If AVIC was inhibited in order to detect an IRQ window, and there
+	 * are no other injectable interrupts pending or L2 is active (see
+	 * below), then drop the inhibit as the window has served its purpose.
+	 *
+	 * If L2 is active, this path is reachable if L1 is not intercepting
+	 * IRQs, i.e. if KVM is injecting L1 IRQs into L2.  AVIC is locally
+	 * inhibited while L2 is active; drop the VM-wide inhibit to optimize
+	 * the case in which the interrupt window was requested while L1 was
+	 * active (the vCPU was not running nested).
+	 */
+	if (!kvm_cpu_has_injectable_intr(vcpu) || is_guest_mode(vcpu))
+		kvm_clear_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN);
+
 	trace_kvm_inj_virq(intr->nr, intr->soft, reinjected);
 	++vcpu->stat.irq_injections;
-- 
2.52.0.457.g6b5491de43-goog

IRQ window inhibits can be requested by multiple vCPUs at the same time, in order to inject interrupts into different vCPUs. However, the AVIC inhibit is VM-wide, and so it is possible for the inhibit to be cleared prematurely by the first vCPU that obtains an IRQ window, even though a second vCPU is still waiting for its own IRQ window. This is likely not a functional issue, since the other vCPU will again see that interrupts are pending to be injected (due to KVM_REQ_EVENT) and will again request an IRQ window inhibit. However, it can result in AVIC being rapidly toggled, generating high contention on apicv_update_lock and degrading guest performance.

Address this by maintaining a VM-wide count of the number of vCPUs that have requested an IRQ window. Set/clear the inhibit reason only when the count transitions between 0 and 1. This ensures that the inhibit reason is not cleared as long as some vCPUs are still waiting for an IRQ window.
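A minimal sketch of that counting scheme, as implemented by the helper added below (nr_irq_window_req and toggle_irq_window_inhibit() are illustrative stand-ins, not real KVM symbols):

	int add = inc ? 1 : -1;

	/*
	 * With add = +1/-1, atomic_add_return() yields 1 on the 0->1
	 * transition (inc == true) and 0 on the 1->0 transition
	 * (inc == false), so comparing the new count against 'inc'
	 * catches exactly the two edges that must toggle the inhibit.
	 */
	if (atomic_add_return(add, &nr_irq_window_req) == inc)
		toggle_irq_window_inhibit(kvm, inc);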
Co-developed-by: Paolo Bonzini
Signed-off-by: Paolo Bonzini
Co-developed-by: Naveen N Rao (AMD)
Signed-off-by: Naveen N Rao (AMD)
Tested-by: Naveen N Rao (AMD)
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 19 ++++++++++++++++-
 arch/x86/kvm/svm/svm.c          | 36 +++++++++++++++++++++++----------
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/x86.c              | 19 +++++++++++++++++
 4 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e441f270f354..b08baeff98b2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1427,6 +1427,7 @@ struct kvm_arch {
 	struct kvm_pit *vpit;
 #endif
 	atomic_t vapics_in_nmi_mode;
+
 	struct mutex apic_map_lock;
 	struct kvm_apic_map __rcu *apic_map;
 	atomic_t apic_map_dirty;
@@ -1434,9 +1435,13 @@ struct kvm_arch {
 	bool apic_access_memslot_enabled;
 	bool apic_access_memslot_inhibited;
 
-	/* Protects apicv_inhibit_reasons */
+	/*
+	 * Protects apicv_inhibit_reasons and apicv_nr_irq_window_req (with an
+	 * asterisk, see kvm_inc_or_dec_irq_window_inhibit() for details).
+	 */
 	struct rw_semaphore apicv_update_lock;
 	unsigned long apicv_inhibit_reasons;
+	atomic_t apicv_nr_irq_window_req;
 
 	gpa_t wall_clock;
@@ -2309,6 +2314,18 @@ static inline void kvm_clear_apicv_inhibit(struct kvm *kvm,
 	kvm_set_or_clear_apicv_inhibit(kvm, reason, false);
 }
 
+void kvm_inc_or_dec_irq_window_inhibit(struct kvm *kvm, bool inc);
+
+static inline void kvm_inc_apicv_irq_window_req(struct kvm *kvm)
+{
+	kvm_inc_or_dec_irq_window_inhibit(kvm, true);
+}
+
+static inline void kvm_dec_apicv_irq_window_req(struct kvm *kvm)
+{
+	kvm_inc_or_dec_irq_window_inhibit(kvm, false);
+}
+
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
 		       void *insn, int insn_len);
 void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 24b9c2275821..559e8fa76b7e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3729,8 +3729,11 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 	 * the case in which the interrupt window was requested while L1 was
 	 * active (the vCPU was not running nested).
 	 */
-	if (!kvm_cpu_has_injectable_intr(vcpu) || is_guest_mode(vcpu))
-		kvm_clear_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN);
+	if (svm->avic_irq_window &&
+	    (!kvm_cpu_has_injectable_intr(vcpu) || is_guest_mode(vcpu))) {
+		svm->avic_irq_window = false;
+		kvm_dec_apicv_irq_window_req(svm->vcpu.kvm);
+	}
 
 	trace_kvm_inj_virq(intr->nr, intr->soft, reinjected);
 	++vcpu->stat.irq_injections;
@@ -3932,17 +3935,28 @@ static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
 	 */
	if (vgif || gif_set(svm)) {
 		/*
-		 * IRQ window is not needed when AVIC is enabled,
-		 * unless we have pending ExtINT since it cannot be injected
-		 * via AVIC. In such case, KVM needs to temporarily disable AVIC,
-		 * and fallback to injecting IRQ via V_IRQ.
+		 * KVM only enables IRQ windows when AVIC is enabled if there's
+		 * a pending ExtINT, since it cannot be injected via AVIC (ExtINT
+		 * bypasses the local APIC).  V_IRQ is ignored by hardware when
+		 * AVIC is enabled, and so KVM needs to temporarily disable
+		 * AVIC in order to detect when it's ok to inject the ExtINT.
 		 *
-		 * If running nested, AVIC is already locally inhibited
-		 * on this vCPU, therefore there is no need to request
-		 * the VM wide AVIC inhibition.
+		 * If running nested, AVIC is already locally inhibited on this
+		 * vCPU (L2 vCPUs use a different MMU that never maps the AVIC
+		 * backing page), therefore there is no need to increment the
+		 * VM-wide AVIC inhibit.  KVM will re-evaluate events when the
+		 * vCPU exits to L1 and enable an IRQ window if the ExtINT is
+		 * still pending.
+		 *
+		 * Note, the IRQ window inhibit needs to be updated even if
+		 * AVIC is inhibited for a different reason, as KVM needs to
+		 * keep AVIC inhibited if the other reason is cleared and there
+		 * is still an injectable interrupt pending.
 		 */
-		if (!is_guest_mode(vcpu))
-			kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_IRQWIN);
+		if (enable_apicv && !svm->avic_irq_window && !is_guest_mode(vcpu)) {
+			svm->avic_irq_window = true;
+			kvm_inc_apicv_irq_window_req(vcpu->kvm);
+		}
 
 		svm_set_vintr(svm);
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ebd7b36b1ceb..68675b25ef8e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -333,6 +333,7 @@ struct vcpu_svm {
 
 	bool guest_state_loaded;
 
+	bool avic_irq_window;
 	bool x2avic_msrs_intercepted;
 	bool lbr_msrs_intercepted;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8acfdfc583a1..2528dfffb42b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10994,6 +10994,25 @@ void kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 }
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_or_clear_apicv_inhibit);
 
+void kvm_inc_or_dec_irq_window_inhibit(struct kvm *kvm, bool inc)
+{
+	int add = inc ? 1 : -1;
+
+	if (!enable_apicv)
+		return;
+
+	/*
+	 * Strictly speaking, the lock is only needed if going 0->1 or 1->0,
+	 * a la atomic_dec_and_mutex_lock.  However, ExtINTs are rare and
+	 * only target a single CPU, so that is the common case; do not
+	 * bother eliding the down_write()/up_write() pair.
+	 */
+	guard(rwsem_write)(&kvm->arch.apicv_update_lock);
+	if (atomic_add_return(add, &kvm->arch.apicv_nr_irq_window_req) == inc)
+		__kvm_set_or_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_IRQWIN, inc);
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_inc_or_dec_irq_window_inhibit);
+
 static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 {
 	if (!kvm_apic_present(vcpu))
-- 
2.52.0.457.g6b5491de43-goog

IRQ windows represent times during which an IRQ can be injected into a vCPU, i.e. times when the vCPU is running with RFLAGS.IF=1 and GIF enabled (TPR/PPR don't matter, since KVM controls interrupt injection and only injects one interrupt at a time).

On SVM, when emulating the local APIC (i.e. AVIC disabled), KVM detects IRQ windows by injecting a dummy virtual interrupt through VMCB.V_IRQ and intercepting virtual interrupts (INTERCEPT_VINTR). This intercept triggers as soon as the guest enables interrupts and is about to take the dummy interrupt, at which point the actual interrupt can be injected through VMCB.EVENTINJ.
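For context, a condensed, non-verbatim sketch of that detection mechanism, paraphrased from svm_set_vintr() and interrupt_window_interception():

	/* Arm the window: intercept VINTR and queue a dummy virtual IRQ. */
	svm_set_intercept(svm, INTERCEPT_VINTR);
	svm->vmcb->control.int_ctl |= V_IRQ_MASK;

	/*
	 * Once the guest runs with RFLAGS.IF=1 and GIF=1, the dummy V_IRQ
	 * becomes deliverable and the VINTR intercept fires; KVM then
	 * clears the dummy and injects the real event via VMCB.EVENTINJ.
	 */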
When AVIC is enabled, VMCB.V_IRQ is ignored by the hardware, and so detecting IRQ windows requires AVIC to be inhibited. However, this is only necessary for ExtINTs, since all other interrupts can be injected either by directly setting IRR in the APIC backing page and letting the AVIC hardware inject the interrupt into the guest, or via VMCB.V_NMI for NMIs.

If AVIC is enabled but inhibited for some other reason, KVM has to request the IRQ window inhibit every time it has to inject an interrupt into the guest. This is because APICv inhibits are dynamic in nature, so KVM has to be sure that AVIC is inhibited for purposes of discovering an IRQ window even if the other inhibit is cleared in the meantime. This is particularly problematic with APICV_INHIBIT_REASON_PIT_REINJ, which stays set throughout the life of the guest and results in KVM rapidly toggling the IRQ window inhibit, generating contention on apicv_update_lock.
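To make the toggling concrete, a hypothetical event sequence with an in-kernel PIT in reinject mode (the PIT_REINJ inhibit set for the life of the VM), under the counting scheme from the previous patch:

	/*
	 * PIT tick, AVIC inhibited (PIT_REINJ) => inject via V_IRQ:
	 *   svm_enable_irq_window()
	 *     -> count 0->1, IRQWIN inhibit set     (apicv_update_lock, write)
	 *   guest opens a window, IRQ is injected
	 *     -> count 1->0, IRQWIN inhibit cleared  (apicv_update_lock, write)
	 *
	 * I.e. two write-side acquisitions of apicv_update_lock per tick,
	 * even though the effective inhibit state never actually changes.
	 */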
Address this by setting and clearing APICV_INHIBIT_REASON_IRQWIN lazily: if some other inhibit reason is already set, just increment the IRQ window request count and do not update apicv_inhibit_reasons immediately. If any other inhibit reason is set/cleared in the meantime, re-evaluate APICV_INHIBIT_REASON_IRQWIN by checking the IRQ window request count and update apicv_inhibit_reasons appropriately. Otherwise, only the IRQ window request count is incremented/decremented each time an IRQ window is requested. This eliminates much of the contention on the apicv_update_lock semaphore and does away with much of the performance degradation.

Co-developed-by: Paolo Bonzini
Signed-off-by: Paolo Bonzini
Co-developed-by: Naveen N Rao (AMD)
Signed-off-by: Naveen N Rao (AMD)
Tested-by: Naveen N Rao (AMD)
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2528dfffb42b..822644d23933 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10953,7 +10953,11 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 
 	old = new = kvm->arch.apicv_inhibit_reasons;
 
-	set_or_clear_apicv_inhibit(&new, reason, set);
+	if (reason != APICV_INHIBIT_REASON_IRQWIN)
+		set_or_clear_apicv_inhibit(&new, reason, set);
+
+	set_or_clear_apicv_inhibit(&new, APICV_INHIBIT_REASON_IRQWIN,
+				   atomic_read(&kvm->arch.apicv_nr_irq_window_req));
 
 	if (!!old != !!new) {
 		/*
@@ -11001,6 +11005,26 @@ void kvm_inc_or_dec_irq_window_inhibit(struct kvm *kvm, bool inc)
 	if (!enable_apicv)
 		return;
 
+	/*
+	 * IRQ windows are requested either because of ExtINT injections, or
+	 * because APICv is already disabled/inhibited for another reason.
+	 * While ExtINT injections are rare and should not happen while the
+	 * vCPU is running its actual workload, it's worth avoiding thrashing
+	 * if the IRQ window is being requested because APICv is already
+	 * inhibited.  So, toggle the actual inhibit (which requires taking
+	 * the lock for write) if and only if there's no other inhibit.
+	 * kvm_set_or_clear_apicv_inhibit() always evaluates the IRQ window
+	 * count; thus the IRQ window inhibit _will_ be lazily updated on
+	 * the next call, if one ever happens.
+	 */
+	if (READ_ONCE(kvm->arch.apicv_inhibit_reasons) & ~BIT(APICV_INHIBIT_REASON_IRQWIN)) {
+		guard(rwsem_read)(&kvm->arch.apicv_update_lock);
+		if (READ_ONCE(kvm->arch.apicv_inhibit_reasons) & ~BIT(APICV_INHIBIT_REASON_IRQWIN)) {
+			atomic_add(add, &kvm->arch.apicv_nr_irq_window_req);
+			return;
+		}
+	}
+
 	/*
 	 * Strictly speaking, the lock is only needed if going 0->1 or 1->0,
 	 * a la atomic_dec_and_mutex_lock.  However, ExtINTs are rare and
-- 
2.52.0.457.g6b5491de43-goog

Force apicv_update_lock and apicv_nr_irq_window_req to reside in their own cacheline to avoid generating significant contention due to false sharing when KVM is constantly creating IRQ windows. E.g. apicv_inhibit_reasons is read on every VM-Enter; disabled_exits is read on page faults, on PAUSE exits, if a vCPU is scheduled out, etc.; kvmclock_offset is read every time a vCPU needs to refresh kvmclock; and so on and so forth.

Isolating the write-mostly fields from all other (read-mostly) fields improves performance by 7-8% when running netperf TCP_RR between two guests on the same physical host when using an in-kernel PIT in re-inject mode.
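For illustration, a schematic of the false sharing being eliminated (the layout is illustrative, not taken from pahole output):

	/*
	 * Before: write-mostly and read-mostly fields share a cacheline.
	 *
	 *   | ... | apicv_update_lock | apicv_inhibit_reasons | ... |
	 *            ^ written on every    ^ read on every
	 *              window open/close     VM-Enter
	 *
	 * Each write-side lock acquisition dirties the line, forcing every
	 * other vCPU to re-fetch it just to check apicv_inhibit_reasons.
	 * Padding the write-mostly pair out to a dedicated cacheline keeps
	 * the read-mostly fields cleanly shared across CPUs.
	 */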
Reported-by: Naveen N Rao (AMD)
Closes: https://lore.kernel.org/all/yrxhngndj37edud6tj5y3vunaf7nirwor4n63yf4275wdocnd3@c77ujgialc6r
Tested-by: Naveen N Rao (AMD)
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b08baeff98b2..8a9f797b6a68 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1435,13 +1435,23 @@ struct kvm_arch {
 	bool apic_access_memslot_enabled;
 	bool apic_access_memslot_inhibited;
 
+	/*
+	 * Force apicv_update_lock and apicv_nr_irq_window_req to reside in a
+	 * dedicated cacheline.  They are write-mostly, whereas most everything
+	 * else in kvm_arch is read-mostly.  Note that apicv_inhibit_reasons is
+	 * read-mostly: toggling VM-wide inhibits is rare; _checking_ for
+	 * inhibits is common.
+	 */
+	____cacheline_aligned
 	/*
 	 * Protects apicv_inhibit_reasons and apicv_nr_irq_window_req (with an
 	 * asterisk, see kvm_inc_or_dec_irq_window_inhibit() for details).
 	 */
 	struct rw_semaphore apicv_update_lock;
-	unsigned long apicv_inhibit_reasons;
 	atomic_t apicv_nr_irq_window_req;
+	____cacheline_aligned
+
+	unsigned long apicv_inhibit_reasons;
 
 	gpa_t wall_clock;
-- 
2.52.0.457.g6b5491de43-goog