When emitting an Indirect Branch Prediction Barrier (IBPB) to isolate
different guest security domains (different vCPUs, or L1 vs. L2 in the
same vCPU), defer the IBPB until VM-Enter is imminent to avoid redundant
and/or unnecessary IBPBs.  E.g. if a vCPU is loaded on a CPU without ever
doing VM-Enter, then _KVM_ isn't responsible for doing an IBPB, as KVM's
job is purely to mitigate guest<=>guest attacks; guest=>host attacks are
covered by IBRS.

Cc: stable@vger.kernel.org
Cc: Yosry Ahmed
Cc: Jim Mattson
Cc: David Kaplan
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/x86.c              | 7 ++++++-
 arch/x86/kvm/x86.h              | 2 +-
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e441f270f354..76bbc80a2d1d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -826,6 +826,7 @@ struct kvm_vcpu_arch {
 	u64 smbase;
 	u64 smi_count;
 	bool at_instruction_boundary;
+	bool need_ibpb;
 	bool tpr_access_reporting;
 	bool xfd_no_write_intercept;
 	u64 microcode_version;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8acfdfc583a1..e5ae655702b4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5187,7 +5187,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * is handled on the nested VM-Exit path.
 	 */
 	if (static_branch_likely(&switch_vcpu_ibpb))
-		indirect_branch_prediction_barrier();
+		vcpu->arch.need_ibpb = true;
 
 	per_cpu(last_vcpu, cpu) = vcpu;
 }
@@ -11315,6 +11315,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 	}
 
+	if (unlikely(vcpu->arch.need_ibpb)) {
+		indirect_branch_prediction_barrier();
+		vcpu->arch.need_ibpb = false;
+	}
+
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
 		switch_fpu_return();
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 70e81f008030..6708142d051d 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -169,7 +169,7 @@ static inline void kvm_nested_vmexit_handle_ibrs(struct kvm_vcpu *vcpu)
 
 	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
 	    guest_cpu_cap_has(vcpu, X86_FEATURE_AMD_IBRS))
-		indirect_branch_prediction_barrier();
+		vcpu->arch.need_ibpb = true;
 }
 
 /*
-- 
2.52.0.457.g6b5491de43-goog