The AMD APM states that the VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, and
INVLPGA instructions generate a #UD when EFER.SVME is cleared. Currently,
when VMLOAD, VMSAVE, or CLGI is executed in L1 with EFER.SVME cleared, no
#UD is generated in certain cases. This is because the intercepts for these
instructions are cleared based on whether vls or vgif is enabled, and the
#UD fails to be generated when the intercepts are absent. INVLPGA is always
intercepted, but its exit handler does not call
nested_svm_check_permissions(), which is responsible for checking EFER.SVME
and queuing the #UD exception.

Fix the missing #UD generation by ensuring that all relevant instructions
are intercepted when EFER.SVME is cleared and that the exit handlers
contain the necessary checks. VMMCALL is special because KVM's ABI is that
VMCALL/VMMCALL are always supported for L1 and never fault.

Signed-off-by: Kevin Cheng
---
 arch/x86/kvm/svm/svm.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 24d59ccfa40d9..fc1b8707bb00c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -228,6 +228,14 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 		if (!is_smm(vcpu))
 			svm_free_nested(svm);
 
+		/*
+		 * If EFER.SVME is being cleared, we must intercept these
+		 * instructions to ensure #UD is generated.
+		 */
+		svm_set_intercept(svm, INTERCEPT_CLGI);
+		svm_set_intercept(svm, INTERCEPT_VMSAVE);
+		svm_set_intercept(svm, INTERCEPT_VMLOAD);
+		svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 	} else {
 		int ret = svm_allocate_nested(svm);
 
@@ -242,6 +250,15 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 		 */
 		if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
 			set_exception_intercept(svm, GP_VECTOR);
+
+		if (vgif)
+			svm_clr_intercept(svm, INTERCEPT_CLGI);
+
+		if (vls) {
+			svm_clr_intercept(svm, INTERCEPT_VMSAVE);
+			svm_clr_intercept(svm, INTERCEPT_VMLOAD);
+			svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+		}
 	}
 }
 
@@ -2291,8 +2308,14 @@ static int clgi_interception(struct kvm_vcpu *vcpu)
 
 static int invlpga_interception(struct kvm_vcpu *vcpu)
 {
-	gva_t gva = kvm_rax_read(vcpu);
-	u32 asid = kvm_rcx_read(vcpu);
+	gva_t gva;
+	u32 asid;
+
+	if (nested_svm_check_permissions(vcpu))
+		return 1;
+
+	gva = kvm_rax_read(vcpu);
+	asid = kvm_rcx_read(vcpu);
 
 	/* FIXME: Handle an address size prefix. */
 	if (!is_long_mode(vcpu))
-- 
2.52.0.351.gbe84eed79e-goog
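For context, the EFER.SVME check that invlpga_interception() now relies on
is provided by nested_svm_check_permissions() in arch/x86/kvm/svm/nested.c.
The sketch below only reflects the behavior described above (the real
helper also performs additional privilege checks), so treat the exact body
as illustrative rather than the kernel's implementation:

int nested_svm_check_permissions(struct kvm_vcpu *vcpu)
{
	/* SVM is not enabled for the guest: the instruction must #UD. */
	if (!(vcpu->arch.efer & EFER_SVME)) {
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 1;
	}

	/* Additional checks (e.g. CPL) elided from this sketch. */

	return 0;
}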
The AMD APM states that if the VMMCALL instruction is not intercepted, the
instruction raises a #UD exception. Create a vmmcall exit handler that
generates a #UD if a VMMCALL exit from L2 is being handled by L0, which
means that L1 did not intercept the VMMCALL instruction.

Co-developed-by: Sean Christopherson
Co-developed-by: Yosry Ahmed
Signed-off-by: Kevin Cheng
---
 arch/x86/kvm/svm/svm.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fc1b8707bb00c..482495ad72d22 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3179,6 +3179,20 @@ static int bus_lock_exit(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int vmmcall_interception(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * If VMMCALL from L2 is not intercepted by L1, the instruction raises
+	 * a #UD exception.
+	 */
+	if (is_guest_mode(vcpu)) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	return kvm_emulate_hypercall(vcpu);
+}
+
 static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[SVM_EXIT_READ_CR0]			= cr_interception,
 	[SVM_EXIT_READ_CR3]			= cr_interception,
@@ -3229,7 +3243,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[SVM_EXIT_TASK_SWITCH]			= task_switch_interception,
 	[SVM_EXIT_SHUTDOWN]			= shutdown_interception,
 	[SVM_EXIT_VMRUN]			= vmrun_interception,
-	[SVM_EXIT_VMMCALL]			= kvm_emulate_hypercall,
+	[SVM_EXIT_VMMCALL]			= vmmcall_interception,
 	[SVM_EXIT_VMLOAD]			= vmload_interception,
 	[SVM_EXIT_VMSAVE]			= vmsave_interception,
 	[SVM_EXIT_STGI]				= stgi_interception,
-- 
2.52.0.351.gbe84eed79e-goog
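To illustrate why reaching L0's handler while L2 is active implies that L1
did not intercept VMMCALL: an intercepted exit is reflected to L1 by the
nested exit handling path before L0's exit handlers run. The snippet below
is a simplified model of the resulting routing, not actual KVM code;
reflect_exit_to_l1() is a hypothetical helper standing in for that
reflection path:

static int route_vmmcall_exit(struct kvm_vcpu *vcpu, bool l1_intercepts_vmmcall)
{
	/* L1 intercepts VMMCALL: the exit is forwarded to L1, not handled here. */
	if (is_guest_mode(vcpu) && l1_intercepts_vmmcall)
		return reflect_exit_to_l1(vcpu);	/* hypothetical helper */

	/* L2 executed VMMCALL without L1 intercepting it: architectural #UD. */
	if (is_guest_mode(vcpu)) {
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 1;
	}

	/* L1 VMMCALL: always treated as a hypercall per KVM's ABI. */
	return kvm_emulate_hypercall(vcpu);
}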