While we currently track that we are emulating a nested ERET from L1
to L2, we don't track the reverse direction (an exception going from
L2 to L1). Add a new vcpu state flag for this purpose, which will see
some use shortly.

Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 3 ++-
 arch/arm64/kvm/emulate-nested.c   | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 65eead8362e0b..c79747d5f4dd1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1112,7 +1112,8 @@ struct kvm_vcpu_arch {
 #define IN_NESTED_ERET		__vcpu_single_flag(sflags, BIT(7))
 /* SError pending for nested guest */
 #define NESTED_SERROR_PENDING	__vcpu_single_flag(sflags, BIT(8))
-
+/* KVM is currently emulating an L2 to L1 exception */
+#define IN_NESTED_EXCEPTION	__vcpu_single_flag(sflags, BIT(9))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu)	(kern_hyp_va((vcpu)->arch.sve_state) +	\
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index dba7ced74ca5e..15c691a6266d5 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2862,6 +2862,8 @@ static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
 
 	preempt_disable();
 
+	vcpu_set_flag(vcpu, IN_NESTED_EXCEPTION);
+
 	/*
 	 * We may have an exception or PC update in the EL0/EL1 context.
 	 * Commit it before entering EL2.
	 */
@@ -2884,6 +2886,8 @@ static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
 	__kvm_adjust_pc(vcpu);
 	kvm_arch_vcpu_load(vcpu, smp_processor_id());
 
+	vcpu_clear_flag(vcpu, IN_NESTED_EXCEPTION);
+
 	preempt_enable();
 
 	if (kvm_vcpu_has_pmu(vcpu))
-- 
2.47.3

When switching between L1 and L2, we diligently use a non-preemptible
put/load sequence in order to make sure that the old state is saved,
while the new state is brought in. Crucially, this includes the FP
registers.
However, this is a bit silly. The FP registers are completely shared
between the various ELs (just like the GPRs, really), and eagerly
save/restoring those in a non-preemptible section is just overhead.
Not to mention that the next access will end up trapping, something
that becomes exponentially expensive as we nest deeper.

The temptation is therefore to completely drop this save/restore
logic. Why is it valid to do so? By analogy, the hypervisor doesn't
try to police things between EL1 and EL0, or between EL2 and EL0.
Why should it do so between EL2 and EL1 (or EL2 and L2 EL0)?

Once you admit that the FP (and by extension SVE) registers are
EL-agnostic, the things that matter are:

- the trap controls: the effective values are recomputed on each
  entry into the guest to take the EL into account and merge the L0
  and L1 configuration if in a nested context, or directly use the
  L0 configuration in a non-nested context (see __activate_traps()).

- the VL settings: the effective values are also recomputed on each
  entry into the guest (see fpsimd_lazy_switch_to_guest()).

Since we appear to cover all bases, use the vcpu flags indicating
the handling of a nested ERET or exception delivery to avoid the
whole FP save/restore shenanigans.

For an EL1 L3 guest where L1 and L2 have this optimisation, this
results in at least a 10% wall clock reduction when running an I/O
heavy workload, generating a high rate of nested exceptions.
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/fpsimd.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 15e17aca1dec0..73eda0f46b127 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -28,6 +28,10 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	if (!system_supports_fpsimd())
 		return;
 
+	if (vcpu_get_flag(vcpu, IN_NESTED_ERET) ||
+	    vcpu_get_flag(vcpu, IN_NESTED_EXCEPTION))
+		return;
+
 	/*
 	 * Ensure that any host FPSIMD/SVE/SME state is saved and unbound such
 	 * that the host kernel is responsible for restoring this state upon
@@ -102,6 +106,10 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 {
 	unsigned long flags;
 
+	if (vcpu_get_flag(vcpu, IN_NESTED_ERET) ||
+	    vcpu_get_flag(vcpu, IN_NESTED_EXCEPTION))
+		return;
+
 	local_irq_save(flags);
 
 	if (guest_owns_fp_regs()) {
-- 
2.47.3