From: Yury Norov (NVIDIA)

Before calling wbnoinvd_on_cpus_mask(), the function checks the cpumask
for emptiness. This is useless: wbnoinvd_on_cpus_mask() ends up in
smp_call_function_many_cond(), which handles an empty cpumask correctly.

While there, move the function-wide comment on top of the function.

Fixes: 6f38f8c57464 ("KVM: SVM: Flush cache only on CPUs running SEV guest")
Signed-off-by: Yury Norov (NVIDIA)
---
 arch/x86/kvm/svm/sev.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2fbdebf79fbb..49d7557de8bc 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -716,15 +716,12 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
+/*
+ * The caller is responsible for ensuring correctness if the mask
+ * can be modified, e.g. if a CPU could be doing VMRUN.
+ */
 static void sev_writeback_caches(struct kvm *kvm)
 {
-	/*
-	 * Note, the caller is responsible for ensuring correctness if the mask
-	 * can be modified, e.g. if a CPU could be doing VMRUN.
-	 */
-	if (cpumask_empty(to_kvm_sev_info(kvm)->have_run_cpus))
-		return;
-
 	/*
 	 * Ensure that all dirty guest tagged cache entries are written back
 	 * before releasing the pages back to the system for use. CLFLUSH will
-- 
2.43.0


Testing whether a CPU is clear in the cpumask just before setting that
exact same CPU is useless: the end result is always the same, the CPU
is set.

While there, switch the CPU setter to the non-atomic version. Atomicity
is useless here because sev_writeback_caches() ends up in a plain
for_each_cpu() loop in smp_call_function_many_cond(), which is not
atomic by nature.

Fixes: 6f38f8c57464 ("KVM: SVM: Flush cache only on CPUs running SEV guest")
Signed-off-by: Yury Norov
---
 arch/x86/kvm/svm/sev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 49d7557de8bc..8170674d39c1 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3498,8 +3498,7 @@ int pre_sev_run(struct vcpu_svm *svm, int cpu)
 	 * have encrypted, dirty data in the cache, and flush caches only for
 	 * CPUs that have entered the guest.
 	 */
-	if (!cpumask_test_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus))
-		cpumask_set_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus);
+	__cpumask_set_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus);
 
 	/* Assign the asid allocated with this SEV guest */
 	svm->asid = asid;
-- 
2.43.0