Previous patches have introduced a mechanism to prevent kernel text updates
from inducing interference on isolated CPUs. A similar action is required
for kernel-range TLB flushes in order to silence the biggest remaining
cause of isolated CPU IPI interference. These flushes are mostly caused by
vmalloc manipulations - e.g. on x86 with CONFIG_VMAP_STACK, spawning
enough processes will easily trigger flushes.

Unfortunately, the newly added context_tracking IPI deferral mechanism
cannot be leveraged for TLB flushes, as the deferred work would be
executed too late. Consider the following execution flow:

  !interrupt!
  SWITCH_TO_KERNEL_CR3 // vmalloc range becomes accessible
  idtentry_func_foo()
    irqentry_enter()
      irqentry_enter_from_user_mode()
        enter_from_user_mode()
          [...]
            ct_kernel_enter_state()
              ct_work_flush() // deferred flush would be done here

Since there is no sane way to assert no stale entry is accessed during
kernel entry, any code executed between SWITCH_TO_KERNEL_CR3 and
ct_work_flush() is at risk of accessing a stale entry.

Dave had suggested hacking up something within SWITCH_TO_KERNEL_CR3
itself, which is what has been implemented in the previous patches.

Make kernel-range TLB flush deferral available via CONFIG_COALESCE_TLBI.

Signed-off-by: Valentin Schneider
---
 arch/x86/Kconfig | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3f1557b7acd8f..390e1dbe5d4de 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2188,6 +2188,23 @@ config ADDRESS_MASKING
 	  The capability can be used for efficient address sanitizers (ASAN)
 	  implementation and for optimizations in JITs.
 
+config COALESCE_TLBI
+	def_bool n
+	prompt "Coalesce kernel TLB flushes for NOHZ-full CPUs"
+	depends on X86_64 && MITIGATION_PAGE_TABLE_ISOLATION && NO_HZ_FULL
+	help
+	  TLB flushes for kernel addresses can lead to IPIs being sent to
+	  NOHZ-full CPUs, thus kicking them out of userspace.
+
+	  This option coalesces kernel-range TLB flushes for NOHZ-full CPUs into
+	  a single flush executed at kernel entry, right after switching to the
+	  kernel page table. Note that this flush is unconditional, even if no
+	  remote flush was issued during the previous userspace execution window.
+
+	  This obviously makes the user->kernel transition overhead even worse.
+
+	  If unsure, say N.
+
 config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
-- 
2.51.0