The lazy_mmu API now allows nested sections to be handled by arch code:
enter() can return a flag if called inside another lazy_mmu section, so
that the matching call to leave() leaves any optimisation enabled.

This patch implements that new logic for sparc: if there is an active
batch, then enter() returns LAZY_MMU_NESTED and the matching leave()
leaves batch->active set.

The preempt_{enable,disable} calls are left untouched as they already
handle nesting themselves. TLB flushing is still done in leave()
regardless of the nesting level, as the caller may rely on it whether
nesting is occurring or not.

Signed-off-by: Kevin Brodsky
---
 arch/sparc/mm/tlb.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index bf5094b770af..fdc33438b85f 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -56,9 +56,13 @@ lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
 
 	preempt_disable();
 	tb = this_cpu_ptr(&tlb_batch);
-	tb->active = 1;
 
-	return LAZY_MMU_DEFAULT;
+	if (!tb->active) {
+		tb->active = 1;
+		return LAZY_MMU_DEFAULT;
+	} else {
+		return LAZY_MMU_NESTED;
+	}
 }
 
 void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
@@ -67,7 +71,10 @@ void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
 
 	if (tb->tlb_nr)
 		flush_tlb_pending();
-	tb->active = 0;
+
+	if (state != LAZY_MMU_NESTED)
+		tb->active = 0;
+
 	preempt_enable();
 }
-- 
2.47.0
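
For illustration only, the nesting semantics described above can be modelled as a user-space sketch. This is not kernel code: `sketch_enter_lazy_mmu()`, `sketch_leave_lazy_mmu()` and the single `tb_active` flag are hypothetical stand-ins for the per-CPU `tlb_batch` and the arch hooks, and preemption control and TLB flushing are reduced to comments.

```c
#include <assert.h>

/* Stand-ins for the real lazy_mmu_state_t values (names mirror the patch). */
typedef int lazy_mmu_state_t;
#define LAZY_MMU_DEFAULT 0
#define LAZY_MMU_NESTED  1

/* Models this CPU's tlb_batch.active; the real code uses a per-CPU struct. */
static int tb_active;

static lazy_mmu_state_t sketch_enter_lazy_mmu(void)
{
	/* (real code: preempt_disable() here) */
	if (!tb_active) {
		tb_active = 1;		/* outermost section: start batching */
		return LAZY_MMU_DEFAULT;
	}
	return LAZY_MMU_NESTED;		/* nested section: batch already active */
}

static void sketch_leave_lazy_mmu(lazy_mmu_state_t state)
{
	/* (real code: flush pending TLB entries here, at every nesting level) */
	if (state != LAZY_MMU_NESTED)
		tb_active = 0;		/* only the outermost leave() ends batching */
	/* (real code: preempt_enable() here) */
}
```

The key property is that a nested enter()/leave() pair is a no-op with respect to `tb_active`: only the call that set the flag clears it, so the outer section's batching survives inner sections.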