The OOM reaper can quickly reap a victim's memory when the system encounters OOM, helping the system recover. If the victim is frozen and cannot be unfrozen in time, however, the reaper's two-second delay means no memory is reclaimed and the system stays in the OOM state for those two seconds.

Before scheduling the oom_reaper work, check whether the victim is frozen. If it is, queue the OOM reaper without delay.

Signed-off-by: zhongjinji
---
 mm/oom_kill.c | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 25923cfec9c6..4b4d73b1e00d 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -683,6 +683,41 @@ static void wake_oom_reaper(struct timer_list *timer)
 	wake_up(&oom_reaper_wait);
 }
 
+/*
+ * When the victim is frozen, the OOM reaper should not be delayed, because
+ * if the victim cannot be unfrozen promptly, it may block the system from
+ * quickly recovering from the OOM state.
+ */
+static bool should_delay_oom_reap(struct task_struct *tsk)
+{
+	struct mm_struct *mm = tsk->mm;
+	struct task_struct *p;
+	bool ret = false;
+
+	if (!mm)
+		return true;
+
+	if (!frozen(tsk))
+		return true;
+
+	if (atomic_read(&mm->mm_users) <= 1)
+		return false;
+
+	rcu_read_lock();
+	for_each_process(p) {
+		if (!process_shares_mm(p, mm))
+			continue;
+		if (same_thread_group(tsk, p))
+			continue;
+		ret = !frozen(p);
+		if (ret)
+			break;
+	}
+	rcu_read_unlock();
+
+	return ret;
+}
+
 /*
  * Give the OOM victim time to exit naturally before invoking the oom_reaping.
  * The timers timeout is arbitrary... the longer it is, the longer the worst
@@ -694,13 +729,16 @@ static void wake_oom_reaper(struct timer_list *timer)
 #define OOM_REAPER_DELAY (2*HZ)
 static void queue_oom_reaper(struct task_struct *tsk)
 {
+	bool delay;
+
 	/* mm is already queued? */
 	if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
 		return;
 
 	get_task_struct(tsk);
+	delay = should_delay_oom_reap(tsk);
 	timer_setup(&tsk->oom_reaper_timer, wake_oom_reaper, 0);
-	tsk->oom_reaper_timer.expires = jiffies + OOM_REAPER_DELAY;
+	tsk->oom_reaper_timer.expires = jiffies + (delay ? OOM_REAPER_DELAY : 0);
 	add_timer(&tsk->oom_reaper_timer);
 }
-- 
2.17.1

When a process is OOM killed without a reaper delay, the OOM reaper and the exiting task's exit_mmap() are likely to run concurrently. Both walk the mm's maple tree along the same path, so they easily end up unmapping the same vma and contending for its pte spinlock.

exit_mmap() walks the maple tree from low to high addresses. To reduce the chance of both tasks unmapping the same vma at the same time, make the OOM reaper walk the tree from high to low addresses.

Signed-off-by: zhongjinji
---
 mm/oom_kill.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 4b4d73b1e00d..a0650da9ec9c 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 	bool ret = true;
-	VMA_ITERATOR(vmi, mm, 0);
+	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, 0);
 
 	/*
 	 * Tell all users of get_user/copy_from_user etc... that the content
@@ -526,7 +526,12 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	for_each_vma(vmi, vma) {
+	/*
+	 * When two tasks unmap the same vma at the same time, they may contend
+	 * for the pte spinlock. To reduce the probability of unmapping the same
+	 * vma, the oom reaper traverses the vma maple tree in reverse order.
+	 */
+	while ((vma = mas_find_rev(&mas, 0)) != NULL) {
 		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
 			continue;
-- 
2.17.1