From: Michal Hocko
To: Andrew Morton
Cc: David Rientjes, Mel Gorman, Tetsuo Handa, Oleg Nesterov,
	Linus Torvalds, Hugh Dickins, Andrea Argangeli, Rik van Riel,
	LKML, Michal Hocko
Subject: [PATCH 2/5] oom reaper: handle mlocked pages
Date: Wed, 3 Feb 2016 14:13:57 +0100
Message-Id: <1454505240-23446-3-git-send-email-mhocko@kernel.org>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1454505240-23446-1-git-send-email-mhocko@kernel.org>
References: <1454505240-23446-1-git-send-email-mhocko@kernel.org>

From: Michal Hocko

__oom_reap_vmas currently skips over all mlocked VMAs because they need
special treatment before they are unmapped. This is done primarily for
simplicity, but there is no reason to skip them and thereby reduce the
amount of reclaimed memory. The change is safe from the semantic point
of view because try_to_unmap_one during the rmap walk would keep telling
reclaim to cull the page back to the unevictable list and mlock it
again. munlock_vma_pages_all is also safe to call from the oom reaper
context because it doesn't rely on any locks other than mmap_sem (held
for read).

Signed-off-by: Michal Hocko
---
 mm/oom_kill.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 9a0e4e5f50b4..840e03986497 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -443,13 +443,6 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
 			continue;
 
 		/*
-		 * mlocked VMAs require explicit munlocking before unmap.
-		 * Let's keep it simple here and skip such VMAs.
-		 */
-		if (vma->vm_flags & VM_LOCKED)
-			continue;
-
-		/*
 		 * Only anonymous pages have a good chance to be dropped
 		 * without additional steps which we cannot afford as we
 		 * are OOM already.
@@ -459,9 +452,12 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			if (vma->vm_flags & VM_LOCKED)
+				munlock_vma_pages_all(vma);
 			unmap_page_range(&tlb, vma, vma->vm_start,
 					 vma->vm_end, &details);
+		}
 	}
 	tlb_finish_mmu(&tlb, 0, -1);
 	up_read(&mm->mmap_sem);
-- 
2.7.0
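
As a purely illustrative aside (not part of the patch), the kind of workload
this affects looks roughly like the following hypothetical user-space sketch:
a task that pins a large anonymous mapping with mlock(). Such a VM_LOCKED VMA
was previously skipped by the oom reaper entirely; with this change it is
munlocked via munlock_vma_pages_all and then unmapped once the task is picked
as an OOM victim. The 1 GiB size and the program itself are arbitrary
assumptions for illustration only.

/*
 * Hypothetical illustration, not part of the patch: a task whose memory
 * consists of a large mlocked anonymous mapping.
 */
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1 GiB, arbitrary */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	if (mlock(buf, len))		/* the VMA gets VM_LOCKED */
		return 1;
	memset(buf, 0xaa, len);		/* fault everything in */
	pause();			/* sit around as a potential OOM victim */
	return 0;
}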