From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756853AbcBWBgN (ORCPT ); Mon, 22 Feb 2016 20:36:13 -0500
Received: from mail-pf0-f178.google.com ([209.85.192.178]:33866 "EHLO
	mail-pf0-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1756755AbcBWBgJ (ORCPT );
	Mon, 22 Feb 2016 20:36:09 -0500
Date: Mon, 22 Feb 2016 17:36:07 -0800 (PST)
From: David Rientjes
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Michal Hocko
cc: Andrew Morton , Mel Gorman , Tetsuo Handa , Oleg Nesterov ,
	Linus Torvalds , Hugh Dickins , Andrea Argangeli , Rik van Riel ,
	linux-mm@kvack.org, LKML , Michal Hocko
Subject: Re: [PATCH 2/5] oom reaper: handle mlocked pages
In-Reply-To: <1454505240-23446-3-git-send-email-mhocko@kernel.org>
Message-ID:
References: <1454505240-23446-1-git-send-email-mhocko@kernel.org>
	<1454505240-23446-3-git-send-email-mhocko@kernel.org>
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 3 Feb 2016, Michal Hocko wrote:

> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 9a0e4e5f50b4..840e03986497 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -443,13 +443,6 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
> 			continue;
> 
> 		/*
> -		 * mlocked VMAs require explicit munlocking before unmap.
> -		 * Let's keep it simple here and skip such VMAs.
> -		 */
> -		if (vma->vm_flags & VM_LOCKED)
> -			continue;
> -
> -		/*
> 		 * Only anonymous pages have a good chance to be dropped
> 		 * without additional steps which we cannot afford as we
> 		 * are OOM already.
> @@ -459,9 +452,12 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
> 		 * we do not want to block exit_mmap by keeping mm ref
> 		 * count elevated without a good reason.
> 		 */
> -		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
> +		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
> +			if (vma->vm_flags & VM_LOCKED)
> +				munlock_vma_pages_all(vma);
> 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
> 					 &details);
> +		}
> 	}
> 	tlb_finish_mmu(&tlb, 0, -1);
> 	up_read(&mm->mmap_sem);

Are we concerned about munlock_vma_pages_all() taking lock_page() and perhaps stalling forever, the same way it would stall in exit_mmap() for VM_LOCKED vmas, if another thread has locked the same page and is doing an allocation?

I'm wondering if in that case it would be better to do a best-effort munlock_vma_pages_all() with trylock_page() and just give up on releasing memory from that particular vma. In that case, there may be other memory that can be freed with unmap_page_range() that would avoid this livelock.