From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
	Mike Rapoport <rppt@kernel.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Qi Zheng <zhengqi.arch@bytedance.com>,
	Yang Shi <shy828301@gmail.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Peter Xu <peterx@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Will Deacon <will@kernel.org>, Yu Zhao <yuzhao@google.com>,
	Alistair Popple <apopple@nvidia.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	Ira Weiny <ira.weiny@intel.com>,
	Steven Price <steven.price@arm.com>,
	SeongJae Park <sj@kernel.org>,
	Lorenzo Stoakes <lstoakes@gmail.com>,
	Huang Ying <ying.huang@intel.com>,
	Naoya Horiguchi <naoya.horiguchi@nec.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Zack Rusin <zackr@vmware.com>, Jason Gunthorpe <jgg@ziepe.ca>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Minchan Kim <minchan@kernel.org>,
	Christoph Hellwig <hch@infradead.org>, Song Liu <song@kernel.org>,
	Thomas Hellstrom <thomas.hellstrom@linux.intel.com>,
	Ryan Roberts <ryan.roberts@arm.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 29/32] mm/memory: handle_pte_fault() use pte_offset_map_nolock()
Date: Thu, 8 Jun 2023 18:45:05 -0700 (PDT)
Message-ID: <c1107654-3929-60ac-223e-6877cbb86065@google.com>
In-Reply-To: <c1c9a74a-bc5b-15ea-e5d2-8ec34bc921d@google.com>

Make handle_pte_fault() use pte_offset_map_nolock() to get the vmf.ptl
which corresponds to vmf.pte, instead of deriving it later with
pte_lockptr(): by then there is a chance that the pmd entry has changed,
perhaps to none, or to a huge pmd, with no split ptlock in its struct page.
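
A condensed sketch of the race being closed (illustrative only, not part
of the patch; helper names as used in this series):

	vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
	/* ... meanwhile another thread may clear or collapse the pmd ... */
	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
	/*
	 * pte_lockptr() re-reads *pmd: with split ptlocks it may now name
	 * a different lock, or a huge pmd's struct page with no split
	 * ptlock at all, no longer the lock guarding vmf->pte.
	 */

pte_offset_map_nolock() returns the matching ptl together with the map,
or fails (returns NULL) if the pmd is no longer a page table.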

Remove its pmd_devmap_trans_unstable() call: pte_offset_map_nolock()
will handle that case by failing.  Update the "morph" comment above,
looking forward to when shmem or file collapse to THP may not take
mmap_lock for write (or not at all).
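
The caller pattern that the series converges on, as a simplified sketch
(re-validation details vary per call site):

	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
	if (!pte)
		return 0;	/* pmd became none or huge: retry later */
	spin_lock(ptl);		/* the lock matching this pte table */
	/* ... re-validate the entry, then operate on it ... */
	pte_unmap_unlock(pte, ptl);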

do_numa_page() can use the vmf->ptl set by handle_pte_fault() at first,
but must refresh it whenever it refreshes vmf->pte.

do_swap_page()'s pte_unmap_same() (the helper which takes ptl to verify a
two-part PAE orig_pte) can likewise use the vmf->ptl from
handle_pte_fault(); but do_swap_page() is also called by anon THP's
__collapse_huge_page_swapin(), so adjust that to set vmf->ptl via
pte_offset_map_nolock().
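
An annotated sketch of pte_unmap_same()'s core after this change (the
comments here are for illustration, not in the patch): on 32-bit PAE,
sizeof(pte_t) exceeds sizeof(unsigned long), so the earlier lockless
read of orig_pte was two 32-bit loads and may have been torn:

#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		/* re-check the possibly torn orig_pte under the lock
		 * that actually guards this pte table */
		spin_lock(vmf->ptl);
		same = pte_same(*vmf->pte, vmf->orig_pte);
		spin_unlock(vmf->ptl);
	}
#endif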

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c |  6 ++++--
 mm/memory.c     | 38 +++++++++++++-------------------------
 2 files changed, 17 insertions(+), 27 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 49cfa7cdfe93..c11db2e78e95 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1005,6 +1005,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 	unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
 	int result;
 	pte_t *pte = NULL;
+	spinlock_t *ptl;
 
 	for (address = haddr; address < end; address += PAGE_SIZE) {
 		struct vm_fault vmf = {
@@ -1016,7 +1017,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 		};
 
 		if (!pte++) {
-			pte = pte_offset_map(pmd, address);
+			pte = pte_offset_map_nolock(mm, pmd, address, &ptl);
 			if (!pte) {
 				mmap_read_unlock(mm);
 				result = SCAN_PMD_NULL;
@@ -1024,11 +1025,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 			}
 		}
 
-		vmf.orig_pte = *pte;
+		vmf.orig_pte = ptep_get_lockless(pte);
 		if (!is_swap_pte(vmf.orig_pte))
 			continue;
 
 		vmf.pte = pte;
+		vmf.ptl = ptl;
 		ret = do_swap_page(&vmf);
 		/* Which unmaps pte (after perhaps re-checking the entry) */
 		pte = NULL;
diff --git a/mm/memory.c b/mm/memory.c
index c7b920291a72..4ec46eecefd3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2786,10 +2786,9 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 	int same = 1;
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
-		spinlock_t *ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-		spin_lock(ptl);
+		spin_lock(vmf->ptl);
 		same = pte_same(*vmf->pte, vmf->orig_pte);
-		spin_unlock(ptl);
+		spin_unlock(vmf->ptl);
 	}
 #endif
 	pte_unmap(vmf->pte);
@@ -4696,7 +4695,6 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * validation through pte_unmap_same(). It's of NUMA type but
 	 * the pfn may be screwed if the read is non atomic.
 	 */
-	vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
 	spin_lock(vmf->ptl);
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4767,8 +4765,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_MIGRATED;
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
-		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
-		spin_lock(vmf->ptl);
+		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+					       vmf->address, &vmf->ptl);
+		if (unlikely(!vmf->pte))
+			goto out;
 		if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			goto out;
@@ -4897,27 +4897,16 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 		vmf->pte = NULL;
 		vmf->flags &= ~FAULT_FLAG_ORIG_PTE_VALID;
 	} else {
-		/*
-		 * If a huge pmd materialized under us just retry later.  Use
-		 * pmd_trans_unstable() via pmd_devmap_trans_unstable() instead
-		 * of pmd_trans_huge() to ensure the pmd didn't become
-		 * pmd_trans_huge under us and then back to pmd_none, as a
-		 * result of MADV_DONTNEED running immediately after a huge pmd
-		 * fault in a different thread of this mm, in turn leading to a
-		 * misleading pmd_trans_huge() retval. All we have to ensure is
-		 * that it is a regular pmd that we can walk with
-		 * pte_offset_map() and we can do that through an atomic read
-		 * in C, which is what pmd_trans_unstable() provides.
-		 */
-		if (pmd_devmap_trans_unstable(vmf->pmd))
-			return 0;
 		/*
 		 * A regular pmd is established and it can't morph into a huge
-		 * pmd from under us anymore at this point because we hold the
-		 * mmap_lock read mode and khugepaged takes it in write mode.
-		 * So now it's safe to run pte_offset_map().
+		 * pmd by anon khugepaged, since that takes mmap_lock in write
+		 * mode; but shmem or file collapse to THP could still morph
+		 * it into a huge pmd: just retry later if so.
 		 */
-		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
+		vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
+						 vmf->address, &vmf->ptl);
+		if (unlikely(!vmf->pte))
+			return 0;
 		vmf->orig_pte = ptep_get_lockless(vmf->pte);
 		vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;
 
@@ -4936,7 +4925,6 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
 
-	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
 	spin_lock(vmf->ptl);
 	entry = vmf->orig_pte;
 	if (unlikely(!pte_same(*vmf->pte, entry))) {
-- 
2.35.3

