Linux-mm Archive mirror
* [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio
@ 2024-04-24 13:59 Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 01/10] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Folio migration is widely used in the kernel: memory compaction, memory
hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc.
However, once a poisoned source folio is accessed while migrating, the
kernel will panic.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC (Machine Check Safe Memory Copy), which is already
used in NVDIMM and core-mm paths (e.g. CoW, khugepaged, coredump, ksm copy);
see the copy_mc_to_{user,kernel} and copy_mc_{user_}highpage callers.
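
For illustration only, the existing #MC-safe copy pattern looks roughly like
the sketch below (the function name and error code are placeholders, not the
exact kernel code): on an ARCH_HAS_COPY_MC architecture, copy_mc_highpage()
returns non-zero if the copy hits a poisoned source page, so the caller can
fail gracefully instead of panicking.

  #include <linux/highmem.h>

  /* illustrative sketch, not the real callers */
  static int copy_page_mc_sketch(struct page *dst, struct page *src)
  {
          /* non-zero return means the source page is poisoned */
          if (copy_mc_highpage(dst, src))
                  return -EHWPOISON;
          return 0;
  }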

This series adds a recovery path for the folio copy done during folio
migration. Since folio migration is not guaranteed to succeed anyway, we can
make it tolerant of memory failures: a new helper, folio_mc_copy(), is a
machine-check-safe (#MC) version of folio_copy(). Once a poisoned source
folio is accessed, we return an error and fail the folio migration instead,
which avoids panics like the one shown below (a simplified sketch of the
resulting copy path follows the trace).

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110
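
For reference, with this series the copy step in __migrate_folio() ends up
looking roughly like the sketch below (see patch 8 for the actual diff):

  rc = folio_refs_check_and_freeze(mapping, src, expected_cnt);
  if (rc)
          return rc;

  rc = folio_mc_copy(dst, src);         /* may fail on a poisoned source folio */
  if (rc) {
          if (mapping)
                  folio_ref_unfreeze(src, expected_cnt);
          return rc;                    /* fail this migration, do not panic */
  }

  folio_replace_mapping_and_unfreeze(mapping, dst, src, expected_cnt);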

v2:
- remove patch 11 since fio does not support large folios
- add RB
- rebased on next-20240424

v1:
- no change, resend and rebased on 6.9-rc1

rfcv2:
- Separate __migrate_device_pages() cleanup from patch "remove 
  migrate_folio_extra()", suggested by Matthew
- Split folio_migrate_mapping(), move refcount check/freeze out
  of folio_migrate_mapping(), suggested by Matthew
- add RB

Kefeng Wang (10):
  mm: migrate: simplify __buffer_migrate_folio()
  mm: migrate_device: use more folio in __migrate_device_pages()
  mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY
  mm: migrate: remove migrate_folio_extra()
  mm: remove MIGRATE_SYNC_NO_COPY mode
  mm: migrate: split folio_migrate_mapping()
  mm: add folio_mc_copy()
  mm: migrate: support poisoned recover from migrate folio
  fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio()
  mm: migrate: remove folio_migrate_copy()

 fs/aio.c                     |  15 +---
 fs/hugetlbfs/inode.c         |   5 +-
 include/linux/migrate.h      |   3 -
 include/linux/migrate_mode.h |   5 --
 include/linux/mm.h           |   1 +
 mm/balloon_compaction.c      |   8 --
 mm/migrate.c                 | 157 +++++++++++++++++------------------
 mm/migrate_device.c          |  28 +++----
 mm/util.c                    |  20 +++++
 mm/zsmalloc.c                |   8 --
 10 files changed, 113 insertions(+), 137 deletions(-)

-- 
2.27.0




* [PATCH v2 01/10] mm: migrate: simplify __buffer_migrate_folio()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 02/10] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Use the filemap_migrate_folio() helper to simplify __buffer_migrate_folio().

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 4fbc7ff39da3..9cc5a3e1d97c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -803,24 +803,16 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 		}
 	}
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
+	rc = filemap_migrate_folio(mapping, dst, src, mode);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		goto unlock_buffers;
 
-	folio_attach_private(dst, folio_detach_private(src));
-
 	bh = head;
 	do {
 		folio_set_bh(bh, dst, bh_offset(bh));
 		bh = bh->b_this_page;
 	} while (bh != head);
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
-
-	rc = MIGRATEPAGE_SUCCESS;
 unlock_buffers:
 	if (check_refs)
 		spin_unlock(&mapping->i_private_lock);
-- 
2.27.0




* [PATCH v2 02/10] mm: migrate_device: use more folio in __migrate_device_pages()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 01/10] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 03/10] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY Kefeng Wang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Use newfolio/folio for migrate_folio_extra()/migrate_folio() to
save four compound_head() calls.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate_device.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index a1f87aada1bd..1b6658519f64 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -690,6 +690,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
+		struct folio *newfolio, *folio;
 		int r;
 
 		if (!newpage) {
@@ -724,14 +725,13 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			continue;
 		}
 
-		mapping = page_mapping(page);
+		newfolio = page_folio(newpage);
+		folio = page_folio(page);
+		mapping = folio_mapping(folio);
 
-		if (is_device_private_page(newpage) ||
-		    is_device_coherent_page(newpage)) {
+		if (folio_is_device_private(newfolio) ||
+		    folio_is_device_coherent(newfolio)) {
 			if (mapping) {
-				struct folio *folio;
-
-				folio = page_folio(page);
 
 				/*
 				 * For now only support anonymous memory migrating to
@@ -745,7 +745,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 					continue;
 				}
 			}
-		} else if (is_zone_device_page(newpage)) {
+		} else if (folio_is_zone_device(newfolio)) {
 			/*
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
@@ -754,12 +754,11 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		}
 
 		if (migrate && migrate->fault_page == page)
-			r = migrate_folio_extra(mapping, page_folio(newpage),
-						page_folio(page),
+			r = migrate_folio_extra(mapping, newfolio, folio,
 						MIGRATE_SYNC_NO_COPY, 1);
 		else
-			r = migrate_folio(mapping, page_folio(newpage),
-					page_folio(page), MIGRATE_SYNC_NO_COPY);
+			r = migrate_folio(mapping, newfolio, folio,
+					  MIGRATE_SYNC_NO_COPY);
 		if (r != MIGRATEPAGE_SUCCESS)
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 	}
-- 
2.27.0




* [PATCH v2 03/10] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 01/10] mm: migrate: simplify __buffer_migrate_folio() Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 02/10] mm: migrate_device: use more folio in __migrate_device_pages() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 04/10] mm: migrate: remove migrate_folio_extra() Kefeng Wang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

__migrate_device_pages() never copies the page, which is why
MIGRATE_SYNC_NO_COPY is passed into migrate_folio()/migrate_folio_extra().
A simpler way is to call folio_migrate_mapping()/folio_migrate_flags()
directly, which unifies and simplifies the device pages migration and also
removes the only caller that passes MIGRATE_SYNC_NO_COPY.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate_device.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 1b6658519f64..5c0237b7f26b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -691,7 +691,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
 		struct folio *newfolio, *folio;
-		int r;
+		int r, extra_cnt = 0;
 
 		if (!newpage) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
@@ -753,14 +753,15 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			continue;
 		}
 
+		BUG_ON(folio_test_writeback(folio));
+
 		if (migrate && migrate->fault_page == page)
-			r = migrate_folio_extra(mapping, newfolio, folio,
-						MIGRATE_SYNC_NO_COPY, 1);
-		else
-			r = migrate_folio(mapping, newfolio, folio,
-					  MIGRATE_SYNC_NO_COPY);
+			extra_cnt = 1;
+		r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
 		if (r != MIGRATEPAGE_SUCCESS)
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+		else
+			folio_migrate_flags(newfolio, folio);
 	}
 
 	if (notified)
-- 
2.27.0




* [PATCH v2 04/10] mm: migrate: remove migrate_folio_extra()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (2 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 03/10] mm: migrate_device: unify migrate folio for MIGRATE_SYNC_NO_COPY Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 05/10] mm: remove MIGRATE_SYNC_NO_COPY mode Kefeng Wang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

migrate_folio_extra() is now only called from migrate.c, so convert it to a
static function and add a new src_private argument, which allows it to be
shared by migrate_folio() and filemap_migrate_folio() and simplifies the
code a bit.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  2 --
 mm/migrate.c            | 33 +++++++++++----------------------
 2 files changed, 11 insertions(+), 24 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 938efa2fd6d7..535d1a5561c4 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -63,8 +63,6 @@ extern const char *migrate_reason_names[MR_TYPES];
 #ifdef CONFIG_MIGRATION
 
 void putback_movable_pages(struct list_head *l);
-int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode, int extra_count);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
 		struct folio *src, enum migrate_mode mode);
 int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
diff --git a/mm/migrate.c b/mm/migrate.c
index 9cc5a3e1d97c..ce4142ac8565 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -684,18 +684,19 @@ EXPORT_SYMBOL(folio_migrate_copy);
  *                    Migration functions
  ***********************************************************/
 
-int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode, int extra_count)
+static int __migrate_folio(struct address_space *mapping, struct folio *dst,
+			   struct folio *src, void *src_private,
+			   enum migrate_mode mode)
 {
 	int rc;
 
-	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
-
-	rc = folio_migrate_mapping(mapping, dst, src, extra_count);
-
+	rc = folio_migrate_mapping(mapping, dst, src, 0);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
+	if (src_private)
+		folio_attach_private(dst, folio_detach_private(src));
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		folio_migrate_copy(dst, src);
 	else
@@ -716,9 +717,10 @@ int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
  * Folios are locked upon entry and exit.
  */
 int migrate_folio(struct address_space *mapping, struct folio *dst,
-		struct folio *src, enum migrate_mode mode)
+		  struct folio *src, enum migrate_mode mode)
 {
-	return migrate_folio_extra(mapping, dst, src, mode, 0);
+	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
+	return __migrate_folio(mapping, dst, src, NULL, mode);
 }
 EXPORT_SYMBOL(migrate_folio);
 
@@ -872,20 +874,7 @@ EXPORT_SYMBOL_GPL(buffer_migrate_folio_norefs);
 int filemap_migrate_folio(struct address_space *mapping,
 		struct folio *dst, struct folio *src, enum migrate_mode mode)
 {
-	int ret;
-
-	ret = folio_migrate_mapping(mapping, dst, src, 0);
-	if (ret != MIGRATEPAGE_SUCCESS)
-		return ret;
-
-	if (folio_get_private(src))
-		folio_attach_private(dst, folio_detach_private(src));
-
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
-	return MIGRATEPAGE_SUCCESS;
+	return __migrate_folio(mapping, dst, src, folio_get_private(src), mode);
 }
 EXPORT_SYMBOL_GPL(filemap_migrate_folio);
 
-- 
2.27.0




* [PATCH v2 05/10] mm: remove MIGRATE_SYNC_NO_COPY mode
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (3 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 04/10] mm: migrate: remove migrate_folio_extra() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 06/10] mm: migrate: split folio_migrate_mapping() Kefeng Wang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced MIGRATE_SYNC_NO_COPY to allow offloading the copy to a device DMA
engine. The mode is only consulted by __migrate_device_pages() to decide
whether or not to copy the old page, and it is only set in the hmm path.
Since the previous cleanup removed the last place that sets
MIGRATE_SYNC_NO_COPY, the now-unused mode can be removed.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c      |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 6ed5507cd330..dc7a10f2a6e2 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -410,17 +410,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown.  */
 	spin_lock(&mapping->i_private_lock);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 412f295acebe..6df794ed4066 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1128,10 +1128,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
  */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index ce4142ac8565..6a9bb4af2595 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -697,10 +697,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
@@ -929,7 +926,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 		/* Only writeback folios in full synchronous migration */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			return -EBUSY;
@@ -1187,7 +1183,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			rc = -EBUSY;
@@ -1398,7 +1393,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 			goto out;
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			goto out;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b42d3545ca85..6e7967853477 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1752,14 +1752,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */
-- 
2.27.0




* [PATCH v2 06/10] mm: migrate: split folio_migrate_mapping()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (4 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 05/10] mm: remove MIGRATE_SYNC_NO_COPY mode Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 07/10] mm: add folio_mc_copy() Kefeng Wang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Split folio_migrate_mapping() into two parts, folio_refs_check_and_freeze()
and folio_replace_mapping_and_unfreeze(), and update the comments from page
to folio.

Note that folio_ref_freeze() is moved out of xas_lock_irq(). Since the folio
is already isolated and locked during migration, this should cause no
functional change.
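
With the split, folio_migrate_mapping() just wires the two helpers together,
shown here for clarity (this matches the diff below):

  int folio_migrate_mapping(struct address_space *mapping,
                            struct folio *newfolio, struct folio *folio,
                            int extra_count)
  {
          int ret, expected = folio_expected_refs(mapping, folio) + extra_count;

          /* check (anonymous) or freeze (file-backed) the refcount */
          ret = folio_refs_check_and_freeze(mapping, folio, expected);
          if (ret)
                  return ret;

          /* replace the folio in the mapping and unfreeze the refcount */
          folio_replace_mapping_and_unfreeze(mapping, newfolio, folio, expected);
          return MIGRATEPAGE_SUCCESS;
  }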

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 74 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 32 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6a9bb4af2595..b27c66af385d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -419,50 +419,49 @@ static int folio_expected_refs(struct address_space *mapping,
 }
 
 /*
- * Replace the page in the mapping.
- *
  * The number of remaining references must be:
- * 1 for anonymous pages without a mapping
- * 2 for pages with a mapping
- * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
+ * 1 for anonymous folios without a mapping
+ * 2 for folios with a mapping
+ * 3 for folios with a mapping and PagePrivate/PagePrivate2 set.
  */
-int folio_migrate_mapping(struct address_space *mapping,
-		struct folio *newfolio, struct folio *folio, int extra_count)
+static int folio_refs_check_and_freeze(struct address_space *mapping,
+				       struct folio *folio, int expected_cnt)
+{
+	if (!mapping) {
+		if (folio_ref_count(folio) != expected_cnt)
+			return -EAGAIN;
+	} else {
+		if (!folio_ref_freeze(folio, expected_cnt))
+			return -EAGAIN;
+	}
+
+	return 0;
+}
+
+/* The folio refcount must be freezed if folio with a mapping */
+static void folio_replace_mapping_and_unfreeze(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int expected_cnt)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
-	int dirty;
-	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
 	long entries, i;
+	int dirty;
 
 	if (!mapping) {
-		/* Anonymous page without mapping */
-		if (folio_ref_count(folio) != expected_count)
-			return -EAGAIN;
-
-		/* No turning back from here */
+		/* Anonymous folio without mapping */
 		newfolio->index = folio->index;
 		newfolio->mapping = folio->mapping;
 		if (folio_test_swapbacked(folio))
 			__folio_set_swapbacked(newfolio);
-
-		return MIGRATEPAGE_SUCCESS;
+		return;
 	}
 
 	oldzone = folio_zone(folio);
 	newzone = folio_zone(newfolio);
 
+	/* Now we know that no one else is looking at the folio */
 	xas_lock_irq(&xas);
-	if (!folio_ref_freeze(folio, expected_count)) {
-		xas_unlock_irq(&xas);
-		return -EAGAIN;
-	}
-
-	/*
-	 * Now we know that no one else is looking at the folio:
-	 * no turning back from here.
-	 */
 	newfolio->index = folio->index;
 	newfolio->mapping = folio->mapping;
 	folio_ref_add(newfolio, nr); /* add cache reference */
@@ -478,7 +477,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 		entries = 1;
 	}
 
-	/* Move dirty while page refs frozen and newpage not yet exposed */
+	/* Move dirty while folio refs frozen and newfolio not yet exposed */
 	dirty = folio_test_dirty(folio);
 	if (dirty) {
 		folio_clear_dirty(folio);
@@ -492,22 +491,22 @@ int folio_migrate_mapping(struct address_space *mapping,
 	}
 
 	/*
-	 * Drop cache reference from old page by unfreezing
-	 * to one less reference.
+	 * Since old folio's refcount freezed, now drop cache reference from
+	 * old folio by unfreezing to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	folio_ref_unfreeze(folio, expected_count - nr);
+	folio_ref_unfreeze(folio, expected_cnt - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
 
 	/*
 	 * If moved to a different zone then also account
-	 * the page for that zone. Other VM counters will be
+	 * the folio for that zone. Other VM counters will be
 	 * taken care of when we establish references to the
-	 * new page and drop references to the old page.
+	 * new folio and drop references to the old folio.
 	 *
-	 * Note that anonymous pages are accounted for
+	 * Note that anonymous folios are accounted for
 	 * via NR_FILE_PAGES and NR_ANON_MAPPED if they
 	 * are mapped to swap space.
 	 */
@@ -544,7 +543,18 @@ int folio_migrate_mapping(struct address_space *mapping,
 		}
 	}
 	local_irq_enable();
+}
+
+int folio_migrate_mapping(struct address_space *mapping, struct folio *newfolio,
+			  struct folio *folio, int extra_count)
+{
+	int ret, expected = folio_expected_refs(mapping, folio) + extra_count;
+
+	ret = folio_refs_check_and_freeze(mapping, folio, expected);
+	if (ret)
+		return ret;
 
+	folio_replace_mapping_and_unfreeze(mapping, newfolio, folio, expected);
 	return MIGRATEPAGE_SUCCESS;
 }
 EXPORT_SYMBOL(folio_migrate_mapping);
-- 
2.27.0




* [PATCH v2 07/10] mm: add folio_mc_copy()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (5 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 06/10] mm: migrate: split folio_migrate_mapping() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 08/10] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Add a variant of folio_copy() which uses copy_mc_highpage() to support a
machine-check-safe copy of a folio.
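
folio_mc_copy() returns 0 on success and -EFAULT if the copy hits a poisoned
page. Callers are expected to check the return value and fail gracefully,
roughly like this simplified sketch (the real migration-path users are added
in the following patches):

  rc = folio_mc_copy(dst, src);
  if (rc) {
          /* source folio is poisoned: undo any preparation and bail out */
          return rc;
  }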

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  1 +
 mm/util.c          | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 326a4ce0cff8..89bbb2064a97 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1317,6 +1317,7 @@ void put_pages_list(struct list_head *pages);
 
 void split_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
+int folio_mc_copy(struct folio *dst, struct folio *src);
 
 unsigned long nr_free_buffer_pages(void);
 
diff --git a/mm/util.c b/mm/util.c
index c9e519e6811f..9462dbf7ce02 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -828,6 +828,26 @@ void folio_copy(struct folio *dst, struct folio *src)
 }
 EXPORT_SYMBOL(folio_copy);
 
+int folio_mc_copy(struct folio *dst, struct folio *src)
+{
+	long nr = folio_nr_pages(src);
+	long i = 0;
+	int ret = 0;
+
+	for (;;) {
+		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i))) {
+			ret = -EFAULT;
+			break;
+		}
+		if (++i == nr)
+			break;
+		cond_resched();
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(folio_mc_copy);
+
 int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
 int sysctl_overcommit_ratio __read_mostly = 50;
 unsigned long sysctl_overcommit_kbytes __read_mostly;
-- 
2.27.0




* [PATCH v2 08/10] mm: migrate: support poisoned recover from migrate folio
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (6 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 07/10] mm: add folio_mc_copy() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 09/10] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio() Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 10/10] mm: migrate: remove folio_migrate_copy() Kefeng Wang
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Folio migration is widely used in the kernel: memory compaction, memory
hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc.
However, once a poisoned source folio is accessed while migrating, the
kernel will panic.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC, which is already used in other core-mm paths,
e.g. CoW, khugepaged, coredump and ksm copy; see the copy_mc_to_{user,kernel}
and copy_mc_{user_}highpage callers.

In order to support recovery from a poisoned folio copy during folio
migration, make folio migration tolerant of memory failures and return an
error to the caller instead. Since folio migration is not guaranteed to
succeed anyway, this avoids panics like the one shown below.

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b27c66af385d..60d4e29c5186 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -698,16 +698,25 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 			   struct folio *src, void *src_private,
 			   enum migrate_mode mode)
 {
-	int rc;
+	int rc, expected_cnt = folio_expected_refs(mapping, src);
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
-	if (rc != MIGRATEPAGE_SUCCESS)
+	rc = folio_refs_check_and_freeze(mapping, src, expected_cnt);
+	if (rc)
 		return rc;
 
+	rc = folio_mc_copy(dst, src);
+	if (rc) {
+		if (mapping)
+			folio_ref_unfreeze(src, expected_cnt);
+		return rc;
+	}
+
+	folio_replace_mapping_and_unfreeze(mapping, dst, src, expected_cnt);
+
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
-- 
2.27.0




* [PATCH v2 09/10] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (7 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 08/10] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  2024-04-24 13:59 ` [PATCH v2 10/10] mm: migrate: remove folio_migrate_copy() Kefeng Wang
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

Similar to __migrate_folio(), use folio_mc_copy() in HugeTLB folio migration
to avoid a panic when copying from a poisoned folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/migrate.c         | 14 +++++++++-----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 6df794ed4066..1107e5aa8343 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1128,7 +1128,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index 60d4e29c5186..4493ef57c99f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -567,15 +567,19 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct folio *dst, struct folio *src)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(src));
-	int expected_count;
+	int rc, expected_count = folio_expected_refs(mapping, src);
 
-	xas_lock_irq(&xas);
-	expected_count = folio_expected_refs(mapping, src);
-	if (!folio_ref_freeze(src, expected_count)) {
-		xas_unlock_irq(&xas);
+	if (!folio_ref_freeze(src, expected_count))
 		return -EAGAIN;
+
+	rc = folio_mc_copy(dst, src);
+	if (rc) {
+		folio_ref_unfreeze(src, expected_count);
+		return rc;
 	}
 
+	xas_lock_irq(&xas);
+
 	dst->index = src->index;
 	dst->mapping = src->mapping;
 
-- 
2.27.0




* [PATCH v2 10/10] mm: migrate: remove folio_migrate_copy()
  2024-04-24 13:59 [PATCH v2 00/10] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (8 preceding siblings ...)
  2024-04-24 13:59 ` [PATCH v2 09/10] fs: hugetlbfs: support poison recover from hugetlbfs_migrate_folio() Kefeng Wang
@ 2024-04-24 13:59 ` Kefeng Wang
  9 siblings, 0 replies; 11+ messages in thread
From: Kefeng Wang @ 2024-04-24 13:59 UTC
  To: Andrew Morton, linux-mm
  Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	linux-aio, linux-fsdevel, Zi Yan, Jiaqi Yan, Hugh Dickins,
	Vishal Moola, Kefeng Wang

folio_migrate_copy() is just a wrapper around folio_copy() and
folio_migrate_flags(). It is simple, and only aio uses it now, so unfold it
there and remove folio_migrate_copy().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                | 3 ++-
 include/linux/migrate.h | 1 -
 mm/migrate.c            | 7 -------
 3 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index dc7a10f2a6e2..ea7b956fd900 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -455,7 +455,8 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	 * events from being lost.
 	 */
 	spin_lock_irqsave(&ctx->completion_lock, flags);
-	folio_migrate_copy(dst, src);
+	folio_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	BUG_ON(ctx->ring_folios[idx] != src);
 	ctx->ring_folios[idx] = dst;
 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 535d1a5561c4..b9966d4355d0 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -76,7 +76,6 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
 		__releases(ptl);
 void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
 		struct folio *newfolio, struct folio *folio, int extra_count);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 4493ef57c99f..f393e66813dc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -687,13 +687,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 }
 EXPORT_SYMBOL(folio_migrate_flags);
 
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio)
-{
-	folio_copy(newfolio, folio);
-	folio_migrate_flags(newfolio, folio);
-}
-EXPORT_SYMBOL(folio_migrate_copy);
-
 /************************************************************
  *                    Migration functions
  ***********************************************************/
-- 
2.27.0


