* [PATCH 0/4] mm/damon/paddr: simplify page level access re-check for pageout
From: SeongJae Park @ 2024-04-29 22:44 UTC
  To: Andrew Morton; +Cc: SeongJae Park, damon, linux-mm, linux-kernel

The 'pageout' DAMOS action implementation of 'paddr' asks
reclaim_pages() to do the page level access check again.  But the user
can ask 'paddr' to do the page level access check on its own, using a
DAMOS filter of the 'young page' type.  Meanwhile, 'paddr' is the only
user of reclaim_pages() that asks for the page level access check.

Make 'paddr' always do the page level access check on its own, and
simplify reclaim_pages() by removing its handling of page level access
check requests.  As a result of the change to reclaim_pages(),
reclaim_folio_list(), which is called by reclaim_pages(), also no
longer needs to do the page level access check.  Simplify that
function, too.
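
For reference, the 'young page' filter that this series builds on can
be installed with the in-kernel DAMOS filter API.  Below is a minimal
sketch, assuming 's' points to a valid struct damos scheme; it uses
only the damos_new_filter() and damos_add_filter() helpers that the
second patch of this series also uses, and the -ENOMEM return is an
assumption about the calling context.

	/*
	 * Sketch: attach a 'young page' type filter to DAMOS scheme 's',
	 * so that 'paddr' itself filters out recently accessed pages
	 * instead of asking reclaim_pages() to check again.
	 */
	struct damos_filter *filter;

	filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true);
	if (!filter)
		return -ENOMEM;	/* filter allocation failed */
	damos_add_filter(s, filter);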

SeongJae Park (4):
  mm/damon/paddr: avoid unnecessary page level access check for pageout
    DAMOS action
  mm/damon/paddr: do page level access check for pageout DAMOS action on
    its own
  mm/vmscan: remove ignore_references argument of reclaim_pages()
  mm/vmscan: remove ignore_references argument of reclaim_folio_list()

 mm/damon/paddr.c | 20 +++++++++++++++++++-
 mm/internal.h    |  2 +-
 mm/madvise.c     |  4 ++--
 mm/vmscan.c      | 12 +++++-------
 4 files changed, 27 insertions(+), 11 deletions(-)


base-commit: 784e2d5fd3231ad7cad0ac907be4bc3db30520c0
-- 
2.39.2



* [PATCH 1/4] mm/damon/paddr: avoid unnecessary page level access check for pageout DAMOS action
From: SeongJae Park @ 2024-04-29 22:44 UTC
  To: Andrew Morton; +Cc: SeongJae Park, damon, linux-mm, linux-kernel

The 'pageout' DAMOS action implementation of 'paddr' asks
reclaim_pages() to do the page level access check.  The user could ask
DAMOS to do the page level access check on its own, using a 'young
page' type DAMOS filter.  In that case, the pageout DAMOS action
unnecessarily asks reclaim_pages() to do the check again.  Ask for the
page level access check only if the scheme does not have such a
filter.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/paddr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 5685ba485097..d5f2f7ddf863 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -244,6 +244,16 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 {
 	unsigned long addr, applied;
 	LIST_HEAD(folio_list);
+	bool ignore_references = false;
+	struct damos_filter *filter;
+
+	/* respect user's page level reference check handling request */
+	damos_for_each_filter(filter, s) {
+		if (filter->type == DAMOS_FILTER_TYPE_YOUNG) {
+			ignore_references = true;
+			break;
+		}
+	}
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
 		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
@@ -265,7 +275,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 put_folio:
 		folio_put(folio);
 	}
-	applied = reclaim_pages(&folio_list, false);
+	applied = reclaim_pages(&folio_list, ignore_references);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
-- 
2.39.2



* [PATCH 2/4] mm/damon/paddr: do page level access check for pageout DAMOS action on its own
From: SeongJae Park @ 2024-04-29 22:44 UTC
  To: Andrew Morton; +Cc: SeongJae Park, damon, linux-mm, linux-kernel

The 'pageout' DAMOS action implementation of the 'paddr' DAMON
operations set asks reclaim_pages() to do the page level access check
if the user has not asked DAMOS to do it on its own.  Simplify the
logic by making 'paddr' always do the check itself: if the scheme has
no 'young page' filter, install one for the duration of the pageout
and destroy it afterwards, so the user-visible filter set is left
unchanged.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/paddr.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index d5f2f7ddf863..974edef1740d 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -244,16 +244,22 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 {
 	unsigned long addr, applied;
 	LIST_HEAD(folio_list);
-	bool ignore_references = false;
+	bool install_young_filter = true;
 	struct damos_filter *filter;
 
-	/* respect user's page level reference check handling request */
+	/* check access in page level again by default */
 	damos_for_each_filter(filter, s) {
 		if (filter->type == DAMOS_FILTER_TYPE_YOUNG) {
-			ignore_references = true;
+			install_young_filter = false;
 			break;
 		}
 	}
+	if (install_young_filter) {
+		filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true);
+		if (!filter)
+			return 0;
+		damos_add_filter(s, filter);
+	}
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
 		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
@@ -275,7 +281,9 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 put_folio:
 		folio_put(folio);
 	}
-	applied = reclaim_pages(&folio_list, ignore_references);
+	if (install_young_filter)
+		damos_destroy_filter(filter);
+	applied = reclaim_pages(&folio_list, true);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
-- 
2.39.2



* [PATCH 3/4] mm/vmscan: remove ignore_references argument of reclaim_pages()
From: SeongJae Park @ 2024-04-29 22:44 UTC
  To: Andrew Morton; +Cc: SeongJae Park, damon, linux-mm, linux-kernel

All reclaim_pages() callers now set the 'ignore_references' parameter
to 'true'.  In other words, the parameter is no longer really used.
Remove the argument to simplify the code.
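
For illustration, a caller after this change only passes the folio
list.  A trivial usage sketch follows; the folio isolation step is
elided:

	LIST_HEAD(folio_list);
	unsigned long nr_reclaimed;

	/* ... isolate the folios to reclaim onto folio_list ... */
	nr_reclaimed = reclaim_pages(&folio_list);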

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/damon/paddr.c | 2 +-
 mm/internal.h    | 2 +-
 mm/madvise.c     | 4 ++--
 mm/vmscan.c      | 6 +++---
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 974edef1740d..18797c1b419b 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -283,7 +283,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 	}
 	if (install_young_filter)
 		damos_destroy_filter(filter);
-	applied = reclaim_pages(&folio_list, true);
+	applied = reclaim_pages(&folio_list);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
diff --git a/mm/internal.h b/mm/internal.h
index c5552d35d995..b2c75b12014e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1052,7 +1052,7 @@ extern unsigned long  __must_check vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long);
 
 extern void set_pageblock_order(void);
-unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references);
+unsigned long reclaim_pages(struct list_head *folio_list);
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 					    struct list_head *folio_list);
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
diff --git a/mm/madvise.c b/mm/madvise.c
index c49fef453491..a77893462b92 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -423,7 +423,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
-			reclaim_pages(&folio_list, true);
+			reclaim_pages(&folio_list);
 		return 0;
 	}
 
@@ -547,7 +547,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		pte_unmap_unlock(start_pte, ptl);
 	}
 	if (pageout)
-		reclaim_pages(&folio_list, true);
+		reclaim_pages(&folio_list);
 	cond_resched();
 
 	return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49bd94423961..fc9dd9a24739 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2150,7 +2150,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references)
+unsigned long reclaim_pages(struct list_head *folio_list)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
@@ -2173,11 +2173,11 @@ unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references
 		}
 
 		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
-						   ignore_references);
+						   true);
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references);
+	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), true);
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-- 
2.39.2



* [PATCH 4/4] mm/vmscan: remove ignore_references argument of reclaim_folio_list()
From: SeongJae Park @ 2024-04-29 22:44 UTC
  To: Andrew Morton; +Cc: SeongJae Park, linux-mm, linux-kernel

All reclaim_folio_list() callers now pass 'true' for the
'ignore_references' parameter.  In other words, the parameter is no
longer really used.  Simplify the code by removing the parameter.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/vmscan.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index fc9dd9a24739..6981a71c8ef0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2126,8 +2126,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 }
 
 static unsigned int reclaim_folio_list(struct list_head *folio_list,
-				      struct pglist_data *pgdat,
-				      bool ignore_references)
+				      struct pglist_data *pgdat)
 {
 	struct reclaim_stat dummy_stat;
 	unsigned int nr_reclaimed;
@@ -2140,7 +2139,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, ignore_references);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, true);
 	while (!list_empty(folio_list)) {
 		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
@@ -2172,12 +2171,11 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 			continue;
 		}
 
-		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
-						   true);
+		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), true);
+	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-- 
2.39.2


