From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org,
ying.huang@intel.com, vernhao@tencent.com,
mgorman@techsingularity.net, hughd@google.com,
willy@infradead.org, david@redhat.com, peterz@infradead.org,
luto@kernel.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v9 rebase on mm-unstable 3/8] mm/rmap: recognize read-only tlb entries during batched tlb flush
Date: Thu, 18 Apr 2024 15:15:31 +0900
Message-ID: <20240418061536.11645-4-byungchul@sk.com>
In-Reply-To: <20240418061536.11645-1-byungchul@sk.com>

No functional change. This is preparation for the migrc mechanism,
which needs to recognize read-only TLB entries and handle them
differently. The newly introduced API, fold_ubc(), will be used by the
migrc mechanism.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
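A minimal sketch of the intended call pattern, for reviewers. The migrc
side is not part of this patch, so the consumer below is hypothetical;
only fold_ubc() and current->tlb_ubc{,_ro} come from this series:

	/*
	 * Hypothetical deferred-flush consumer, for illustration only.
	 * Read-only entries accumulated in current->tlb_ubc_ro can be
	 * folded into a private batch and flushed later, while writable
	 * entries keep going through try_to_unmap_flush() as before.
	 */
	static struct tlbflush_unmap_batch deferred_ubc;

	static void defer_ro_tlb_flush(void)
	{
		/* Fold current's read-only batch into ours; src is reset. */
		fold_ubc(&deferred_ubc, &current->tlb_ubc_ro);
	}
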
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 31 ++++++++++++++++++++++++++++++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4118b3f959c3..f9f8091f354f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..b34d9e627132 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..c37ff1648cf1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -675,13 +699,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval) || writable)
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;
 
--
2.17.1
Thread overview: 14+ messages
2024-04-18 6:15 [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: " Byungchul Park
2024-04-18 6:15 ` Byungchul Park [this message]
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy Byungchul Park
2024-04-18 6:15 ` [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used Byungchul Park
2024-04-18 20:17 ` [PATCH v9 rebase on mm-unstable 0/8] Reduce tlb and interrupt numbers over 90% by improving folio migration Andrew Morton
2024-04-19 6:02 ` Byungchul Park
2024-04-19 6:06 ` Huang, Ying
2024-04-19 6:21 ` Byungchul Park
2024-05-09 7:42 ` Byungchul Park