From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Gabriele Monaco <gmonaco@redhat.com>,
linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
linux-mm@kvack.org
Cc: Marco Elver <elver@google.com>, Ingo Molnar <mingo@kernel.org>,
"Paul E. McKenney" <paulmck@kernel.org>,
Shuah Khan <shuah@kernel.org>
Subject: Re: [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
Date: Thu, 13 Feb 2025 09:56:17 -0500
Message-ID: <fe1858c9-0cbd-4f76-aa2a-b30b2a3f3cbd@efficios.com>
In-Reply-To: <20250210153253.460471-2-gmonaco@redhat.com>
On 2025-02-10 10:32, Gabriele Monaco wrote:
> From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
>
Peter, Ingo, this patch has been ready for inclusion for a while. The
rest of this series does not seem to be quite ready yet, but can we at
least merge this patch into tip?
Thanks,
Mathieu
> When a process reduces its number of threads or clears bits in its CPU
> affinity mask, the mm_cid allocation should eventually converge towards
> smaller values.
>
> However, the change introduced by:
>
> commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
> IDs for intermittent workloads")
>
> adds a per-mm/CPU recent_cid which is never unset unless a thread
> migrates.
>
> This is a tradeoff between:
>
> A) Preserving cache locality after a transition from many threads to few
> threads, or after reducing the Hamming weight of the allowed CPU mask.
>
> B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
> easy to document and understand.
>
> C) Allowing applications to eventually react to mm_cid compaction after
> reduction of the nr threads or allowed CPU mask, making the tracking
> of mm_cid compaction easier by shrinking it back towards 0 or not.
>
> D) Making sure applications that periodically reduce and then increase
> again the nr threads or allowed CPU mask still benefit from good
> cache locality with mm_cid.
>
> Introduce the following changes:
>
> * After shrinking the number of threads or reducing the number of
> allowed CPUs, reduce the value of max_nr_cid so expansion of CID
> allocation will preserve cache locality if the number of threads or
> allowed CPUs increase again.
>
> * Only re-use a recent_cid if it is within the max_nr_cid upper bound,
> else find the first available CID.
>
> Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Marco Elver <elver@google.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Tested-by: Gabriele Monaco <gmonaco@redhat.com>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/mm_types.h | 7 ++++---
> kernel/sched/sched.h | 25 ++++++++++++++++++++++---
> 2 files changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6b27db7f94963..0234f14f2aa6b 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -875,10 +875,11 @@ struct mm_struct {
> */
> unsigned int nr_cpus_allowed;
> /**
> - * @max_nr_cid: Maximum number of concurrency IDs allocated.
> + * @max_nr_cid: Maximum number of allowed concurrency
> + * IDs allocated.
> *
> - * Track the highest number of concurrency IDs allocated for the
> - * mm.
> + * Track the highest number of allowed concurrency IDs
> + * allocated for the mm.
> */
> atomic_t max_nr_cid;
> /**
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 38e0e323dda26..606c96b74ebfa 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3698,10 +3698,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
> {
> struct cpumask *cidmask = mm_cidmask(mm);
> struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
> - int cid = __this_cpu_read(pcpu_cid->recent_cid);
> + int cid, max_nr_cid, allowed_max_nr_cid;
>
> + /*
> + * After shrinking the number of threads or reducing the number
> + * of allowed cpus, reduce the value of max_nr_cid so expansion
> + * of cid allocation will preserve cache locality if the number
> + * of threads or allowed cpus increase again.
> + */
> + max_nr_cid = atomic_read(&mm->max_nr_cid);
> + while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
> + atomic_read(&mm->mm_users))),
> + max_nr_cid > allowed_max_nr_cid) {
> + /* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
> + if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
> + max_nr_cid = allowed_max_nr_cid;
> + break;
> + }
> + }
> /* Try to re-use recent cid. This improves cache locality. */
> - if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
> + cid = __this_cpu_read(pcpu_cid->recent_cid);
> + if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
> + !cpumask_test_and_set_cpu(cid, cidmask))
> return cid;
> /*
> * Expand cid allocation if the maximum number of concurrency
> @@ -3709,8 +3727,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
> * and number of threads. Expanding cid allocation as much as
> * possible improves cache locality.
> */
> - cid = atomic_read(&mm->max_nr_cid);
> + cid = max_nr_cid;
> while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
> + /* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
> if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
> continue;
> if (!cpumask_test_and_set_cpu(cid, cidmask))
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com