* [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability
@ 2025-02-10 15:32 Gabriele Monaco
  2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-10 15:32 UTC (permalink / raw)
  To: linux-kernel, Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar,
	Paul E. McKenney, Shuah Khan
  Cc: Gabriele Monaco

This patchset moves the task_mm_cid_work to a preemptible and migratable
context. This reduces the impact of this work on the scheduling latency
of real-time tasks.
The change also makes the recurrence of the work more predictable.
We further add optimisations and fixes to make sure task_mm_cid_work
works as intended.

The behaviour causing latency was introduced in commit 223baf9d17f2
("sched: Fix performance regression introduced by mm_cid") which
introduced a task work tied to the scheduler tick.
That approach presents two possible issues:
* the task work runs before returning to userspace and, in effect, adds
  scheduling latency (of an order of magnitude that is significant under
  PREEMPT_RT)
* periodic tasks with short runtimes are less likely to be running during
  the tick, hence they might never run the task work at all

Patch 1 allows the mm_cids to be actually compacted when a process
reduces its number of threads; this was previously not the case because
the same mm_cids were reused to improve cache locality. More details in [4].

Patch 2 contains the main changes: it removes the task_work on the
scheduler tick and uses a delayed_work instead.
Additionally, we return from the work immediately if we see that no
mm_cid is actually active, which can happen for processes that sleep for
a long time or that have exited but whose mm has not been freed yet.
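
In short, the new flow looks roughly like the following (condensed from
patch 2, see the diff there for the exact code):

	void task_mm_cid_work(struct work_struct *work)
	{
		struct delayed_work *dwork = to_delayed_work(work);
		struct mm_struct *mm = container_of(dwork, struct mm_struct,
						    mm_cid_work);

		/* Compact only if some mm_cid is in use, otherwise just re-arm. */
		if (!cpumask_empty(mm_cidmask(mm))) {
			/* clear old/unused mm_cids as before (see patch 2) */
		}
		schedule_delayed_work(dwork,
				      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	}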

Patch 3 adds a selftest to validate the functionality of the
task_mm_cid_work (i.e. to compact the mm_cids). The test fails if patch
1 is not applied and is flaky without patch 2. We expect it to always
pass with the entire patchset applied.

Changes since V5:
* Punctuation

Changes since V4 [1]:
* Fixes on the selftest
    * Polished memory allocation and cleanup
    * Handle the test failure in main

Changes since V3 [2]:
* Fixes on the selftest
    * Minor style issues in comments and indentation
    * Use of perror where possible
    * Add a barrier to align threads execution
    * Improve test failure and error handling

Changes since V2 [3]:
* Change the order of the patches
* Merge patches changing the main delayed_work logic
* Improved self-test to spawn 1 less thread and use the main one instead

Changes since V1 [4]:
* Re-arm the delayed_work at each invocation
* Cancel the work synchronously at mmdrop
* Remove next scan fields and completely rely on the delayed_work
* Shrink mm_cid allocation with nr thread/affinity (Mathieu Desnoyers)
* Add self test

Overhead comparison in [4]

[1] - https://lore.kernel.org/lkml/20250113074231.61638-4-gmonaco@redhat.com
[2] - https://lore.kernel.org/lkml/20241216130909.240042-1-gmonaco@redhat.com
[3] - https://lore.kernel.org/lkml/20241213095407.271357-1-gmonaco@redhat.com
[4] - https://lore.kernel.org/lkml/20241205083110.180134-2-gmonaco@redhat.com

To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Peter Zijlstra <peterz@infradead.org>
To: Ingo Molnar <mingo@kernel.org>
To: Paul E. McKenney <paulmck@kernel.org>
To: Shuah Khan <shuah@kernel.org>

Gabriele Monaco (2):
  sched: Move task_mm_cid_work to mm delayed work
  rseq/selftests: Add test for mm_cid compaction

Mathieu Desnoyers (1):
  sched: Compact RSEQ concurrency IDs with reduced threads and affinity

 include/linux/mm_types.h                      |  23 +-
 include/linux/sched.h                         |   1 -
 kernel/sched/core.c                           |  66 +-----
 kernel/sched/sched.h                          |  32 ++-
 tools/testing/selftests/rseq/.gitignore       |   1 +
 tools/testing/selftests/rseq/Makefile         |   2 +-
 .../selftests/rseq/mm_cid_compaction_test.c   | 200 ++++++++++++++++++
 7 files changed, 246 insertions(+), 79 deletions(-)
 create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c


base-commit: a64dcfb451e254085a7daee5fe51bf22959d52d3
-- 
2.48.1



* [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2025-02-10 15:32 [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
@ 2025-02-10 15:32 ` Gabriele Monaco
  2025-02-13 14:56   ` Mathieu Desnoyers
  2025-02-20 15:08   ` [tip: sched/urgent] " tip-bot2 for Mathieu Desnoyers
  2025-02-10 15:32 ` [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-10 15:32 UTC (permalink / raw)
  To: linux-kernel, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	linux-mm
  Cc: Mathieu Desnoyers, Marco Elver, Ingo Molnar, Gabriele Monaco,
	Paul E. McKenney, Shuah Khan

From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid allocation should eventually converge towards
smaller values.

However, the change introduced by:

commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
IDs for intermittent workloads")

adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.

This is a tradeoff between:

A) Preserving cache locality after a transition from many threads to few
   threads, or after reducing the hamming weight of the allowed CPU mask.

B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
   easy to document and understand.

C) Allowing applications to eventually react to mm_cid compaction after
   reduction of the nr threads or allowed CPU mask, making the tracking
   of mm_cid compaction easier by shrinking it back towards 0 or not.

D) Making sure applications that periodically reduce and then increase
   again the nr threads or allowed CPU mask still benefit from good
   cache locality with mm_cid.

Introduce the following changes:

* After shrinking the number of threads or reducing the number of
  allowed CPUs, reduce the value of max_nr_cid so expansion of CID
  allocation will preserve cache locality if the number of threads or
  allowed CPUs increase again.

* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
  else find the first available CID.
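
For example (illustrative numbers): on an 8-CPU system, a process that
shrinks from 16 threads down to 4 ends up with
allowed_max_nr_cid = min(nr_cpus_allowed, mm_users) = min(8, 4) = 4, so
max_nr_cid is clamped down to 4; a recent_cid >= 4 is no longer reused
and the allocation falls back to the first available CID below that bound.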

Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Marco Elver <elver@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Tested-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h |  7 ++++---
 kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6b27db7f94963..0234f14f2aa6b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -875,10 +875,11 @@ struct mm_struct {
 		 */
 		unsigned int nr_cpus_allowed;
 		/**
-		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
+		 * @max_nr_cid: Maximum number of allowed concurrency
+		 *              IDs allocated.
 		 *
-		 * Track the highest number of concurrency IDs allocated for the
-		 * mm.
+		 * Track the highest number of allowed concurrency IDs
+		 * allocated for the mm.
 		 */
 		atomic_t max_nr_cid;
 		/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 38e0e323dda26..606c96b74ebfa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3698,10 +3698,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
 	struct cpumask *cidmask = mm_cidmask(mm);
 	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-	int cid = __this_cpu_read(pcpu_cid->recent_cid);
+	int cid, max_nr_cid, allowed_max_nr_cid;
 
+	/*
+	 * After shrinking the number of threads or reducing the number
+	 * of allowed cpus, reduce the value of max_nr_cid so expansion
+	 * of cid allocation will preserve cache locality if the number
+	 * of threads or allowed cpus increase again.
+	 */
+	max_nr_cid = atomic_read(&mm->max_nr_cid);
+	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+					   atomic_read(&mm->mm_users))),
+	       max_nr_cid > allowed_max_nr_cid) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+			max_nr_cid = allowed_max_nr_cid;
+			break;
+		}
+	}
 	/* Try to re-use recent cid. This improves cache locality. */
-	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+	cid = __this_cpu_read(pcpu_cid->recent_cid);
+	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+	    !cpumask_test_and_set_cpu(cid, cidmask))
 		return cid;
 	/*
 	 * Expand cid allocation if the maximum number of concurrency
@@ -3709,8 +3727,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 	 * and number of threads. Expanding cid allocation as much as
 	 * possible improves cache locality.
 	 */
-	cid = atomic_read(&mm->max_nr_cid);
+	cid = max_nr_cid;
 	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
 		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
 			continue;
 		if (!cpumask_test_and_set_cpu(cid, cidmask))
-- 
2.48.1



* [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-10 15:32 [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
  2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
@ 2025-02-10 15:32 ` Gabriele Monaco
  2025-02-13  6:52   ` kernel test robot
  2025-02-10 15:32 ` [PATCH v6 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
  2025-02-10 16:09 ` [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Mathieu Desnoyers
  3 siblings, 1 reply; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-10 15:32 UTC (permalink / raw)
  To: linux-kernel, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	linux-mm
  Cc: Gabriele Monaco, Mathieu Desnoyers, Ingo Molnar, Paul E. McKenney,
	Shuah Khan

Currently, the task_mm_cid_work function is called in a task work
triggered by the scheduler tick to periodically compact the mm_cids of
each process. This can delay the execution of the corresponding thread
for the entire duration of the function, negatively affecting the
response time of real-time tasks. In practice, we observe
task_mm_cid_work increasing the latency by 30-35us on a 128-core system;
this order of magnitude is significant under PREEMPT_RT.

Run the task_mm_cid_work in a new delayed work connected to the
mm_struct rather than in the task context before returning to
userspace.

This delayed work is initialised while allocating the mm and disabled
before freeing it; its execution is no longer triggered by scheduler
ticks but runs periodically based on the defined MM_CID_SCAN_DELAY.

The main advantage of this change is that the function can be offloaded
to a different CPU and even preempted by RT tasks.

Moreover, this new behaviour is more predictable for periodic tasks with
short runtimes, which may rarely be running during a scheduler tick.
Now, the work is always scheduled with the same periodicity for each mm
(the periodicity is not guaranteed due to interference from other tasks,
but mm_cid compaction is mostly best effort anyway).

To avoid excessively increased runtime, we quickly return from the
function if there is no work to be done (i.e. no mm_cid is allocated).
This is helpful for tasks that sleep for a long time, but also for
terminated tasks: since we no longer follow the process state, the
function keeps running after a process terminates but before its mm is
freed.

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h | 16 ++++++----
 include/linux/sched.h    |  1 -
 kernel/sched/core.c      | 66 +++++-----------------------------------
 kernel/sched/sched.h     |  7 -----
 4 files changed, 18 insertions(+), 72 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..3aeadb519cac5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -861,12 +861,6 @@ struct mm_struct {
 		 * runqueue locks.
 		 */
 		struct mm_cid __percpu *pcpu_cid;
-		/*
-		 * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
-		 *
-		 * When the next mm_cid scan is due (in jiffies).
-		 */
-		unsigned long mm_cid_next_scan;
 		/**
 		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
 		 *
@@ -889,6 +883,7 @@ struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		struct delayed_work mm_cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1180,11 +1175,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
 
 #ifdef CONFIG_SCHED_MM_CID
 
+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
+#define MM_CID_SCAN_DELAY	100			/* 100ms */
+
 enum mm_cid_state {
 	MM_CID_UNSET = -1U,		/* Unset state has lazy_put flag set. */
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1257,12 +1257,16 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
+	schedule_delayed_work(&mm->mm_cid_work,
+			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_delayed_work_sync(&mm->mm_cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..515b15f946cac 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,7 +1397,6 @@ struct task_struct {
 	int				last_mm_cid;	/* Most recent cid in mm */
 	int				migrate_from_cpu;
 	int				mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head		cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 165c90ba64ea9..c65003ab8c55b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4524,7 +4524,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->wake_entry.u_flags = CSD_TYPE_TTWU;
 	p->migration_pending = NULL;
 #endif
-	init_sched_mm_cid(p);
 }
 
 DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
@@ -5662,7 +5661,6 @@ void sched_tick(void)
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10528,38 +10526,17 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
-	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
+	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
-	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	if (!old_scan) {
-		unsigned long res;
-
-		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-		if (res != old_scan)
-			old_scan = res;
-		else
-			old_scan = next_scan;
-	}
-	if (time_before(now, old_scan))
-		return;
-	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
 	cidmask = mm_cidmask(mm);
+	/* Nothing to clear for now */
+	if (cpumask_empty(cidmask))
+		goto out;
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_old(mm, cpu);
@@ -10570,35 +10547,8 @@ static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-}
-
-void init_sched_mm_cid(struct task_struct *t)
-{
-	struct mm_struct *mm = t->mm;
-	int mm_users = 0;
-
-	if (mm) {
-		mm_users = atomic_read(&mm->mm_users);
-		if (mm_users == 1)
-			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
-}
-
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-{
-	struct callback_head *work = &curr->cid_work;
-	unsigned long now = jiffies;
-
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
-		return;
-	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-		return;
-
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME);
+out:
+	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 606c96b74ebfa..fc613d9090bed 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3622,16 +3622,11 @@ extern void sched_dynamic_update(int mode);
 
 #ifdef CONFIG_SCHED_MM_CID
 
-#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-#define MM_CID_SCAN_DELAY	100			/* 100ms */
-
 extern raw_spinlock_t cid_lock;
 extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
 {
@@ -3899,8 +3894,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */
 
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
-- 
2.48.1



* [PATCH v6 3/3] rseq/selftests: Add test for mm_cid compaction
  2025-02-10 15:32 [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
  2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
  2025-02-10 15:32 ` [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
@ 2025-02-10 15:32 ` Gabriele Monaco
  2025-02-10 16:09 ` [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Mathieu Desnoyers
  3 siblings, 0 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-10 15:32 UTC (permalink / raw)
  To: linux-kernel, Mathieu Desnoyers, Peter Zijlstra, Paul E. McKenney,
	Shuah Khan, linux-kselftest
  Cc: Gabriele Monaco, Ingo Molnar

A task in the kernel (task_mm_cid_work) runs somewhat periodically to
compact the mm_cids of each process. Add a test to validate that it runs
correctly and in a timely manner.

The test spawns 1 thread pinned to each CPU, then each thread, including
the main one, runs in short bursts for some time. During this period, the
mm_cids should span all values between 0 and nproc.

At the end of this phase, a thread with a high enough mm_cid (>= nproc/2)
is selected to be the new leader; all other threads terminate.

After some time, the only remaining thread should see 0 as its mm_cid; if
that doesn't happen, the compaction mechanism didn't work and the test
fails.

The test never fails if only 1 core is available, in which case we cannot
test anything, as the only available mm_cid is 0.

Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 tools/testing/selftests/rseq/.gitignore       |   1 +
 tools/testing/selftests/rseq/Makefile         |   2 +-
 .../selftests/rseq/mm_cid_compaction_test.c   | 200 ++++++++++++++++++
 3 files changed, 202 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c

diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 16496de5f6ce4..2c89f97e4f737 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -3,6 +3,7 @@ basic_percpu_ops_test
 basic_percpu_ops_mm_cid_test
 basic_test
 basic_rseq_op_test
+mm_cid_compaction_test
 param_test
 param_test_benchmark
 param_test_compare_twice
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 5a3432fceb586..ce1b38f46a355 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
 
 TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
 		param_test_benchmark param_test_compare_twice param_test_mm_cid \
-		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
+		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
 
 TEST_GEN_PROGS_EXTENDED = librseq.so
 
diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
new file mode 100644
index 0000000000000..7ddde3b657dd6
--- /dev/null
+++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: LGPL-2.1
+#define _GNU_SOURCE
+#include <assert.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stddef.h>
+
+#include "../kselftest.h"
+#include "rseq.h"
+
+#define VERBOSE 0
+#define printf_verbose(fmt, ...)                    \
+	do {                                        \
+		if (VERBOSE)                        \
+			printf(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+/* 0.5 s */
+#define RUNNER_PERIOD 500000
+/* Number of runs before we terminate or get the token */
+#define THREAD_RUNS 5
+
+/*
+ * Number of times we check that the mm_cid were compacted.
+ * Checks are repeated every RUNNER_PERIOD.
+ */
+#define MM_CID_COMPACT_TIMEOUT 10
+
+struct thread_args {
+	int cpu;
+	int num_cpus;
+	pthread_mutex_t *token;
+	pthread_barrier_t *barrier;
+	pthread_t *tinfo;
+	struct thread_args *args_head;
+};
+
+static void __noreturn *thread_runner(void *arg)
+{
+	struct thread_args *args = arg;
+	int i, ret, curr_mm_cid;
+	cpu_set_t cpumask;
+
+	CPU_ZERO(&cpumask);
+	CPU_SET(args->cpu, &cpumask);
+	ret = pthread_setaffinity_np(pthread_self(), sizeof(cpumask), &cpumask);
+	if (ret) {
+		errno = ret;
+		perror("Error: failed to set affinity");
+		abort();
+	}
+	pthread_barrier_wait(args->barrier);
+
+	for (i = 0; i < THREAD_RUNS; i++)
+		usleep(RUNNER_PERIOD);
+	curr_mm_cid = rseq_current_mm_cid();
+	/*
+	 * We select one thread with high enough mm_cid to be the new leader.
+	 * All other threads (including the main thread) will terminate.
+	 * After some time, the mm_cid of the only remaining thread should
+	 * converge to 0, if not, the test fails.
+	 */
+	if (curr_mm_cid >= args->num_cpus / 2 &&
+	    !pthread_mutex_trylock(args->token)) {
+		printf_verbose(
+			"cpu%d has mm_cid=%d and will be the new leader.\n",
+			sched_getcpu(), curr_mm_cid);
+		for (i = 0; i < args->num_cpus; i++) {
+			if (args->tinfo[i] == pthread_self())
+				continue;
+			ret = pthread_join(args->tinfo[i], NULL);
+			if (ret) {
+				errno = ret;
+				perror("Error: failed to join thread");
+				abort();
+			}
+		}
+		pthread_barrier_destroy(args->barrier);
+		free(args->tinfo);
+		free(args->token);
+		free(args->barrier);
+		free(args->args_head);
+
+		for (i = 0; i < MM_CID_COMPACT_TIMEOUT; i++) {
+			curr_mm_cid = rseq_current_mm_cid();
+			printf_verbose("run %d: mm_cid=%d on cpu%d.\n", i,
+				       curr_mm_cid, sched_getcpu());
+			if (curr_mm_cid == 0)
+				exit(EXIT_SUCCESS);
+			usleep(RUNNER_PERIOD);
+		}
+		exit(EXIT_FAILURE);
+	}
+	printf_verbose("cpu%d has mm_cid=%d and is going to terminate.\n",
+		       sched_getcpu(), curr_mm_cid);
+	pthread_exit(NULL);
+}
+
+int test_mm_cid_compaction(void)
+{
+	cpu_set_t affinity;
+	int i, j, ret = 0, num_threads;
+	pthread_t *tinfo;
+	pthread_mutex_t *token;
+	pthread_barrier_t *barrier;
+	struct thread_args *args;
+
+	sched_getaffinity(0, sizeof(affinity), &affinity);
+	num_threads = CPU_COUNT(&affinity);
+	tinfo = calloc(num_threads, sizeof(*tinfo));
+	if (!tinfo) {
+		perror("Error: failed to allocate tinfo");
+		return -1;
+	}
+	args = calloc(num_threads, sizeof(*args));
+	if (!args) {
+		perror("Error: failed to allocate args");
+		ret = -1;
+		goto out_free_tinfo;
+	}
+	token = malloc(sizeof(*token));
+	if (!token) {
+		perror("Error: failed to allocate token");
+		ret = -1;
+		goto out_free_args;
+	}
+	barrier = malloc(sizeof(*barrier));
+	if (!barrier) {
+		perror("Error: failed to allocate barrier");
+		ret = -1;
+		goto out_free_token;
+	}
+	if (num_threads == 1) {
+		fprintf(stderr, "Cannot test on a single cpu. "
+				"Skipping mm_cid_compaction test.\n");
+		/* only skipping the test, this is not a failure */
+		goto out_free_barrier;
+	}
+	pthread_mutex_init(token, NULL);
+	ret = pthread_barrier_init(barrier, NULL, num_threads);
+	if (ret) {
+		errno = ret;
+		perror("Error: failed to initialise barrier");
+		goto out_free_barrier;
+	}
+	for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
+		if (!CPU_ISSET(i, &affinity))
+			continue;
+		args[j].num_cpus = num_threads;
+		args[j].tinfo = tinfo;
+		args[j].token = token;
+		args[j].barrier = barrier;
+		args[j].cpu = i;
+		args[j].args_head = args;
+		if (!j) {
+			/* The first thread is the main one */
+			tinfo[0] = pthread_self();
+			++j;
+			continue;
+		}
+		ret = pthread_create(&tinfo[j], NULL, thread_runner, &args[j]);
+		if (ret) {
+			errno = ret;
+			perror("Error: failed to create thread");
+			abort();
+		}
+		++j;
+	}
+	printf_verbose("Started %d threads.\n", num_threads);
+
+	/* Also main thread will terminate if it is not selected as leader */
+	thread_runner(&args[0]);
+
+	/* only reached in case of errors */
+out_free_barrier:
+	free(barrier);
+out_free_token:
+	free(token);
+out_free_args:
+	free(args);
+out_free_tinfo:
+	free(tinfo);
+
+	return ret;
+}
+
+int main(int argc, char **argv)
+{
+	if (!rseq_mm_cid_available()) {
+		fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
+		return -1;
+	}
+	if (test_mm_cid_compaction())
+		return -1;
+	return 0;
+}
-- 
2.48.1



* Re: [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability
  2025-02-10 15:32 [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
                   ` (2 preceding siblings ...)
  2025-02-10 15:32 ` [PATCH v6 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
@ 2025-02-10 16:09 ` Mathieu Desnoyers
  3 siblings, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2025-02-10 16:09 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel, Peter Zijlstra, Ingo Molnar,
	Paul E. McKenney, Shuah Khan

On 2025-02-10 16:32, Gabriele Monaco wrote:
> This patchset moves the task_mm_cid_work to a preemptible and migratable
> context. This reduces the impact of this task to the scheduling latency
> of real time tasks.
> The change makes the recurrence of the task a bit more predictable.
> We also add optimisation and fixes to make sure the task_mm_cid_work
> works as intended.

Ingo, Peter, this series is ready to be pulled into tip.

Thanks!

Mathieu




-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com


* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-10 15:32 ` [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
@ 2025-02-13  6:52   ` kernel test robot
  2025-02-13 13:25     ` Gabriele Monaco
  0 siblings, 1 reply; 16+ messages in thread
From: kernel test robot @ 2025-02-13  6:52 UTC (permalink / raw)
  To: Gabriele Monaco
  Cc: oe-lkp, lkp, Mathieu Desnoyers, linux-mm, linux-kernel, aubrey.li,
	yu.c.chen, Andrew Morton, Ingo Molnar, Peter Zijlstra,
	Gabriele Monaco, Ingo Molnar, Paul E. McKenney, Shuah Khan,
	oliver.sang



Hello,

kernel test robot noticed "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:

commit: 287adf9e9c1fa8a0e2b50ab1a1de3e4572a8ccd2 ("[PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work")
url: https://github.com/intel-lab-lkp/linux/commits/Gabriele-Monaco/sched-Compact-RSEQ-concurrency-IDs-with-reduced-threads-and-affinity/20250210-233547
patch link: https://lore.kernel.org/all/20250210153253.460471-3-gmonaco@redhat.com/
patch subject: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work

in testcase: boot

config: x86_64-randconfig-004-20250211
compiler: clang-19
test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

(please refer to attached dmesg/kmsg for entire log/backtrace)



If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202502131405.1ba0803f-lkp@intel.com


[    2.640924][    T0] ------------[ cut here ]------------
[ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495 __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[    2.642874][    T0] Modules linked in:
[    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted 6.14.0-rc2-00002-g287adf9e9c1f #1
[    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
[ 2.645943][ T0] RIP: 0010:__queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[ 2.646755][ T0] Code: 44 89 fe 4c 89 33 e8 6d 24 19 00 48 83 c4 08 5b 41 5c 41 5d 41 5e 41 5f 5d 31 c0 31 c9 31 ff 31 d2 31 f6 c3 e8 7f 87 25 00 90 <0f> 0b 90 e9 58 fd ff ff e8 71 87 25 00 90 0f 0b 90 e9 80 fd ff ff
All code
========
   0:	44 89 fe             	mov    %r15d,%esi
   3:	4c 89 33             	mov    %r14,(%rbx)
   6:	e8 6d 24 19 00       	call   0x192478
   b:	48 83 c4 08          	add    $0x8,%rsp
   f:	5b                   	pop    %rbx
  10:	41 5c                	pop    %r12
  12:	41 5d                	pop    %r13
  14:	41 5e                	pop    %r14
  16:	41 5f                	pop    %r15
  18:	5d                   	pop    %rbp
  19:	31 c0                	xor    %eax,%eax
  1b:	31 c9                	xor    %ecx,%ecx
  1d:	31 ff                	xor    %edi,%edi
  1f:	31 d2                	xor    %edx,%edx
  21:	31 f6                	xor    %esi,%esi
  23:	c3                   	ret
  24:	e8 7f 87 25 00       	call   0x2587a8
  29:	90                   	nop
  2a:*	0f 0b                	ud2		<-- trapping instruction
  2c:	90                   	nop
  2d:	e9 58 fd ff ff       	jmp    0xfffffffffffffd8a
  32:	e8 71 87 25 00       	call   0x2587a8
  37:	90                   	nop
  38:	0f 0b                	ud2
  3a:	90                   	nop
  3b:	e9 80 fd ff ff       	jmp    0xfffffffffffffdc0

Code starting with the faulting instruction
===========================================
   0:	0f 0b                	ud2
   2:	90                   	nop
   3:	e9 58 fd ff ff       	jmp    0xfffffffffffffd60
   8:	e8 71 87 25 00       	call   0x25877e
   d:	90                   	nop
   e:	0f 0b                	ud2
  10:	90                   	nop
  11:	e9 80 fd ff ff       	jmp    0xfffffffffffffd96
[    2.649301][    T0] RSP: 0000:ffffffffab007d80 EFLAGS: 00010046
[    2.650081][    T0] RAX: 0000000000000000 RBX: ffff888100090b98 RCX: 0000000000000000
[    2.651084][    T0] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[    2.652078][    T0] RBP: ffffffffab007db0 R08: 0000000000000000 R09: 0000000000000000
[    2.653077][    T0] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[    2.654101][    T0] R13: 0000000000000040 R14: 0000000000000019 R15: 0000000000000040
[    2.655112][    T0] FS:  0000000000000000(0000) GS:ffff8883af200000(0000) knlGS:0000000000000000
[    2.656263][    T0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    2.657103][    T0] CR2: ffff88843ffff000 CR3: 00000001d3617000 CR4: 00000000000000b0
[    2.658128][    T0] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    2.659147][    T0] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[    2.660143][    T0] Call Trace:
[    2.660553][    T0]  <TASK>
[ 2.660934][ T0] ? show_regs (arch/x86/kernel/dumpstack.c:479) 
[ 2.661524][ T0] ? __warn (kernel/panic.c:748) 
[ 2.662065][ T0] ? __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[ 2.662796][ T0] ? __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[ 2.663535][ T0] ? report_bug (lib/bug.c:?) 
[ 2.664119][ T0] ? __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[ 2.664836][ T0] ? handle_bug (arch/x86/kernel/traps.c:285) 
[ 2.665404][ T0] ? exc_invalid_op (arch/x86/kernel/traps.c:309 (discriminator 1)) 
[ 2.665848][ T0] ? asm_exc_invalid_op (arch/x86/include/asm/idtentry.h:621) 
[ 2.666292][ T0] ? __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
[ 2.666762][ T0] queue_delayed_work_on (kernel/workqueue.c:?) 
[ 2.667215][ T0] mm_init (include/linux/workqueue.h:? include/linux/workqueue.h:817 include/linux/mm_types.h:1261 kernel/fork.c:1310) 
[ 2.667575][ T0] mm_alloc (kernel/fork.c:?) 
[ 2.667916][ T0] ? x86_64_start_reservations (usercopy_64.c:?) 
[ 2.668400][ T0] poking_init (arch/x86/mm/init.c:822) 
[ 2.668777][ T0] ? x86_64_start_reservations (usercopy_64.c:?) 
[ 2.669266][ T0] start_kernel (init/main.c:959) 
[ 2.669671][ T0] x86_64_start_reservations (usercopy_64.c:?) 
[ 2.670140][ T0] x86_64_start_kernel (arch/x86/kernel/head64.c:445 (discriminator 2)) 
[ 2.670574][ T0] common_startup_64 (arch/x86/kernel/head_64.S:421) 
[    2.671018][    T0]  </TASK>
[    2.671271][    T0] irq event stamp: 0
[ 2.671595][ T0] hardirqs last enabled at (0): 0x0 
[ 2.672192][ T0] hardirqs last disabled at (0): 0x0 
[ 2.672789][ T0] softirqs last enabled at (0): 0x0 
[ 2.673386][ T0] softirqs last disabled at (0): 0x0 
[    2.674005][    T0] ---[ end trace 0000000000000000 ]---


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20250213/202502131405.1ba0803f-lkp@intel.com



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13  6:52   ` kernel test robot
@ 2025-02-13 13:25     ` Gabriele Monaco
  2025-02-13 13:55       ` Mathieu Desnoyers
                         ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-13 13:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: Mathieu Desnoyers, linux-mm, linux-kernel, aubrey.li, yu.c.chen,
	Andrew Morton, Ingo Molnar, Peter Zijlstra, Ingo Molnar,
	Paul E. McKenney, Shuah Khan

On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
> kernel test robot noticed
> "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
> 
> [    2.640924][    T0] ------------[ cut here ]------------
> [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495
> __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9)) 
> [    2.642874][    T0] Modules linked in:
> [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted
> 6.14.0-rc2-00002-g287adf9e9c1f #1
> [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
> PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
> (kernel/workqueue.c:2495 (discriminator 9)) 

There seem to be major problems with this configuration. I'm trying to
understand what's wrong but, for the time being, this patchset is not
ready for inclusion.

Gabriele



* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 13:25     ` Gabriele Monaco
@ 2025-02-13 13:55       ` Mathieu Desnoyers
  2025-02-13 14:37         ` Gabriele Monaco
  2025-02-13 14:52       ` Mathieu Desnoyers
  2025-02-13 17:31       ` Mathieu Desnoyers
  2 siblings, 1 reply; 16+ messages in thread
From: Mathieu Desnoyers @ 2025-02-13 13:55 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Shuah Khan

On 2025-02-13 08:25, Gabriele Monaco wrote:
> On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
>> kernel test robot noticed
>> "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
>>
>> [    2.640924][    T0] ------------[ cut here ]------------
>> [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495
>> __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
>> [    2.642874][    T0] Modules linked in:
>> [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted
>> 6.14.0-rc2-00002-g287adf9e9c1f #1
>> [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
>> PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
>> [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
>> (kernel/workqueue.c:2495 (discriminator 9))
> 
> There seem to be major problems with this configuration, I'm trying to
> understand what's wrong but, for the time being, this patchset is not
> ready for inclusion.

I think there is an issue with the order of init functions at boot.

poking_init() calls mm_alloc(), which ends up calling mm_init().

The WARN_ON() is about a NULL wq pointer, which I suspect happens
if poking_init() is called before workqueue_init_early(), which
allocates system_wq.

Indeed, in start_kernel(), poking_init() is called before
workqueue_init_early().

I'm not sure what the init order dependencies across subsystems are here.
There is the following order in start_kernel():

[...]
         mm_core_init();
         poking_init();
         ftrace_init();

         /* trace_printk can be enabled here */
         early_trace_init();

         /*
          * Set up the scheduler prior starting any interrupts (such as the
          * timer interrupt). Full topology setup happens at smp_init()
          * time - but meanwhile we still have a functioning scheduler.
          */
         sched_init();

         if (WARN(!irqs_disabled(),
                  "Interrupts were enabled *very* early, fixing it\n"))
                 local_irq_disable();
         radix_tree_init();
         maple_tree_init();

         /*
          * Set up housekeeping before setting up workqueues to allow the unbound
          * workqueue to take non-housekeeping into account.
          */
         housekeeping_init();

         /*
          * Allow workqueue creation and work item queueing/cancelling
          * early.  Work item execution depends on kthreads and starts after
          * workqueue_init().
          */
         workqueue_init_early();
[...]

So either we find a way to reorder this, or we make sure poking_init()
does not require the workqueue.
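
A third (untested) option could be to simply not arm the work for mms
allocated before workqueues exist, assuming system_wq stays NULL until
workqueue_init_early() (which is what the WARN suggests), e.g.:

	mm_init_cid(mm, p);
	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
	/*
	 * mms allocated in early boot (e.g. via poking_init()) are not
	 * armed here; they would need to be picked up later, once
	 * workqueues are available.
	 */
	if (system_wq)
		schedule_delayed_work(&mm->mm_cid_work,
				      msecs_to_jiffies(MM_CID_SCAN_DELAY));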

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com


* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 13:55       ` Mathieu Desnoyers
@ 2025-02-13 14:37         ` Gabriele Monaco
  0 siblings, 0 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-13 14:37 UTC (permalink / raw)
  To: Mathieu Desnoyers, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Shuah Khan



On Thu, 2025-02-13 at 08:55 -0500, Mathieu Desnoyers wrote:
> On 2025-02-13 08:25, Gabriele Monaco wrote:
> > On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
> > > kernel test robot noticed
> > > "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
> > > 
> > > [    2.640924][    T0] ------------[ cut here ]------------
> > > [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at
> > > kernel/workqueue.c:2495
> > > __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
> > > [    2.642874][    T0] Modules linked in:
> > > [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not
> > > tainted
> > > 6.14.0-rc2-00002-g287adf9e9c1f #1
> > > [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
> > > PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > > [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
> > > (kernel/workqueue.c:2495 (discriminator 9))
> > 
> > There seem to be major problems with this configuration, I'm trying
> > to
> > understand what's wrong but, for the time being, this patchset is
> > not
> > ready for inclusion.
> 
> I think there is an issue with the order of init functions at boot.
> 
> poking_init() calls mm_alloc(), which ends up calling mm_init().
> 
> The WARN_ON() is about a NULL wq pointer, which I suspect happens
> if poking_init() is called before workqueue_init_early(), which
> allocates system_wq.
> 
> Indeed, in start_kernel(), poking_init() is called before
> workqueue_init_early().
> 
> I'm not sure what are the init order dependencies across subsystems
> here.
> There is the following order in start_kernel():
> 
> [...]
>          mm_core_init();
>          poking_init();
>          ftrace_init();
> 
>          /* trace_printk can be enabled here */
>          early_trace_init();
> 
>          /*
>           * Set up the scheduler prior starting any interrupts (such
> as the
>           * timer interrupt). Full topology setup happens at
> smp_init()
>           * time - but meanwhile we still have a functioning
> scheduler.
>           */
>          sched_init();
> 
>          if (WARN(!irqs_disabled(),
>                   "Interrupts were enabled *very* early, fixing
> it\n"))
>                  local_irq_disable();
>          radix_tree_init();
>          maple_tree_init();
> 
>          /*
>           * Set up housekeeping before setting up workqueues to allow
> the unbound
>           * workqueue to take non-housekeeping into account.
>           */
>          housekeeping_init();
> 
>          /*
>           * Allow workqueue creation and work item
> queueing/cancelling
>           * early.  Work item execution depends on kthreads and
> starts after
>           * workqueue_init().
>           */
>          workqueue_init_early();
> [...]
> 
> So either we find a way to reorder this, or we make sure
> poking_init()
> does not require the workqueue.
> 
> Thanks,
> 
> Mathieu
> 

Nice suggestion! That seems to be the culprit.

From the full dmesg of the failure I've also seen a problem with
disabling the delayed work synchronously, since mmdrop cannot sleep if
we are not in PREEMPT_RT.

I'm trying to come up with a satisfactory solution for both, ideally:
1. the delayed work is not needed in early boot, so we may have a better
place to start it
2. we can cancel the work asynchronously on mmdrop and abort it if the
pcpu_cid is NULL (rough sketch below), but it seems racy; perhaps
there's a better place for that too
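
For point 2, a rough (untested) sketch, assuming the non-sync
disable_delayed_work() variant of what the patch already uses:

	static inline void mm_destroy_cid(struct mm_struct *mm)
	{
		/* Don't sleep here: mmdrop may run where sleeping is not allowed. */
		disable_delayed_work(&mm->mm_cid_work);
		free_percpu(mm->pcpu_cid);
		mm->pcpu_cid = NULL;
	}

combined with an early bail-out at the top of task_mm_cid_work():

	if (!READ_ONCE(mm->pcpu_cid))
		return;	/* mm is being torn down, don't re-arm */

although, as said, a work instance already running could still race with
free_percpu(), so this would need more thought.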

Thanks,
Gabriele



* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 13:25     ` Gabriele Monaco
  2025-02-13 13:55       ` Mathieu Desnoyers
@ 2025-02-13 14:52       ` Mathieu Desnoyers
  2025-02-13 14:54         ` Gabriele Monaco
  2025-02-13 17:31       ` Mathieu Desnoyers
  2 siblings, 1 reply; 16+ messages in thread
From: Mathieu Desnoyers @ 2025-02-13 14:52 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Shuah Khan

On 2025-02-13 08:25, Gabriele Monaco wrote:
> On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
>> kernel test robot noticed
>> "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
>>
>> [    2.640924][    T0] ------------[ cut here ]------------
>> [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495
>> __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
>> [    2.642874][    T0] Modules linked in:
>> [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted
>> 6.14.0-rc2-00002-g287adf9e9c1f #1
>> [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
>> PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
>> [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
>> (kernel/workqueue.c:2495 (discriminator 9))
> 
> There seem to be major problems with this configuration, I'm trying to
> understand what's wrong but, for the time being, this patchset is not
> ready for inclusion.

Patch 1/3 has been ready for a while though. Can we post it separately
for inclusion ?

Thanks,

Mathieu

> 
> Gabriele
> 


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com


* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 14:52       ` Mathieu Desnoyers
@ 2025-02-13 14:54         ` Gabriele Monaco
  0 siblings, 0 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-13 14:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Shuah Khan



On Thu, 2025-02-13 at 09:52 -0500, Mathieu Desnoyers wrote:
> On 2025-02-13 08:25, Gabriele Monaco wrote:
> > On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
> > > kernel test robot noticed
> > > "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
> > > 
> > > [    2.640924][    T0] ------------[ cut here ]------------
> > > [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at
> > > kernel/workqueue.c:2495
> > > __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
> > > [    2.642874][    T0] Modules linked in:
> > > [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not
> > > tainted
> > > 6.14.0-rc2-00002-g287adf9e9c1f #1
> > > [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
> > > PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > > [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
> > > (kernel/workqueue.c:2495 (discriminator 9))
> > 
> > There seem to be major problems with this configuration, I'm trying
> > to
> > understand what's wrong but, for the time being, this patchset is
> > not
> > ready for inclusion.
> 
> Patch 1/3 has been ready for a while though. Can we post it
> separately
> for inclusion ?
> 

I would say so, there's no need to delay it further.

Thanks,
Gabriele



* Re: [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
@ 2025-02-13 14:56   ` Mathieu Desnoyers
  2025-02-18  7:53     ` Peter Zijlstra
  2025-02-20 15:08   ` [tip: sched/urgent] " tip-bot2 for Mathieu Desnoyers
  1 sibling, 1 reply; 16+ messages in thread
From: Mathieu Desnoyers @ 2025-02-13 14:56 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, linux-mm
  Cc: Marco Elver, Ingo Molnar, Paul E. McKenney, Shuah Khan

On 2025-02-10 10:32, Gabriele Monaco wrote:
> From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> 

Peter, Ingo, this patch has been ready for inclusion for a while. The
rest of this series does not seem to be quite ready yet, but can we at
least merge this patch into tip ?

Thanks,

Mathieu



-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 13:25     ` Gabriele Monaco
  2025-02-13 13:55       ` Mathieu Desnoyers
  2025-02-13 14:52       ` Mathieu Desnoyers
@ 2025-02-13 17:31       ` Mathieu Desnoyers
  2025-02-14  6:44         ` Gabriele Monaco
  2 siblings, 1 reply; 16+ messages in thread
From: Mathieu Desnoyers @ 2025-02-13 17:31 UTC (permalink / raw)
  To: Gabriele Monaco, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Ingo Molnar,
	Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Shuah Khan

On 2025-02-13 08:25, Gabriele Monaco wrote:
> On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
>> kernel test robot noticed
>> "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
>>
>> [    2.640924][    T0] ------------[ cut here ]------------
>> [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495
>> __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
>> [    2.642874][    T0] Modules linked in:
>> [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted
>> 6.14.0-rc2-00002-g287adf9e9c1f #1
>> [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
>> PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
>> [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
>> (kernel/workqueue.c:2495 (discriminator 9))
> 
> There seem to be major problems with this configuration, I'm trying to
> understand what's wrong but, for the time being, this patchset is not
> ready for inclusion.

I'm staring at this now, and I'm thinking we could do a simpler change
that would solve your RT issues without having to introduce a dependency
on workqueue.c.

So if the culprit is that task_mm_cid_work() runs for too long on large
many-CPU systems, why not break it up into smaller iterations?

Rather than iterating over "for_each_possible_cpu" in one go, we could
simply break this down into iterations over at most N CPUs per tick, so:

tick #1: iteration on CPUs 0 ..   N - 1
tick #2: iteration on CPUs N .. 2*N - 1
...
circling back to 0 when it reaches the number of possible cpus.

This N value could be configurable, e.g. CONFIG_RSEQ_CID_SCAN_BATCH,
with a sane default. An RT system could decide to make that value lower.

Then all we need to do is remember the last scanned CPU number in the
mm struct, so the next tick picks up from there.
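
For illustration, a minimal sketch of such a batched scan could look
like this (the mm_cid_scan_cpu field and the sched_mm_cid_scan_cpu()
helper are made-up names just for this sketch, not existing symbols):

static void task_mm_cid_work_batched(struct mm_struct *mm)
{
        int cpu = READ_ONCE(mm->mm_cid_scan_cpu);      /* -1 before the first scan */
        unsigned int left = CONFIG_RSEQ_CID_SCAN_BATCH;

        while (left--) {
                cpu = cpumask_next(cpu, cpu_possible_mask);
                if (cpu >= nr_cpu_ids)                  /* circle back to the start */
                        cpu = cpumask_first(cpu_possible_mask);
                sched_mm_cid_scan_cpu(mm, cpu);         /* per-cpu compaction step */
        }
        /* Remember where we stopped so the next tick resumes from there. */
        WRITE_ONCE(mm->mm_cid_scan_cpu, cpu);
}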

The main downside of this approach compared to scheduling delayed
work in a workqueue is that it depends on having the mm be current when
the scheduler tick happens. But perhaps this is something we could fix
in a different way that does not add a dependency on workqueue. I'm not
sure how though.

Thoughts?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

* Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work
  2025-02-13 17:31       ` Mathieu Desnoyers
@ 2025-02-14  6:44         ` Gabriele Monaco
  0 siblings, 0 replies; 16+ messages in thread
From: Gabriele Monaco @ 2025-02-14  6:44 UTC (permalink / raw)
  To: Mathieu Desnoyers, linux-kernel
  Cc: linux-mm, aubrey.li, yu.c.chen, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Paul E. McKenney, Shuah Khan



On Thu, 2025-02-13 at 12:31 -0500, Mathieu Desnoyers wrote:
> On 2025-02-13 08:25, Gabriele Monaco wrote:
> > On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
> > > kernel test robot noticed
> > > "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
> > > 
> > > [    2.640924][    T0] ------------[ cut here ]------------
> > > [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at
> > > kernel/workqueue.c:2495
> > > __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
> > > [    2.642874][    T0] Modules linked in:
> > > [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not
> > > tainted
> > > 6.14.0-rc2-00002-g287adf9e9c1f #1
> > > [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
> > > PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > > [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
> > > (kernel/workqueue.c:2495 (discriminator 9))
> > 
> > There seem to be major problems with this configuration, I'm trying
> > to
> > understand what's wrong but, for the time being, this patchset is
> > not
> > ready for inclusion.
> 
> I'm staring at this now, and I'm thinking we could do a simpler
> change
> that would solve your RT issues without having to introduce a
> dependency
> on workqueue.c.
> 
> So if the culprit is that task_mm_cid_work() runs for too long on
> large
> many-cpus systems, why not break it up into smaller iterations ?
> 
> So rather than iterating on "for_each_possible_cpu", we could simply
> break this down into iteration on at most N cpus, so:
> 
> tick #1: iteration on CPUs 0 ..   N - 1
> tick #2: iteration on CPUs N .. 2*N - 1
> ...
> circling back to 0 when it reaches the number of possible cpus.
> 
> This N value could be configurable, e.g. CONFIG_RSEQ_CID_SCAN_BATCH,
> with a sane default. An RT system could decide to make that value
> lower.
> 
> Then all we need to do is remember which was that last observed cpu
> number in the mm struct, so the next tick picks up from there.
> 
> The main downside of this approach compared to scheduling delayed
> work in a workqueue is that it depends on having the mm be current
> when
> the scheduler tick happens. But perhaps this is something we could
> fix
> in a different way that does not add a dependency on workqueue. I'm
> not
> sure how though.
> 
> Thoughts ?

Mmh, that's indeed neat. What is not so good about this type of task
work is that it is pure latency: it runs right before the task gets to
run and cannot be interrupted.
The only acceptable latency is a bounded one, and your idea goes in
that direction.

As you mentioned, this will make the compaction of mm_cid even rarer
and will likely have the test in 3/3 fail even more often. I'm not sure
this is necessarily a bad thing, though: since mm_cid compaction is
mainly aesthetic, we could just increase the duration of the test or
even add a busy loop inside it to make the task more likely to run the
compaction.
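
Something as simple as the following would do for the busy loop (the
helper name and duration are arbitrary, this is not taken from the
actual selftest):

#include <time.h>

/* Burn CPU so the scheduler tick keeps hitting this task and the
 * tick-driven compaction work gets a chance to be queued and run. */
static void busy_loop_seconds(int secs)
{
        time_t end = time(NULL) + secs;

        while (time(NULL) < end)
                ;
}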

I gave some thought to this whole thing (don't take it too seriously),
but what I see as essentially flawed in this approach is:
1. task_works are set on tick
2. task_works are run when returning to userspace

1. is the issue with frequency, and 2. can be mitigated by your idea,
but not essentially solved. What if we (also) did:
1+. set this task_work while switching in
2+. run this task_work while switching out to sleep (i.e. no
preemption)

1+. would make sure all threads have this task_work scheduled at some
point (perhaps a bit too often, but we have a periodicity check in
place). 2+. can effectively run the work at a moment when it is not
problematic for the real-time response: on a preemptible kernel, as
soon as a task with higher priority is ready to run, it preempts the
currently running one, so the fact that current is going to sleep
willingly implies there is no higher-priority task ready, and no task
that really cares about RT response is likely to run right after.
Not all tasks ever go to sleep, so we must keep the original TWA_RESUME
in the task_work, especially for long-running or low-priority tasks,
both of which are unlikely to be RT tasks.
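
Very roughly, just to visualize 1+ and 2+ (the hook points, the
mm_cid_work_pending flag and mm_cid_compact_scan() are all made up
here, and the interaction with the existing task_work would need more
thought):

static void mm_cid_hook_switch_in(struct task_struct *next)    /* 1+ */
{
        /* Mark that this thread owes a compaction scan; the existing
         * periodicity check would gate this, and the TWA_RESUME
         * task_work stays as a fallback for tasks that never block. */
        if (next->mm)
                next->mm_cid_work_pending = true;
}

static void mm_cid_hook_switch_out(struct task_struct *prev)   /* 2+ */
{
        /* prev blocking voluntarily implies no higher-priority task is
         * runnable, so scanning now cannot add RT latency. */
        if (prev->mm && prev->mm_cid_work_pending &&
            !task_is_running(prev)) {
                prev->mm_cid_work_pending = false;
                mm_cid_compact_scan(prev->mm);
        }
}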

I'm going to try a patch with this CONFIG_RSEQ_CID_SCAN_BATCH and tune
the test so it passes. In the future we can see if those ideas make
sense and perhaps bring them in.

Thanks,
Gabriele


* Re: [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2025-02-13 14:56   ` Mathieu Desnoyers
@ 2025-02-18  7:53     ` Peter Zijlstra
  0 siblings, 0 replies; 16+ messages in thread
From: Peter Zijlstra @ 2025-02-18  7:53 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: Gabriele Monaco, linux-kernel, Andrew Morton, Ingo Molnar,
	linux-mm, Marco Elver, Ingo Molnar, Paul E. McKenney, Shuah Khan

On Thu, Feb 13, 2025 at 09:56:17AM -0500, Mathieu Desnoyers wrote:
> On 2025-02-10 10:32, Gabriele Monaco wrote:
> > From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > 
> 
> Peter, Ingo, this patch has been ready for inclusion for a while. The
> rest of this series does not seem to be quite ready yet, but can we at
> least merge this patch into tip ?

Done; stuck it in queue/sched/urgent for the robots. If that passes
I'll push it to tip.

* [tip: sched/urgent] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
  2025-02-13 14:56   ` Mathieu Desnoyers
@ 2025-02-20 15:08   ` tip-bot2 for Mathieu Desnoyers
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Mathieu Desnoyers @ 2025-02-20 15:08 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Mathieu Desnoyers, Gabriele Monaco, Peter Zijlstra (Intel), x86,
	linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     02d954c0fdf91845169cdacc7405b120f90afe01
Gitweb:        https://git.kernel.org/tip/02d954c0fdf91845169cdacc7405b120f90afe01
Author:        Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
AuthorDate:    Mon, 10 Feb 2025 16:32:50 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 18 Feb 2025 08:50:36 +01:00

sched: Compact RSEQ concurrency IDs with reduced threads and affinity

When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid allocation should eventually converge towards
smaller values.

However, the change introduced by:

commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
IDs for intermittent workloads")

adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.

This is a tradeoff between:

A) Preserving cache locality after a transition from many threads to few
   threads, or after reducing the hamming weight of the allowed CPU mask.

B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
   easy to document and understand.

C) Allowing applications to eventually react to mm_cid compaction after
   reduction of the nr threads or allowed CPU mask, making the tracking
   of mm_cid compaction easier by shrinking it back towards 0 or not.

D) Making sure applications that periodically reduce and then increase
   again the nr threads or allowed CPU mask still benefit from good
   cache locality with mm_cid.

Introduce the following changes:

* After shrinking the number of threads or reducing the number of
  allowed CPUs, reduce the value of max_nr_cid so expansion of CID
  allocation will preserve cache locality if the number of threads or
  allowed CPUs increase again.

* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
  else find the first available CID.

Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Gabriele Monaco <gmonaco@redhat.com>
Link: https://lkml.kernel.org/r/20250210153253.460471-2-gmonaco@redhat.com
---
 include/linux/mm_types.h |  7 ++++---
 kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6b27db7..0234f14 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -875,10 +875,11 @@ struct mm_struct {
 		 */
 		unsigned int nr_cpus_allowed;
 		/**
-		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
+		 * @max_nr_cid: Maximum number of allowed concurrency
+		 *              IDs allocated.
 		 *
-		 * Track the highest number of concurrency IDs allocated for the
-		 * mm.
+		 * Track the highest number of allowed concurrency IDs
+		 * allocated for the mm.
 		 */
 		atomic_t max_nr_cid;
 		/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b93c8c3..c8512a9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3698,10 +3698,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
 	struct cpumask *cidmask = mm_cidmask(mm);
 	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-	int cid = __this_cpu_read(pcpu_cid->recent_cid);
+	int cid, max_nr_cid, allowed_max_nr_cid;
 
+	/*
+	 * After shrinking the number of threads or reducing the number
+	 * of allowed cpus, reduce the value of max_nr_cid so expansion
+	 * of cid allocation will preserve cache locality if the number
+	 * of threads or allowed cpus increase again.
+	 */
+	max_nr_cid = atomic_read(&mm->max_nr_cid);
+	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+					   atomic_read(&mm->mm_users))),
+	       max_nr_cid > allowed_max_nr_cid) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+			max_nr_cid = allowed_max_nr_cid;
+			break;
+		}
+	}
 	/* Try to re-use recent cid. This improves cache locality. */
-	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+	cid = __this_cpu_read(pcpu_cid->recent_cid);
+	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+	    !cpumask_test_and_set_cpu(cid, cidmask))
 		return cid;
 	/*
 	 * Expand cid allocation if the maximum number of concurrency
@@ -3709,8 +3727,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 	 * and number of threads. Expanding cid allocation as much as
 	 * possible improves cache locality.
 	 */
-	cid = atomic_read(&mm->max_nr_cid);
+	cid = max_nr_cid;
 	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
 		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
 			continue;
 		if (!cpumask_test_and_set_cpu(cid, cidmask))


Thread overview: 16+ messages
2025-02-10 15:32 [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
2025-02-10 15:32 ` [PATCH v6 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
2025-02-13 14:56   ` Mathieu Desnoyers
2025-02-18  7:53     ` Peter Zijlstra
2025-02-20 15:08   ` [tip: sched/urgent] " tip-bot2 for Mathieu Desnoyers
2025-02-10 15:32 ` [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
2025-02-13  6:52   ` kernel test robot
2025-02-13 13:25     ` Gabriele Monaco
2025-02-13 13:55       ` Mathieu Desnoyers
2025-02-13 14:37         ` Gabriele Monaco
2025-02-13 14:52       ` Mathieu Desnoyers
2025-02-13 14:54         ` Gabriele Monaco
2025-02-13 17:31       ` Mathieu Desnoyers
2025-02-14  6:44         ` Gabriele Monaco
2025-02-10 15:32 ` [PATCH v6 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
2025-02-10 16:09 ` [PATCH v6 0/3] sched: Restructure task_mm_cid_work for predictability Mathieu Desnoyers
