From: Luis Machado <luis.machado@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@redhat.com, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
bristot@redhat.com, vschneid@redhat.com,
linux-kernel@vger.kernel.org, kprateek.nayak@amd.com,
wuyun.abel@bytedance.com, tglx@linutronix.de, efault@gmx.de,
nd <nd@arm.com>, John Stultz <jstultz@google.com>,
Hongyan.Xia2@arm.com
Subject: Re: [RFC][PATCH 08/10] sched/fair: Implement delayed dequeue
Date: Mon, 29 Apr 2024 15:33:04 +0100 [thread overview]
Message-ID: <c6152855-ef92-4c24-a3f5-64d4256b6789@arm.com> (raw)
In-Reply-To: <20240426093241.GI12673@noisy.programming.kicks-ass.net>
Hi Peter,
On 4/26/24 10:32, Peter Zijlstra wrote:
> On Thu, Apr 25, 2024 at 01:49:49PM +0200, Peter Zijlstra wrote:
>> On Thu, Apr 25, 2024 at 12:42:20PM +0200, Peter Zijlstra wrote:
>>
>>>> I wonder if the delayed dequeue logic is having an unwanted effect on the calculation of
>>>> utilization/load of the runqueue and, as a consequence, we're scheduling things to run on
>>>> higher OPP's in the big cores, leading to poor decisions for energy efficiency.
>>>
>>> Notably util_est_update() gets delayed. Given we don't actually do an
>>> enqueue when a delayed task gets woken, it didn't seem to make sense to
>>> update that sooner.
>>
>> The PELT runnable values will be inflated because of delayed dequeue.
>> cpu_util() uses those in the @boost case, and as such this can indeed
>> affect things.
>>
>> This can also slightly affect the cgroup case, but since the delay goes
>> away as contention goes away, and the cgroup case must already assume
>> worst case overlap, this seems limited.
>>
>> /me goes ponder things moar.
>
> First order approximation of a fix would be something like the totally
> untested below I suppose...
I gave this a try on the Pixel 6 and noticed some improvement (see below), though not
enough to bring energy use back to the original (stock) levels.
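To illustrate the runnable inflation described in the quoted text, here is a toy model of PELT-style geometric accumulation. This is illustrative Python, not the kernel's integer implementation; it only assumes the usual y^32 = 0.5 decay over 1024us periods:

```python
# Toy model of PELT runnable accumulation (illustrative only, not the
# kernel's fixed-point implementation). History decays geometrically so
# that contributions halve every 32 periods of 1024us, i.e. y**32 == 0.5.

Y = 0.5 ** (1 / 32)  # per-period decay factor, ~0.97857

def runnable_sum(periods_runnable, periods_idle):
    """Runnable sum after running for a while, then decaying while idle."""
    s = 0.0
    for _ in range(periods_runnable):
        s = s * Y + 1024  # accrue a full period while runnable
    for _ in range(periods_idle):
        s = s * Y         # pure decay once actually dequeued
    return s

# A task kept "runnable" for 8 extra periods by delayed dequeue retains a
# larger contribution than one dequeued promptly and left to decay:
prompt = runnable_sum(100, 8)
delayed = runnable_sum(108, 0)
assert delayed > prompt
```

Summed over many such tasks, the runqueue's runnable_avg ends up higher, which is consistent with cpu_util() in the @boost case steering us to higher OPPs.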
(1) m6.6-stock: basic EEVDF with the wakeup preemption fix (baseline).
(2) m6.6-eevdf-complete: m6.6-stock plus this series.
(3) m6.6-eevdf-complete-no-delay-dequeue: (2) + NO_DELAY_DEQUEUE.
(4) m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero: (2) + NO_DELAY_DEQUEUE + NO_DELAY_ZERO.
(5) m6.6-eevdf-complete-no-delay-zero: (2) + NO_DELAY_ZERO.
(6) m6.6-eevdf-complete-pelt-fix: (2) + the proposed load_avg update patch.
I included (3), (4) and (5) to exercise the impact of disabling the individual
scheduler features.
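For reference, the NO_DELAY_DEQUEUE / NO_DELAY_ZERO variants were obtained by toggling the scheduler feature bits at runtime. A minimal sketch; the debugfs path is assumed to be /sys/kernel/debug/sched/features as on recent kernels, and writing it needs root:

```python
# Sketch of toggling a sched feature bit via debugfs (path assumed for
# recent kernels; needs root). Writing "FEATURE" sets the bit and
# "NO_FEATURE" clears it.

FEATURES_PATH = "/sys/kernel/debug/sched/features"

def feature_token(name, enabled):
    """Token the features file expects to set or clear `name`."""
    return name if enabled else "NO_" + name

def set_feature(name, enabled):
    with open(FEATURES_PATH, "w") as f:
        f.write(feature_token(name, enabled))

# e.g. set_feature("DELAY_DEQUEUE", False) writes "NO_DELAY_DEQUEUE"
```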
Energy use.
+------------+------------------------------------------------------+-----------+
| cluster | tag | perc_diff |
+------------+------------------------------------------------------+-----------+
| CPU | m6.6-stock | 0.0% |
| CPU-Big | m6.6-stock | 0.0% |
| CPU-Little | m6.6-stock | 0.0% |
| CPU-Mid | m6.6-stock | 0.0% |
| GPU | m6.6-stock | 0.0% |
| Total | m6.6-stock | 0.0% |
| CPU | m6.6-eevdf-complete | 114.51% |
| CPU-Big | m6.6-eevdf-complete | 90.75% |
| CPU-Little | m6.6-eevdf-complete | 98.74% |
| CPU-Mid | m6.6-eevdf-complete | 213.9% |
| GPU | m6.6-eevdf-complete | -7.04% |
| Total | m6.6-eevdf-complete | 100.92% |
| CPU | m6.6-eevdf-complete-no-delay-dequeue | 117.77% |
| CPU-Big | m6.6-eevdf-complete-no-delay-dequeue | 113.79% |
| CPU-Little | m6.6-eevdf-complete-no-delay-dequeue | 97.47% |
| CPU-Mid | m6.6-eevdf-complete-no-delay-dequeue | 189.0% |
| GPU | m6.6-eevdf-complete-no-delay-dequeue | -6.74% |
| Total | m6.6-eevdf-complete-no-delay-dequeue | 103.84% |
| CPU | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | 120.45% |
| CPU-Big | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | 113.65% |
| CPU-Little | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | 99.04% |
| CPU-Mid | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | 201.14% |
| GPU | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | -5.37% |
| Total | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | 106.38% |
| CPU | m6.6-eevdf-complete-no-delay-zero | 119.05% |
| CPU-Big | m6.6-eevdf-complete-no-delay-zero | 107.55% |
| CPU-Little | m6.6-eevdf-complete-no-delay-zero | 98.66% |
| CPU-Mid | m6.6-eevdf-complete-no-delay-zero | 206.58% |
| GPU | m6.6-eevdf-complete-no-delay-zero | -5.25% |
| Total | m6.6-eevdf-complete-no-delay-zero | 105.14% |
| CPU | m6.6-eevdf-complete-pelt-fix | 105.56% |
| CPU-Big | m6.6-eevdf-complete-pelt-fix | 100.45% |
| CPU-Little | m6.6-eevdf-complete-pelt-fix | 94.4% |
| CPU-Mid | m6.6-eevdf-complete-pelt-fix | 150.94% |
| GPU | m6.6-eevdf-complete-pelt-fix | -3.96% |
| Total | m6.6-eevdf-complete-pelt-fix | 93.31% |
+------------+------------------------------------------------------+-----------+
Utilization and load levels.
+---------+------------------------------------------------------+----------+-----------+
| cluster | tag | variable | perc_diff |
+---------+------------------------------------------------------+----------+-----------+
| little | m6.6-stock | load | 0.0% |
| little | m6.6-stock | util | 0.0% |
| little | m6.6-eevdf-complete | load | 29.56% |
| little | m6.6-eevdf-complete | util | 55.4% |
| little | m6.6-eevdf-complete-no-delay-dequeue | load | 42.89% |
| little | m6.6-eevdf-complete-no-delay-dequeue | util | 69.47% |
| little | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | load | 51.05% |
| little | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | util | 76.55% |
| little | m6.6-eevdf-complete-no-delay-zero | load | 34.51% |
| little | m6.6-eevdf-complete-no-delay-zero | util | 72.53% |
| little | m6.6-eevdf-complete-pelt-fix | load | 29.96% |
| little | m6.6-eevdf-complete-pelt-fix | util | 59.82% |
| mid | m6.6-stock | load | 0.0% |
| mid | m6.6-stock | util | 0.0% |
| mid | m6.6-eevdf-complete | load | 29.37% |
| mid | m6.6-eevdf-complete | util | 75.22% |
| mid | m6.6-eevdf-complete-no-delay-dequeue | load | 36.4% |
| mid | m6.6-eevdf-complete-no-delay-dequeue | util | 80.28% |
| mid | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | load | 30.35% |
| mid | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | util | 90.2% |
| mid | m6.6-eevdf-complete-no-delay-zero | load | 37.83% |
| mid | m6.6-eevdf-complete-no-delay-zero | util | 93.79% |
| mid | m6.6-eevdf-complete-pelt-fix | load | 33.57% |
| mid | m6.6-eevdf-complete-pelt-fix | util | 67.83% |
| big | m6.6-stock | load | 0.0% |
| big | m6.6-stock | util | 0.0% |
| big | m6.6-eevdf-complete | load | 97.39% |
| big | m6.6-eevdf-complete | util | 12.63% |
| big | m6.6-eevdf-complete-no-delay-dequeue | load | 139.69% |
| big | m6.6-eevdf-complete-no-delay-dequeue | util | 22.58% |
| big | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | load | 125.36% |
| big | m6.6-eevdf-complete-no-delay-dequeue-no-delay-zero | util | 23.15% |
| big | m6.6-eevdf-complete-no-delay-zero | load | 128.56% |
| big | m6.6-eevdf-complete-no-delay-zero | util | 25.03% |
| big | m6.6-eevdf-complete-pelt-fix | load | 130.73% |
| big | m6.6-eevdf-complete-pelt-fix | util | 17.52% |
+---------+------------------------------------------------------+----------+-----------+
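On the metric itself: perc_diff in the tables above is the percentage difference of each variant relative to the m6.6-stock baseline, which is why the baseline rows read 0.0%. A small sketch of the computation, using hypothetical raw readings:

```python
def perc_diff(baseline, value):
    """Percentage difference of `value` relative to `baseline`."""
    return (value - baseline) / baseline * 100.0

# Hypothetical raw energy readings (arbitrary units), for illustration only:
stock = 100.0
variant = 214.51
print(round(perc_diff(stock, variant), 2))  # prints 114.51
```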