From: Qais Yousef <qyousef@layalina.io>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Valentin Schneider <vschneid@redhat.com>,
	Christian Loehle <christian.loehle@arm.com>,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched: Consolidate cpufreq updates
Date: Thu, 9 May 2024 13:40:03 +0100	[thread overview]
Message-ID: <20240509124003.hsuatsid4vtjpzhv@airbuntu> (raw)
In-Reply-To: <CAKfTPtDHWBKfksW4jQJ3KZVb7_GDXLZB1F7auYVZE1ddyDpgYQ@mail.gmail.com>

On 05/07/24 14:53, Vincent Guittot wrote:
> On Tue, 7 May 2024 at 13:08, Qais Yousef <qyousef@layalina.io> wrote:
> >
> > On 05/07/24 10:58, Vincent Guittot wrote:
> > > On Mon, 6 May 2024 at 01:31, Qais Yousef <qyousef@layalina.io> wrote:
> > > >
> > > > Improve the interaction with cpufreq governors by making the
> > > > cpufreq_update_util() calls more intentional.
> > > >
> > > > At the moment we send them when load is updated for CFS, bandwidth for
> > > > DL and at enqueue/dequeue for RT. But this can lead to too many updates
> > > > being sent in a short period of time, some of which may be ignored at a
> > > > critical moment due to the rate_limit_us in schedutil.
> > > >
> > > > For example, on a simultaneous enqueue of two tasks on a CPU where the
> > > > 2nd task is bigger and requires a higher freq, the cpufreq_update_util()
> > > > call triggered by the first task will cause the 2nd request to be
> > > > dropped until the tick. The same happens when another CPU in the same
> > > > policy triggers a freq update shortly after.
> > > >
> > > > Updates at enqueue for RT are not strictly required, though they do help
> > > > to reduce the delay in switching the frequency and the chance of
> > > > observing a lower frequency during this delay. But the current logic
> > > > doesn't intentionally (at least to my understanding) try to speed up the
> > > > request.
> > > >
> > > > To help reduce the number of cpufreq updates and make them more
> > > > purposeful, consolidate them into these locations:
> > > >
> > > > 1. context_switch()
> > >
> > > I don't see any cpufreq update when switching from idle to CFS. We
> >
> > You mean SCHED_IDLE to SCHED_NORMAL, right? Yes, if we switch policies, even
> > from fair to RT, an update could be missed.
> 
> No, I mean going out of idle. On an idle CPU, nothing happens at CFS
> task wakeup and we have to wait for the next tick to apply the new
> freq. This happens both for a short task with uclamp_min and for a long
> running/sleeping task (i.e. with high util_est).

And without my patch you see a freq change? If no stats were updated to cause
a decay, we will skip the cpufreq update at this context switch.
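
To make that more concrete, this is roughly the gating I mean (pseudo-ish
sketch, not the actual diff; the 'decayed' flag name is illustrative):

  static inline void update_cpufreq_ctx_switch(struct rq *rq)
  {
	/* Nothing decayed since the last update -> no new request */
	if (!rq->cfs.decayed)
		return;

	rq->cfs.decayed = false;
	cpufreq_update_util(rq, 0);
  }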

I'll audit the code again in case I missed a place where there's a decay. You
could be hitting a race with update_blocked_averages() sending a cpufreq
update, which could cause the context switch cpufreq update to be dropped by
the rate limit. You could try to reduce your rate_limit_us to see if this helps.
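
For context, the dropping comes from schedutil's rate limit; it's roughly this
check (heavily simplified from kernel/sched/cpufreq_schedutil.c):

  static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
  {
	s64 delta_ns = time - sg_policy->last_freq_update_time;

	/* freq_update_delay_ns is rate_limit_us converted to ns */
	return delta_ns >= sg_policy->freq_update_delay_ns;
  }

So any request that lands within rate_limit_us of the previous one is silently
ignored, which is why lowering it is worth a try.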

I'll try to reproduce and investigate. FWIW, I did test this on an M1 Mac mini
and a Pixel device running Speedometer and some iowait workloads and didn't
observe problems.

> 
> >
> > I'll need to think more about it, but I think adding an update when we switch
> > policies in the syscall looks sufficient to me, if the task is already on the
> > rq. Agreed?
> >
> > > have to wait for the next tick to get a freq update whatever the value
> > > of util_est and uclamp
> > >
> > > > 2. task_tick_fair()
> > >
> > > Updating only during the tick is ok with a tick at 1000Hz/1000us when we
> > > compare it with the 1048us slice of PELT, but what about a 4ms or even
> > > 10ms tick? We can have an increase of almost 200 in 10ms.
> >
> > IMHO the current code can still fail to update frequencies in time with these
> > setups. If there's a single task on the rq, then the only freq update will
> > happen at the tick. So this is an existing problem.
> 
> But any newly enqueued task can trigger a freq update without waiting
> 1/4/10ms, whereas we need to wait for the next tick with this patch.

But it is racy. By deferring the decision (sampling point) we better ensure
that the RUNNING task reflects the current state of the rq, taking into account
any past events.

Note that if there's no enqueue/dequeue, the problem is not fixed either way.
If we get two consecutive enqueues, the 2nd one will be dropped and we'll end
up with delays.

I think this way we'd just fail or work more consistently.

Systems with a high rate_limit_us or TICK should generally ask for generous
headroom, and I think this is the best way to address this issue in a scalable
way. People with fast systems/configurations can be more exact and frequent in
their requests. Systems/configurations that are slow will tend to exaggerate
each request to cater for the slow response, but the requests themselves are
made at better-defined points in time. That's my hope at least, so I appreciate
the reviews :)
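
To put rough numbers on the headroom point, with the standard ~32ms PELT
halflife a task that keeps running grows its util approximately as

	util(t + d) = 1024 - (1024 - util(t)) * 0.5^(d / 32ms)

so starting from 0 that's about 22 after 1ms, 85 after 4ms and 200 after 10ms
(Vincent's number above). That's the kind of growth the headroom has to cover
until the next update point, which is why I think it should be derived from
TICK/rate_limit_us rather than a fixed magic margin.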

My other hope is that by doing the sampling at context switch we can better
handle uclamp and iowait boost requests, which require special cpufreq
constraints to be applied specifically for the RUNNING task, ultimately leading
to removing the uclamp max() aggregation at enqueue/dequeue.
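
Hand-wavy sketch of that direction (not the actual patch, and how the clamped
value would be plumbed into the governor is not shown):

  static unsigned long ctx_switch_clamped_util(struct rq *rq, struct task_struct *p)
  {
	unsigned long util = cpu_util_cfs(cpu_of(rq));

	/* Clamp with the task about to run, not an rq-wide max aggregate */
	return clamp(util, uclamp_eff_value(p, UCLAMP_MIN),
			   uclamp_eff_value(p, UCLAMP_MAX));
  }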

> 
> >
> > The way I see it, setting such high TICK values implies low responsiveness
> > by definition. So the person who selects this setup needs to accept that this
> > is their worst case scenario and be happy with it. And this worst case
> > scenario does not change.
> >
> > That said, the right way to cater for this is via my other series to remove the
> > magic margins. The DVFS headroom should rely on the TICK value to ensure we run
> > at an adequate frequency until the next worst-case update, which itself depends
> > on TICK. That is sufficient to handle util_est changes. See below for uclamp.
> >
> > Wakeup preemption should also cause context switches to happen sooner than a
> > tick as we add more tasks to the rq. So I think the worst case scenario is not
> > really changing that much. In my view, it's better to be consistent about the
> > behavior.
> >
> > >
> > > > 3. {attach, detach}_entity_load_avg()
> > >
> > > At enqueue/dequeue, util_est will be updated and can make CPU
> > > utilization quite different, especially with long-sleeping tasks. The
> > > same applies to the uclamp_min/max hints of a newly enqueued task. We
> > > might end up waiting 4/10ms depending on the tick period.
> >
> > uclamp_min is a property of the task. And waiting for the task that needs the
> > boost to run is fine IMHO. And I am actually hoping to remove uclamp max()
> 
> But you will delay all CPU work and the running time of the task.

uclamp_min shouldn't cause other tasks to run faster. From my perspective that
is actually wasting power, as what we want is only for this task to run faster.

If the task wants better wakeup latency (which I assume is what you're
referring to by making things run faster, i.e. this task should get to RUNNING
sooner), then we need to annotate that separately IMHO. We shouldn't rely on
the boost, which is there to ensure the task gets its work done at an
acceptable rate, not to impact its wakeup latency.

> 
> And what about util_est?

Hmm, yeah, this won't cause cfs_rq.decayed to trigger, so we could miss it at
context switch. And it could be what's causing the issue you're seeing above.
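
(For reference, util_est is updated via util_est_enqueue()/util_est_update() in
fair.c without going through the PELT decay path, so nothing would mark the rq
as decayed for the gate sketched above.)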

