From: Peter Zijlstra <peterz@infradead.org>
To: Qais Yousef <qyousef@layalina.io>
Cc: Steven Rostedt <rostedt@goodmis.org>, Tejun Heo <tj@kernel.org>,
	torvalds@linux-foundation.org, mingo@redhat.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
	joshdon@google.com, brho@google.com, pjt@google.com,
	derkling@google.com, haoluo@google.com, dvernet@meta.com,
	dschatzberg@meta.com, dskarlat@cs.cmu.edu, riel@surriel.com,
	changwoo@igalia.com, himadrics@inria.fr, memxor@gmail.com,
	andrea.righi@canonical.com, joel@joelfernandes.org,
	linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
	kernel-team@meta.com
Subject: Re: [PATCHSET v6] sched: Implement BPF extensible scheduler class
Date: Fri, 17 May 2024 11:58:06 +0200
Message-ID: <20240517095806.GJ30852@noisy.programming.kicks-ass.net>
In-Reply-To: <20240514000715.4765jfpwi5ovlizj@airbuntu>

On Tue, May 14, 2024 at 01:07:15AM +0100, Qais Yousef wrote:
> On 05/13/24 14:26, Steven Rostedt wrote:

> > > That is, from where I am sitting I see $vendor mandate their $enterprise
> > > product needs their $BPF scheduler. At which point $vendor will have no
> > > incentive to ever contribute back.
> > 
> > Believe me they already have their own scheduler, and because it's so
> > different, it's very hard to contribute back.

'They' are free to have their own scheduler, but since 'nobody' is using
it and 'they' want their product to work on RHEL / SLES / etc., they are
bound to respect the common interfaces, no?

> > > So I don't at all mind people playing around with schedulers -- they can
> > > do so today, there are a ton of out of tree patches to start or learn
> > > from, or like I said, it really isn't all that hard to just rip out fair
> > > and write something new.
> > 
> > For cloud servers, I bet a lot of schedulers are not public. Although,
> > my company tries to publish the schedulers they use.

Yeah, it's the TiVo thing. Keeping all that private is what creates the
rebase pain. Outside of that there's nothing we can do.

Anyway, instead of doing magic mushroom schedulers, what does the cloud
crud actually want? I know the KVM people were somewhat looking forward
to the EEVDF sched_attr::sched_runtime extension because virt likes the
longer slices. Less preemption is better for them.

In fact, some of the facebook workloads also wanted longer slices (and
no wakeup preemption).

> > From what I understand (I don't work on production, but Chromebooks), a
> > lot of changes cannot be contributed back because their updates are far
> > from what is upstream. Having a pluggable scheduler would actually allow
> > them to contribute *more*.

So can we please start with ChromeOS telling us what kind of magic hacks
it has and what for?

The term contributing seems to mean different things to us. Building an
external scheduler isn't contributing, it's fragmenting.

> > > Keeping a rando github repo with BPF schedulers is not contributing.
> > 
> > Agreed, and I would guess having them in the Linux kernel tree would be
> > more beneficial.

Yeah, no. Same thing. It's just a pile of junk until someone puts the
time in to figure out how to properly integrate it. Very much like Qais
argues below.

> > > That's just a repo with multiple out of tree schedulers to be ignored.
> > > Who will put in the effort of upstreaming things if they can hack up a
> > > BPF and throw it over the wall?
> > 
> > If there's a place in the Linux kernel tree, I'm sure there would be
> > motivation to place it there. Having it in the kernel proper does give
> > more visibility of code, and therefore enhancements to that code. This
> > was the same rationale for putting perf into the kernel proper.

These things are very much not the same. A pile of random hacks vs a
single unified interface to PMUs. They're like the polar opposite of
one another.

> > > So yeah, I'm very much NOT supportive of this effort. From where I'm
> > > sitting there is simply not a single benefit. You're not making my life
> > > better, so why would I care?
> > > 
> > > How does this BPF muck translate into better quality patches for me?
> > 
> > Here's how we will be using it (we will likely be porting sched_ext to
> > ChromeOS regardless of its acceptance).
> > 
> > Doing testing of scheduler changes in the field is extremely time
> > consuming and complex. We tested EEVDF vs CFS by backporting EEVDF to
> > 5.15 (as that is the kernel version we are using on the chromebooks we

/me mumbles something about necro-kernels...

> > were testing on), and then we needed to add a user space "switch" to
> > change the scheduler. Note, adding these changes also risks introducing
> > bugs. Then we push the kernel out, and then start our experiment that
> > enables our feature for a small percentage of users, and slowly increases
> > that number until we have enough for a statistical result.
> > 
> > What sched_ext would give us is an easy way to try different scheduling
> > algorithms and get feedback much quicker. Once we determine a solution
> > that improves things, we would then spend the time to implement it in
> > the scheduler, and yes, send it upstream.

This sounds a little backwards... ok, a lot. How do you do actual
problem analysis in this case? Having random statistics is not really
useful - beyond determining there might be a problem.

The next step is isolating that problem locally and reproducing it. Then
analysing *what* the actual problem is and how it happens, and then try
and think of a solution.

(preferably one that then doesn't break another thing :-)

> > To me, sched_ext should never be the final solution, but it can be
> > extremely useful in testing various changes quickly in the field. Which
> > to me would encourage more contributions.

Well, the thing is, the moment sched_ext itself lands upstream, it will
become the final solution for a fair number of people and leave us, the
wider Linux scheduler community, up the creek without a paddle.

There is absolutely no inherent incentive to further contribute. Your
immediate problem is solved, you get assigned the next problem. That is
reality.

Worse, they can share the BPF hack and get a warm fuzzy feeling of
'contribution' while in fact it's useless. At best we know 'random hack
changed something for them'. No problem description, no reproducer, no
nothing.

Anyway, if you feel you need BPF hackery to do this, by all means, do
so. But realize that it is a debug tool and in general we don't merge
debug tools.

Also, I would argue that perhaps a scheduler livepatch would be more
convenient to actually debug / A-B test things.

> I really don't think the problems we have are because of EEVDF vs CFS vs
> anything else. Other major OSes have one scheduler, but what they excel at is
> providing better QoS interfaces and mechanisms to handle specific scenarios
> that Linux lacks.

Quite possibly. The immediate problem being that adding interfaces is
terrifying. Linus has a rather strong opinion about breaking stuff, and
getting this wrong will very quickly result in a painted-into-a-corner
type of problem.

We can/could add fields to sched_attr under the understanding that
they're purely optional and try things, *however* too many such fields
and we're up a creek again.

> The confusion I see again and again over the years is the fragmentation of
> the Linux ecosystem; app writers don't know how to do things properly on Linux
> vs other OSes. Note our CONFIG system is part of this fragmentation.
> 
> The addition of more flavours, which will inevitably lead to custom QoS
> specific to each scheduler and to libraries built on top that require that
> particular extension to be available, is a recipe for more confusion and
> fragmentation.

Yes, this!

> I really don't buy the rapid development aspect either. The scheduler was
> heavily influenced by the early contributors, who came from the server market
> and had (a few) very specific workloads they needed to optimize for, where
> throughput carried more weight than latency. Fast forward to now, things are
> different. Even in the server market latency/responsiveness has become more
> important. Power and thermal are important on a larger class of systems now
> too, I'd dare say even in the server market.

Absolutely, AFAIU racks are both power and thermal limited. There are
some crazy ACPI protocols to manage some of this.

> How do you know when it's okay for an app/task to consume too
> much power and when it is not? Hint hint, you can't unless someone in userspace
> tells you.

Yes, cluster/cloud infrastructure needs to manage that. There is nothing
smart the kernel can do here on its own, except respect the ACPI lunacy
and hard throttle itself when the panic signal comes.

> Similarly for latency vs throughput. What is the correct way to
> write an application to provide this info? Then we can ask what is missing in
> the scheduler to enable this.

Right, so the EEVDF thing is a start here. By providing a per task
request size, applications can indicate if they want frequent and short
activations or more infrequent longer activations.

An application can know its (average) activation time, the kernel has
no clue when work starts and is completed. Applications can fairly
trivially measure this using CLOCK_THREAD_CPUTIME_ID reads before and
after and communicate this (very much like SCHED_DEADLINE).

Anyway, yes, userspace needs to change and provide more information. The
trick of course is figuring out which bit of information is critical /
useful etc.

There is a definite limit on the amount of constraints you want to solve
at runtime.

Everybody going off and hacking their own thing does not help, we need
collaboration to figure out what it is that is needed.

> Note the original min/wakeup_granularity_ns, latency_ns etc were tuned by
> default for throughput by the way (server market bias). You can manipulate
> those and get better latencies.

The immediate problem with those knobs is that they are system wide. But
yes, everybody was randomly poking those knobs, sometimes in obviously
insane ways.

> FWIW IMO the biggest issue I see in the scheduler is that it is hard to test
> and debug. I think BPF can be a good fit for that. For the latter I started
> this project, yet I am still trying to figure out how to add tracers for the
> difficult paths to help people more easily report when a bad decision has
> happened and to provide more info about the internal state of the scheduler,
> in the hope of accelerating the process of finding solutions.

So the pitfalls here are that exposing that information for debug
purposes can/will lead to people consuming this information for
non-debug purposes, and then when we want to change things we're stuck
because suddenly someone relies on something we believed was an
implementation detail :/

I've been bitten by this before and this is why I'm so very hesitant to
put tracepoints in the scheduler.

> I think it would be great to have a clear list of the current limitations
> people see in the scheduler. It could be a failure on my end, but I haven't
> seen specifics of the problems, what was tried, and what failed to the point
> that it is impossible to move forward.

Right, list, but also ideally reproducers (yeah, I know, really hard).

The moment we merge sched_ext all motivation to do any of this work goes
out the window.

> From what I see, I am hitting bugs here and there
> all the time. But they are hard to debug to truly understand where things went
> wrong. Like this one, for example, where PTHREAD_PRIO_INHERIT is a NOP for
> fair tasks. Many thought using this flag doesn't help (rather than being buggy).

Yay for the terminal backlog :/ I'll try and have a look.
