From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753367AbbGINup (ORCPT );
	Thu, 9 Jul 2015 09:50:45 -0400
Received: from foss.arm.com ([217.140.101.70]:46390 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752128AbbGINuj (ORCPT );
	Thu, 9 Jul 2015 09:50:39 -0400
Date: Thu, 9 Jul 2015 14:53:14 +0100
From: Morten Rasmussen
To: Yuyang Du
Cc: Mike Galbraith , Peter Zijlstra , Rabin Vincent ,
	"mingo@redhat.com" , "linux-kernel@vger.kernel.org" ,
	Paul Turner , Ben Segall
Subject: Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()
Message-ID: <20150709135314.GA8668@e105550-lin.cambridge.arm.com>
References: <1435728995.9397.7.camel@gmail.com>
 <20150701145551.GA15690@axis.com>
 <20150701204404.GH25159@twins.programming.kicks-ass.net>
 <20150701232511.GA5197@intel.com>
 <1435824347.5351.18.camel@gmail.com>
 <20150702010539.GB5197@intel.com>
 <20150702114032.GA7598@e105550-lin.cambridge.arm.com>
 <20150702193702.GD5197@intel.com>
 <20150703093441.GA15477@e105550-lin.cambridge.arm.com>
 <20150705201241.GE5197@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150705201241.GE5197@intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 06, 2015 at 04:12:41AM +0800, Yuyang Du wrote:
> Hi Morten,
>
> On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
> > > > IOW, since task groups include blocked load in the load_avg_contrib
> > > > (see __update_group_entity_contrib() and
> > > > __update_cfs_rq_tg_load_contrib()) the imbalance includes blocked
> > > > load and hence env->imbalance >= sum(task_h_load(p)) for all tasks
> > > > p on the rq. Which leads to detach_tasks() emptying the rq
> > > > completely in the reported scenario where blocked load > runnable
> > > > load.
> > >
> > > Whenever I want to know the load avg concerning task groups, I need
> > > to walk through the complete code again, so I prefer not to do it
> > > this time. But it should not be that simple to say "the 118 comes
> > > from the blocked load".
> >
> > But the whole hierarchy of group entities is updated each time we
> > enqueue or dequeue a task. I don't see how the group entity
> > load_avg_contrib is not up to date? Why do you need to update it
> > again?
> >
> > In any case, we have one task in the group hierarchy which has a
> > load_avg_contrib of 0 and the grand-grand parent group entity has a
> > load_avg_contrib of 118 and no additional tasks. That load
> > contribution must be from tasks which are no longer around on the
> > rq? No?
>
> load_avg_contrib has WEIGHT inside, so the most I can say is:
>
> SE: 8f456e00's load_avg_contrib 118 =
>     (its cfs_rq's runnable + blocked) / (tg->load_avg + 1) * tg->shares
>
> The tg->shares is probably 1024 (at least 911). So we are just left
> with:
>
> cfs_rq / tg = 11.5%

Yes, we also know that there is only one runnable task in the task group
hierarchy and its contribution is 0. Hence the rest must be from
non-runnable tasks belonging to some child group.
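For reference, the group entity contribution boils down to roughly the
following (a simplified sketch of __update_group_entity_contrib(),
leaving out the runnable_avg adjustment it also does):

	static void sketch_group_entity_contrib(struct sched_entity *se)
	{
		struct cfs_rq *cfs_rq = group_cfs_rq(se);
		struct task_group *tg = cfs_rq->tg;
		u64 contrib;

		/* tg_load_contrib includes runnable _and_ blocked load */
		contrib = cfs_rq->tg_load_contrib * tg->shares;
		se->avg.load_avg_contrib =
			div_u64(contrib, atomic_long_read(&tg->load_avg) + 1);
	}

With tg->shares == 1024 and the cfs_rq contributing ~11.5% of the group
load, that gives 1024 * 0.115 ~= 118, and nothing prevents all of it
from being blocked load.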
> I myself did question the sudden jump from 0 to 118 (see a previous
> reply).
>
> But anyway, this really is irrelevant to the discussion.

Agreed.

> > > Anyway, with blocked load, yes, we definitely can't move (or even
> > > find) some amount of the imbalance if we only look at the tasks on
> > > the queue. But this may or may not be a problem.
> > >
> > > Firstly, the question comes to whether we want blocked load
> > > anywhere. This is just about a "now vs. average" question.
> >
> > That is what I meant in the paragraph below. It is a scheduling
> > policy question.
> >
> > > Secondly, if we stick to average, we just need to treat the blocked
> > > load consistently, not that group SE has it, but task SE does not,
> > > or somewhere has it, others not.
> >
> > I agree that inconsistent use of blocked load will lead us into
> > trouble. The problem is that none of the load-balance logic was
> > designed for blocked load. It was written to deal with load that is
> > currently on the rq and slightly biased by average cpu load, not to
> > deal with load contributions of tasks which we can't migrate at the
> > moment because they are blocked. The load-balance code has to be
> > updated to deal with blocked load. We will run into all sorts of
> > issues if we roll out use of blocked load everywhere without doing
> > that.
> >
> > However, before we can rework the load-balance code, we have to
> > agree on the now vs average balance policy.
> >
> > Your proposed patch implements a policy somewhere in between. We try
> > to balance based on average, but we don't allow idle_balance() to
> > empty a cpu completely. A pure average balance policy would allow a
> > cpu to go idle even if we have tasks waiting on other rqs, if the
> > blocked load indicates that tasks will show up shortly on the cpu. A
> > pure "now" balance would balance based on runnable_load_avg for all
> > entities including groups, ignoring all blocked load, but that goes
> > against the PELT group balancing design.
> >
> > I'm not against having a policy that sits somewhere in between, we
> > just have to agree it is the right policy and clean up the
> > load-balance code such that the implemented policy is clear.
>
> The proposed patch sits in between? I agree, but would rather see it
> from another perspective.
>
> First, I don't think it merits a solution/policy. It is just a cheap
> "last guard" to protect the "king" - no crash.

It's fine for me to have checks that make sure we don't crash if we hit
some corner case in the load-balance code. I was more interested in
figuring out why we ended up in this situation and how we can implement
a more consistent policy.

> Second, a "pure average" policy is pretty fine in general, but it does
> not mean we would simply allow a CPU to be pulled empty, that is
> because we are making a bet from a prediction (average) here. By
> prediction, it basically means sometimes it fails. As the failure
> could lead to a disaster, without blaming the prediction, it is
> reasonable we make a sort of "damage control".

I like the idea of balancing based on an average load/utilization that
includes blocked load/utilization. It ties in very nicely with driving
DVFS, for example. It is a fundamental change to how we perceive load
for load-balancing decisions though. It basically requires an update to
the load-balancing policy, as most of it doesn't consider blocked load
scenarios like this one.

This one idle-balance problem is easily solved by making sure not to
pull the last task. It makes a lot of sense.
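Roughly, such a guard amounts to bailing out of the detach_tasks() loop
before the src rq goes empty, along these lines (a sketch of the idea,
not the exact patch):

	while (!list_empty(tasks)) {
		/*
		 * Don't let idle balance pull the last runnable task;
		 * the remaining imbalance may be entirely blocked load,
		 * which cannot be migrated.
		 */
		if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
			break;

		p = list_first_entry(tasks, struct task_struct, se.group_node);
		/* ... existing detach logic continues as before ... */
	}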
But I'm quite sure that we will face many more such cases. For example,
at task wake-up, do you put the task on a cpu which is idle at this
instant but has a high average load, or on a cpu which is busy at this
instant but has a low average load? Currently, we just put the task on
an idle cpu, ignoring any blocked load. If we implement a strict
average view policy we should choose the cpu with the lowest load even
if it is currently busy (see the sketch at the end of this mail).

I would suggest that we revise the load-balance code before/while
adding blocked load so we get a clear(er) policy in the load-balance
code and don't need too many (and maybe not so straightforward) last
guards to fix wrong balancing decisions. It doesn't have to be perfect,
I just want to get rid of code that nobody around can explain, so we
can tell whether we still make the right predictions when adding
blocked load.
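For illustration, a strict "average" wake-up choice could look
something like the sketch below. The helper name is made up and this is
not existing kernel code; it just picks the cpu with the lowest
runnable + blocked load among the allowed cpus:

	static int select_cpu_strict_average(struct task_struct *p)
	{
		unsigned long load, min_load = ULONG_MAX;
		int cpu, best_cpu = task_cpu(p);

		for_each_cpu(cpu, tsk_cpus_allowed(p)) {
			/* average view: runnable plus blocked load */
			load = cpu_rq(cpu)->cfs.runnable_load_avg +
			       cpu_rq(cpu)->cfs.blocked_load_avg;

			/* prefer the lowest average load, busy or not */
			if (load < min_load) {
				min_load = load;
				best_cpu = cpu;
			}
		}
		return best_cpu;
	}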