From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752891AbbIKOLi (ORCPT );
	Fri, 11 Sep 2015 10:11:38 -0400
Received: from mail-pa0-f54.google.com ([209.85.220.54]:36362 "EHLO
	mail-pa0-f54.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751873AbbIKOLh (ORCPT );
	Fri, 11 Sep 2015 10:11:37 -0400
Date: Fri, 11 Sep 2015 22:11:22 +0800
From: Leo Yan 
To: Morten Rasmussen 
Cc: Peter Zijlstra , Vincent Guittot , Dietmar Eggemann ,
	Steve Muckle , "mingo@redhat.com" , "daniel.lezcano@linaro.org" ,
	"yuyang.du@intel.com" , "mturquette@baylibre.com" ,
	"rjw@rjwysocki.net" , Juri Lelli , "sgurrappadi@nvidia.com" ,
	"pang.xunlei@zte.com.cn" , "linux-kernel@vger.kernel.org" 
Subject: Re: [PATCH 5/6] sched/fair: Get rid of scaling utilization by
	capacity_orig
Message-ID: <20150911141122.GB5337@leoy-linaro>
References: <55EDAF43.30500@arm.com>
	<55EDDD5A.70904@arm.com>
	<20150908122606.GH3644@twins.programming.kicks-ass.net>
	<20150908125205.GW18673@twins.programming.kicks-ass.net>
	<20150908143157.GA27098@e105550-lin.cambridge.arm.com>
	<20150908165331.GC27098@e105550-lin.cambridge.arm.com>
	<20150911074651.GB17135@leoy-linaro>
	<20150911100232.GG27098@e105550-lin.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150911100232.GG27098@e105550-lin.cambridge.arm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 11, 2015 at 11:02:33AM +0100, Morten Rasmussen wrote:
> On Fri, Sep 11, 2015 at 03:46:51PM +0800, Leo Yan wrote:
> > Hi Morten,
> > 
> > On Tue, Sep 08, 2015 at 05:53:31PM +0100, Morten Rasmussen wrote:
> > > On Tue, Sep 08, 2015 at 03:31:58PM +0100, Morten Rasmussen wrote:
> > > > On Tue, Sep 08, 2015 at 02:52:05PM +0200, Peter Zijlstra wrote:
> > > > > 
> > > > > Something like the below..
> > > > > 
> > > > > Another thing to ponder; the downside of scaled_delta_w is that it is
> > > > > fairly likely that delta is small and you lose all bits, whereas the
> > > > > weight is likely to be large and could lose a few bits without issue.
> > > > 
> > > > That issue applies both to load and util.
> > > > 
> > > > > 
> > > > > That is, in fixed point scaling like this, you want to start with the
> > > > > biggest numbers, not the smallest, otherwise you lose too much.
> > > > > 
> > > > > The flip side is of course that now you can share a multiplication.
> > > > 
> > > > But if we apply the scaling to the weight instead of time, we would only
> > > > have to apply it once and not three times like it is now? So maybe we
> > > > can end up with almost the same number of multiplications.
> > > > 
> > > > We might be losing bits for low priority tasks running on cpus at a low
> > > > frequency though.
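For illustration, a minimal standalone sketch of this bit-loss point, using
made-up values for delta, weight and scale_freq (SCHED_CAPACITY_SHIFT is 10,
so the fixed-point unit is 1024):

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10	/* fixed-point unit: 1 << 10 == 1024 */

int main(void)
{
	unsigned long delta = 1;	/* tiny time delta, as in the commit message */
	unsigned long weight = 15;	/* smallest task weight before scaling */
	unsigned long scale_freq = 512;	/* cpu running at half of max frequency */

	/* Scale the small delta first (current scheme): 1 * 512 >> 10 == 0,
	 * so the whole contribution for this period is lost. */
	unsigned long scaled_delta = delta * scale_freq >> SCHED_CAPACITY_SHIFT;

	/* Scale the larger weight first (proposed scheme): 15 * 512 >> 10 == 7,
	 * so most of the contribution survives and the delta stays unscaled. */
	unsigned long scaled_weight = weight * scale_freq >> SCHED_CAPACITY_SHIFT;

	printf("scaled_delta = %lu, scaled_weight = %lu\n",
	       scaled_delta, scaled_weight);
	return 0;
}

Starting from the bigger operand preserves the low bits; a weight of 15 still
drops its fractional part (15 * 512 >> 10 == 7 rather than 7.5), which is the
residual low-priority concern mentioned above.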
> > > Something like the below. We should be saving one multiplication.
> > > 
> > > --- 8< ---
> > > 
> > > From: Morten Rasmussen 
> > > Date: Tue, 8 Sep 2015 17:15:40 +0100
> > > Subject: [PATCH] sched/fair: Scale load/util contribution rather than time
> > > 
> > > When updating load/util tracking the time delta might be very small (1)
> > > in many cases. Scaling it further down with frequency and cpu invariance
> > > scaling might cause us to lose precision. Instead of scaling time we
> > > can scale the weight of the task for load and the capacity for
> > > utilization. Both weight (>=15) and capacity should be significantly
> > > bigger in most cases. Low priority tasks might still suffer a bit, but
> > > the worst case should be improved, as weight is at least 15 before
> > > invariance scaling.
> > > 
> > > Signed-off-by: Morten Rasmussen 
> > > ---
> > >  kernel/sched/fair.c | 38 +++++++++++++++++++-------------------
> > >  1 file changed, 19 insertions(+), 19 deletions(-)
> > > 
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 9301291..d5ee72a 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -2519,8 +2519,6 @@ static u32 __compute_runnable_contrib(u64 n)
> > >  #error "load tracking assumes 2^10 as unit"
> > >  #endif
> > >  
> > > -#define cap_scale(v, s)	((v)*(s) >> SCHED_CAPACITY_SHIFT)
> > > -
> > >  /*
> > >   * We can represent the historical contribution to runnable average as the
> > >   * coefficients of a geometric series. To do this we sub-divide our runnable
> > > @@ -2553,10 +2551,10 @@ static __always_inline int
> > >  __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> > >  		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
> > >  {
> > > -	u64 delta, scaled_delta, periods;
> > > +	u64 delta, periods;
> > >  	u32 contrib;
> > > -	unsigned int delta_w, scaled_delta_w, decayed = 0;
> > > -	unsigned long scale_freq, scale_cpu;
> > > +	unsigned int delta_w, decayed = 0;
> > > +	unsigned long scaled_weight = 0, scale_freq, scale_freq_cpu = 0;
> > >  
> > >  	delta = now - sa->last_update_time;
> > >  	/*
> > > @@ -2577,8 +2575,13 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> > >  		return 0;
> > >  	sa->last_update_time = now;
> > >  
> > > -	scale_freq = arch_scale_freq_capacity(NULL, cpu);
> > > -	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
> > > +	if (weight || running)
> > > +		scale_freq = arch_scale_freq_capacity(NULL, cpu);
> > > +	if (weight)
> > > +		scaled_weight = weight * scale_freq >> SCHED_CAPACITY_SHIFT;
> > > +	if (running)
> > > +		scale_freq_cpu = scale_freq * arch_scale_cpu_capacity(NULL, cpu)
> > > +				 >> SCHED_CAPACITY_SHIFT;
> > 
> > Maybe the below question is stupid :)
> > 
> > Why not calculate scaled_weight depending on the cpu's capacity as well?
> > Something like: scaled_weight = weight * scale_freq_cpu.
> 
> IMHO, we should not scale load by cpu capacity since load isn't really
> comparable to capacity. It is runnable time based (not running time like
> utilization) and the idea is to use it for balancing when the system is
> fully utilized. When the system is fully utilized we can't say anything
> about the true compute demands of a task; it may get exactly the cpu
> time it needs or it may need much more. Hence it doesn't really make
> sense to scale the demand by the capacity of the cpu. Two busy loops on
> cpus with different cpu capacities should have the same load as they
> have the same compute demands.
> 
> I mentioned this briefly in the commit message of patch 3 in this
> series.
> 
> Makes sense?

Yeah, after your reminder, I recognise that load only includes runnable
time on the rq and does not include running time.

Thanks,
Leo Yan
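To make the busy-loop argument concrete, a similarly hypothetical sketch
(weight 1024 stands for a nice-0 task; the capacities are illustrative): with
the patch, load is frequency-scaled only, so two always-runnable tasks report
the same load regardless of cpu capacity, whereas folding cpu capacity into
the weight, as suggested in the question, would halve the reported demand on
a half-capacity cpu:

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10

int main(void)
{
	unsigned long weight = 1024;		/* nice-0 task weight */
	unsigned long scale_freq = 1024;	/* both cpus at max frequency */
	unsigned long little_cap = 512;		/* a half-capacity cpu */

	/* As in the patch: load is scaled by frequency only, so a busy
	 * loop reports the same demand on either cpu: 1024. */
	unsigned long scaled_weight =
		weight * scale_freq >> SCHED_CAPACITY_SHIFT;

	/* Hypothetical capacity-scaled load: the identical busy loop on
	 * the little cpu would report only half the demand: 512. */
	unsigned long scale_freq_cpu =
		scale_freq * little_cap >> SCHED_CAPACITY_SHIFT;
	unsigned long capacity_scaled_weight =
		weight * scale_freq_cpu >> SCHED_CAPACITY_SHIFT;

	printf("frequency-scaled load: %lu\n", scaled_weight);
	printf("capacity-scaled load:  %lu\n", capacity_scaled_weight);
	return 0;
}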