Date: Tue, 15 Sep 2015 10:24:30 +0200
From: Peter Zijlstra
To: Waiman Long
Cc: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
 linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
 Davidlohr Bueso
Subject: Re: [PATCH v6 5/6] locking/pvqspinlock: Allow 1 lock stealing attempt
Message-ID: <20150915082430.GX16853@twins.programming.kicks-ass.net>
References: <1441996658-62854-1-git-send-email-Waiman.Long@hpe.com>
 <1441996658-62854-6-git-send-email-Waiman.Long@hpe.com>
 <20150914140051.GT18489@twins.programming.kicks-ass.net>
 <55F71CC8.3050306@hpe.com>
In-Reply-To: <55F71CC8.3050306@hpe.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 14, 2015 at 03:15:20PM -0400, Waiman Long wrote:
> On 09/14/2015 10:00 AM, Peter Zijlstra wrote:
> >On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
> >>This patch allows one attempt for the lock waiter to steal the lock
                      ^^^
> >>when entering the PV slowpath. This helps to reduce the performance
> >>penalty caused by lock waiter preemption while not having much of
> >>the downsides of a real unfair lock.

> >>@@ -415,8 +458,12 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
> >>
> >>         for (;; waitcnt++) {
> >>                 for (loop = SPIN_THRESHOLD; loop; loop--) {
> >>-                        if (!READ_ONCE(l->locked))
> >>-                                return;
> >>+                        /*
> >>+                         * Try to acquire the lock when it is free.
> >>+                         */
> >>+                        if (!READ_ONCE(l->locked) &&
> >>+                            (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0))
> >>+                                goto gotlock;
> >>                         cpu_relax();
> >>                 }
> >>
> >This isn't _once_, this is once per 'wakeup'. And note that interrupts
> >unrelated to the kick can equally wake the vCPU up.

> > void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > {
> >         :
> >         /*
> >          * We touched a (possibly) cold cacheline in the per-cpu queue node;
> >          * attempt the trylock once more in the hope someone let go while we
> >          * weren't watching.
> >          */
> >         if (queued_spin_trylock(lock))
> >                 goto release;
>
> This is the only place where I consider lock stealing happens. Again, I
> should have a comment in pv_queued_spin_trylock_unfair() to say where it
> will be called.

But you're not adding that..

What you did add is a steal in pv_wait_head(), and it's not even once per
pv_wait_head(), it's inside the spin loop (I read it wrong yesterday).

So that makes the entire Changelog complete crap. There isn't _one_
attempt, and there is absolutely no fairness left.
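
To make the distinction concrete, here is a minimal sketch (simplified,
with invented function names; not code from the patch or the kernel)
contrasting what the changelog describes -- a single steal attempt on
entry -- with what the hunk above actually does, where the cmpxchg sits
inside the SPIN_THRESHOLD loop and is retried on every spin iteration
and again after every wakeup:

/*
 * Sketch only: pv_wait_head_once() and pv_wait_head_steal_in_loop() are
 * made-up names for illustration; l, SPIN_THRESHOLD, _Q_LOCKED_VAL,
 * READ_ONCE(), cmpxchg() and cpu_relax() are used as in the quoted code.
 */

/* What "one lock stealing attempt" would look like: try once on entry. */
static void pv_wait_head_once(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        if (!READ_ONCE(l->locked) &&
            cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
                return;                 /* the single steal */

        /* ... otherwise spin and pv_wait() without further cmpxchg ... */
}

/* What the hunk above does: a steal attempt on every spin iteration. */
static void pv_wait_head_steal_in_loop(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;
        int loop;

        for (;;) {
                for (loop = SPIN_THRESHOLD; loop; loop--) {
                        if (!READ_ONCE(l->locked) &&
                            cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
                                return; /* possibly many tries in */
                        cpu_relax();
                }
                /* pv_wait(), then retry: another SPIN_THRESHOLD attempts */
        }
}

The second shape makes an unbounded number of cmpxchg attempts per call
(SPIN_THRESHOLD per spin round, repeated after every wakeup), which is
the mismatch with the changelog that the last paragraph objects to.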