From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751591AbbINTPX (ORCPT); Mon, 14 Sep 2015 15:15:23 -0400
Received: from g2t4623.austin.hp.com ([15.73.212.78]:41340 "EHLO
	g2t4623.austin.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750951AbbINTPW (ORCPT);
	Mon, 14 Sep 2015 15:15:22 -0400
Message-ID: <55F71CC8.3050306@hpe.com>
Date: Mon, 14 Sep 2015 15:15:20 -0400
From: Waiman Long
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.12) Gecko/20130109
	Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra
CC: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
	linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
	Davidlohr Bueso
Subject: Re: [PATCH v6 5/6] locking/pvqspinlock: Allow 1 lock stealing attempt
References: <1441996658-62854-1-git-send-email-Waiman.Long@hpe.com>
	<1441996658-62854-6-git-send-email-Waiman.Long@hpe.com>
	<20150914140051.GT18489@twins.programming.kicks-ass.net>
In-Reply-To: <20150914140051.GT18489@twins.programming.kicks-ass.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/14/2015 10:00 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
>> This patch allows one attempt for the lock waiter to steal the lock
>> when entering the PV slowpath. This helps to reduce the performance
>> penalty caused by lock waiter preemption while not having much of
>> the downsides of a real unfair lock.
>>
>> @@ -415,8 +458,12 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>>
>>  	for (;; waitcnt++) {
>>  		for (loop = SPIN_THRESHOLD; loop; loop--) {
>> -			if (!READ_ONCE(l->locked))
>> -				return;
>> +			/*
>> +			 * Try to acquire the lock when it is free.
>> +			 */
>> +			if (!READ_ONCE(l->locked) &&
>> +			    (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0))
>> +				goto gotlock;
>>  			cpu_relax();
>>  		}
>>
> This isn't _once_, this is once per 'wakeup'. And note that interrupts
> unrelated to the kick can equally wake the vCPU up.
>

Oh! There is a minor bug here: I shouldn't need to have a second
READ_ONCE() call at this point. As this is the queue head, finding the
lock free entitles the vCPU to own the lock. However, because of lock
stealing, I can't just write a 1 to the lock and assume everything is
all set. That is why I need to use cmpxchg() to make sure that the
queue head vCPU can actually get the lock without it being stolen from
underneath it. I don't count that as lock stealing, as the queue head
is the rightful owner of the lock. I am sorry, I should have added a
comment to clarify that. Will do so in the next update.

> void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
> 	:
> 	/*
> 	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> 	 * attempt the trylock once more in the hope someone let go while we
> 	 * weren't watching.
> 	 */
> 	if (queued_spin_trylock(lock))
> 		goto release;

This is the only place where I consider lock stealing to happen. Again,
I should add a comment in pv_queued_spin_trylock_unfair() to say where
it will be called from.

Cheers,
Longman
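
The acquire-when-free step discussed above can be illustrated with a
minimal stand-alone sketch. This is not the kernel implementation: only
the locked byte, _Q_LOCKED_VAL, SPIN_THRESHOLD and the shape of the
inner loop come from the quoted patch; the struct and function names
are made up for illustration, and C11 atomics stand in for the kernel's
READ_ONCE()/cmpxchg().

#include <stdatomic.h>
#include <stdbool.h>

#define _Q_LOCKED_VAL	1		/* value stored by the lock holder */
#define SPIN_THRESHOLD	(1 << 15)	/* spin budget before giving up */

struct pv_lock_sketch {
	_Atomic unsigned char locked;	/* 0 = free, _Q_LOCKED_VAL = held */
};

/*
 * Queue-head acquisition attempt: seeing the lock free is only a hint,
 * because a concurrently arriving waiter may steal the lock between the
 * plain read and the store.  Ownership is decided by whoever wins the
 * compare-and-swap, so the head cannot simply write a 1.
 */
static bool head_try_acquire(struct pv_lock_sketch *l)
{
	for (int loop = SPIN_THRESHOLD; loop; loop--) {
		unsigned char expected = 0;

		/* cheap read first; only attempt the CAS when it looks free */
		if (atomic_load_explicit(&l->locked, memory_order_relaxed) == 0 &&
		    atomic_compare_exchange_strong(&l->locked, &expected,
						   _Q_LOCKED_VAL))
			return true;	/* this vCPU is now the lock holder */

		/* the kernel would cpu_relax() here before retrying */
	}
	return false;	/* spin budget exhausted; the kernel would pv_wait() */
}

The sketch makes the same point as the reply above: with lock stealing
permitted, observing locked == 0 only licenses an attempt, and the
cmpxchg() is what actually decides ownership for the queue head.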