From: Waiman Long <waiman.long@hpe.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>,
Davidlohr Bueso <dave@stgolabs.net>
Subject: Re: [PATCH v6 5/6] locking/pvqspinlock: Allow 1 lock stealing attempt
Date: Mon, 14 Sep 2015 15:15:20 -0400
Message-ID: <55F71CC8.3050306@hpe.com>
In-Reply-To: <20150914140051.GT18489@twins.programming.kicks-ass.net>
On 09/14/2015 10:00 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
>> This patch allows one attempt for the lock waiter to steal the lock
>> when entering the PV slowpath. This helps to reduce the performance
>> penalty caused by lock waiter preemption while not having much of
>> the downsides of a real unfair lock.
>> @@ -415,8 +458,12 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>>
>> for (;; waitcnt++) {
>> for (loop = SPIN_THRESHOLD; loop; loop--) {
>> - if (!READ_ONCE(l->locked))
>> - return;
>> + /*
>> + * Try to acquire the lock when it is free.
>> + */
>> + if (!READ_ONCE(l->locked) &&
>> + (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0))
>> + goto gotlock;
>> cpu_relax();
>> }
>>
> This isn't _once_, this is once per 'wakeup'. And note that interrupts
> unrelated to the kick can equally wake the vCPU up.
>
Oh! There is a minor bug here: I shouldn't need a second
READ_ONCE() call.
As this is the queue head, finding the lock free entitles the vCPU to
own the lock. However, because of lock stealing, I can't just write a 1
to the lock and assume everything is all set. That is why I need to use
cmpxchg() to make sure that the queue head vCPU can actually get the
lock without having it stolen from underneath. I don't count that as
lock stealing, as the queue head is the rightful owner of the lock.
Sorry, I should have added a comment to clarify that; I will do so in
the next update.
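To illustrate the point, here is a simplified userspace sketch (not the
kernel code; Q_LOCKED_VAL and head_try_acquire() are made-up stand-ins
for _Q_LOCKED_VAL and the queue head's inner spin):

```c
#include <stdatomic.h>

#define Q_LOCKED_VAL 1	/* lock word is 1 when held, 0 when free */

/*
 * Seeing the lock word free does NOT entitle the queue head to a plain
 * store: an unfair trylock (lock stealing) may set the word to 1 in the
 * window between the load and the store. The cmpxchg closes that window
 * by succeeding only if the word is still 0.
 */
static int head_try_acquire(atomic_int *locked)
{
	int expected = 0;

	/* Cheap read first to avoid a useless atomic RMW while held. */
	if (atomic_load_explicit(locked, memory_order_relaxed) != 0)
		return 0;
	return atomic_compare_exchange_strong(locked, &expected,
					      Q_LOCKED_VAL);
}
```

A stealer that wins the race makes the cmpxchg fail, so the queue head
just goes back to spinning instead of silently overwriting the
stealer's ownership.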
> void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
> 	:
> 	/*
> 	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> 	 * attempt the trylock once more in the hope someone let go while we
> 	 * weren't watching.
> 	 */
> 	if (queued_spin_trylock(lock))
> 		goto release;
This is the only place where I consider lock stealing to happen. Again,
I should add a comment in pv_queued_spin_trylock_unfair() to note where
it will be called from.
Cheers,
Longman