From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 18 Sep 2015 01:50:55 -0700
From: tip-bot for Waiman Long
Message-ID: 
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    scott.norton@hp.com, torvalds@linux-foundation.org,
    paulmck@linux.vnet.ibm.com, dave@stgolabs.net, peterz@infradead.org,
    doug.hatch@hp.com, Waiman.Long@hpe.com, tglx@linutronix.de,
    mingo@kernel.org, hpa@zytor.com
In-Reply-To: <1441996658-62854-3-git-send-email-Waiman.Long@hpe.com>
References: <1441996658-62854-3-git-send-email-Waiman.Long@hpe.com>
Subject: [tip:locking/core] locking/pvqspinlock: Kick the PV CPU unconditionally when _Q_SLOW_VAL
Git-Commit-ID: 93edc8bd7750ff3cae088bfca453ea73dc9004a4
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  93edc8bd7750ff3cae088bfca453ea73dc9004a4
Gitweb:     http://git.kernel.org/tip/93edc8bd7750ff3cae088bfca453ea73dc9004a4
Author:     Waiman Long
AuthorDate: Fri, 11 Sep 2015 14:37:34 -0400
Committer:  Ingo Molnar
CommitDate: Fri, 18 Sep 2015 09:27:29 +0200

locking/pvqspinlock: Kick the PV CPU unconditionally when _Q_SLOW_VAL

If
_Q_SLOW_VAL has been set, the vCPU state must have been vcpu_hashed. The
extra check at the end of __pv_queued_spin_unlock() is unnecessary and can
be removed.

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Davidlohr Bueso
Cc: Andrew Morton
Cc: Douglas Hatch
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Scott J Norton
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1441996658-62854-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar
---
 kernel/locking/qspinlock_paravirt.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index c8e6e9a..f0450ff 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -267,7 +267,6 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 		}
 
 		if (!lp) { /* ONCE */
-			WRITE_ONCE(pn->state, vcpu_hashed);
 			lp = pv_hash(lock, pn);
 
 			/*
@@ -275,11 +274,9 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 			 * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
 			 * we'll be sure to be able to observe our hash entry.
 			 *
-			 *   [S] pn->state
 			 *   [S] <hash>                 [Rmw] l->locked == _Q_SLOW_VAL
 			 *       MB                           RMB
 			 *   [RmW] l->locked = _Q_SLOW_VAL    [L] <unhash>
-			 *                                    [L] pn->state
 			 *
 			 * Matches the smp_rmb() in __pv_queued_spin_unlock().
 			 */
@@ -364,8 +361,7 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 	 * vCPU is harmless other than the additional latency in completing
 	 * the unlock.
 	 */
-	if (READ_ONCE(node->state) == vcpu_hashed)
-		pv_kick(node->cpu);
+	pv_kick(node->cpu);
 }
 
 /*
  * Include the architecture specific callee-save thunk of the