From mboxrd@z Thu Jan  1 00:00:00 1970
From: Davidlohr Bueso
To: Peter Zijlstra, Ingo Molnar, Thomas Gleixner
Cc: "Paul E. McKenney", Davidlohr Bueso, linux-kernel@vger.kernel.org,
	Davidlohr Bueso
Subject: [PATCH -tip 3/3] locking/osq: Relax atomic semantics
Date: Mon, 14 Sep 2015 00:37:24 -0700
Message-Id: <1442216244-4409-3-git-send-email-dave@stgolabs.net>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1442216244-4409-1-git-send-email-dave@stgolabs.net>
References: <1442216244-4409-1-git-send-email-dave@stgolabs.net>

... by using acquire/release for ops around the lock->tail. As such,
weakly ordered archs can benefit from more relaxed use of barriers
when issuing atomics.

Signed-off-by: Davidlohr Bueso
---
 kernel/locking/osq_lock.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index dc85ee2..d092a0c 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -50,7 +50,7 @@ osq_wait_next(struct optimistic_spin_queue *lock,
 
 	for (;;) {
 		if (atomic_read(&lock->tail) == curr &&
-		    atomic_cmpxchg(&lock->tail, curr, old) == curr) {
+		    atomic_cmpxchg_acquire(&lock->tail, curr, old) == curr) {
 			/*
 			 * We were the last queued, we moved @lock back. @prev
 			 * will now observe @lock and will complete its
@@ -92,7 +92,11 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	node->next = NULL;
 	node->cpu = curr;
 
-	old = atomic_xchg(&lock->tail, curr);
+	/*
+	 * ACQUIRE semantics, pairs with corresponding RELEASE
+	 * in unlock() uncontended, or fastpath.
+	 */
+	old = atomic_xchg_acquire(&lock->tail, curr);
 	if (old == OSQ_UNLOCKED_VAL)
 		return true;
 
@@ -184,7 +188,8 @@ void osq_unlock(struct optimistic_spin_queue *lock)
 	/*
 	 * Fast path for the uncontended case.
 	 */
-	if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr))
+	if (likely(atomic_cmpxchg_release(&lock->tail, curr,
+					  OSQ_UNLOCKED_VAL) == curr))
 		return;
 
 	/*
-- 
2.1.4
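
For readers less familiar with the kernel's _acquire/_release atomic
variants, below is a minimal userspace sketch of the same pairing the
patch establishes, written against C11 <stdatomic.h> rather than the
kernel atomics. All names here (struct qlock, struct qnode, qspin_lock,
qspin_unlock) are illustrative, not kernel API; the sketch is a
simplified MCS-style queued lock and omits the OSQ unqueue-on-
need_resched() path that osq_wait_next() exists to serve.

/*
 * Illustrative only: a C11 analogue of the acquire/release pairing on
 * the tail pointer. Assumes nodes outlive their queue episode.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode {
	struct qnode *_Atomic next;
	atomic_bool locked;
};

struct qlock {
	struct qnode *_Atomic tail;	/* analogous to lock->tail */
};

static void qspin_lock(struct qlock *lock, struct qnode *node)
{
	struct qnode *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, true, memory_order_relaxed);

	/*
	 * ACQUIRE on the swap that makes us the new tail; it pairs with
	 * the RELEASE in qspin_unlock(), so the previous holder's
	 * critical section is visible once we own the lock.
	 */
	prev = atomic_exchange_explicit(&lock->tail, node,
					memory_order_acquire);
	if (!prev)
		return;		/* uncontended: lock taken */

	/* Publish ourselves to the predecessor, then wait for handoff. */
	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (atomic_load_explicit(&node->locked, memory_order_acquire))
		;		/* spin */
}

static void qspin_unlock(struct qlock *lock, struct qnode *node)
{
	struct qnode *next, *expected = node;

	/*
	 * RELEASE on the uncontended fast path: whoever acquires the
	 * lock next must observe our writes. The failure ordering can
	 * stay relaxed, as failure only routes us to the slow path.
	 */
	if (atomic_compare_exchange_strong_explicit(&lock->tail, &expected,
						    NULL,
						    memory_order_release,
						    memory_order_relaxed))
		return;

	/* A successor is (or will be) queued behind us; hand off. */
	while (!(next = atomic_load_explicit(&node->next,
					     memory_order_acquire)))
		;		/* spin until successor is published */
	atomic_store_explicit(&next->locked, false, memory_order_release);
}

As in the patch, the enqueueing exchange uses acquire and pairs with
the release on the uncontended unlock; nothing in between needs a full
barrier, which is where weakly ordered architectures win.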