From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754205AbbINHhi (ORCPT );
	Mon, 14 Sep 2015 03:37:38 -0400
Received: from smtp2.provo.novell.com ([137.65.250.81]:56054 "EHLO
	smtp2.provo.novell.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752209AbbINHhg (ORCPT );
	Mon, 14 Sep 2015 03:37:36 -0400
From: Davidlohr Bueso
To: Peter Zijlstra, Ingo Molnar, Thomas Gleixner
Cc: "Paul E. McKenney", Davidlohr Bueso, linux-kernel@vger.kernel.org,
	Davidlohr Bueso
Subject: [PATCH -tip 1/3] locking/qrwlock: Rename ->lock to ->wait_lock
Date: Mon, 14 Sep 2015 00:37:22 -0700
Message-Id: <1442216244-4409-1-git-send-email-dave@stgolabs.net>
X-Mailer: git-send-email 2.1.4
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

... trivial, but reads a little nicer when we name our actual primitive 'lock'.

Signed-off-by: Davidlohr Bueso
---
 include/asm-generic/qrwlock_types.h | 4 ++--
 kernel/locking/qrwlock.c            | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/qrwlock_types.h b/include/asm-generic/qrwlock_types.h
index 4d76f24..0abc6b6 100644
--- a/include/asm-generic/qrwlock_types.h
+++ b/include/asm-generic/qrwlock_types.h
@@ -10,12 +10,12 @@

 typedef struct qrwlock {
 	atomic_t		cnts;
-	arch_spinlock_t		lock;
+	arch_spinlock_t		wait_lock;
 } arch_rwlock_t;

 #define	__ARCH_RW_LOCK_UNLOCKED {		\
 	.cnts = ATOMIC_INIT(0),			\
-	.lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
+	.wait_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
 }

 #endif /* __ASM_GENERIC_QRWLOCK_TYPES_H */
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index f17a3e3..fec0823 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -86,7 +86,7 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
 	/*
 	 * Put the reader into the wait queue
 	 */
-	arch_spin_lock(&lock->lock);
+	arch_spin_lock(&lock->wait_lock);

 	/*
 	 * The ACQUIRE semantics of the following spinning code ensure
@@ -99,7 +99,7 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
 	/*
 	 * Signal the next one in queue to become queue head
 	 */
-	arch_spin_unlock(&lock->lock);
+	arch_spin_unlock(&lock->wait_lock);
 }
 EXPORT_SYMBOL(queued_read_lock_slowpath);

@@ -112,7 +112,7 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	u32 cnts;

 	/* Put the writer into the wait queue */
-	arch_spin_lock(&lock->lock);
+	arch_spin_lock(&lock->wait_lock);

 	/* Try to acquire the lock directly if no reader is present */
 	if (!atomic_read(&lock->cnts) &&
@@ -144,6 +144,6 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 		cpu_relax_lowlatency();
 	}
 unlock:
-	arch_spin_unlock(&lock->lock);
+	arch_spin_unlock(&lock->wait_lock);
 }
 EXPORT_SYMBOL(queued_write_lock_slowpath);
-- 
2.1.4
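
[Illustration only, not part of the patch: a minimal userspace sketch of the
naming point above, using C11 atomics and a pthread spinlock in place of the
kernel's atomic_t/arch_spinlock_t. The toy_qrwlock type and
toy_write_lock_slowpath() are hypothetical stand-ins for struct qrwlock and
queued_write_lock_slowpath(), assuming only the structure shown in the diff.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* A rwlock-like type whose internal spinlock only queues slowpath waiters,
 * mirroring struct qrwlock after the rename. */
struct toy_qrwlock {
	atomic_int cnts;              /* reader/writer state word */
	pthread_spinlock_t wait_lock; /* serializes slowpath waiters */
};

static void toy_write_lock_slowpath(struct toy_qrwlock *lock)
{
	/* "&lock->wait_lock" makes it clear we take the internal queueing
	 * spinlock, not the (rw)lock itself; the old "&lock->lock" did not. */
	pthread_spin_lock(&lock->wait_lock);

	/* The real slowpath would acquire the lock proper via lock->cnts;
	 * just touch the counter here to keep the sketch self-contained. */
	atomic_fetch_add(&lock->cnts, 1);
	atomic_fetch_sub(&lock->cnts, 1);

	pthread_spin_unlock(&lock->wait_lock);
}

int main(void)
{
	struct toy_qrwlock lock;

	atomic_init(&lock.cnts, 0);
	pthread_spin_init(&lock.wait_lock, PTHREAD_PROCESS_PRIVATE);
	toy_write_lock_slowpath(&lock);
	pthread_spin_destroy(&lock.wait_lock);
	puts("slowpath exercised");
	return 0;
}

With two locks living in the same structure, reserving 'lock' for the qrwlock
itself (the function parameter) and 'wait_lock' for the internal queueing
spinlock keeps expressions such as arch_spin_lock(&lock->wait_lock)
unambiguous.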