From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755725AbbIKXOG (ORCPT );
	Fri, 11 Sep 2015 19:14:06 -0400
Received: from mx2.suse.de ([195.135.220.15]:32777 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755229AbbIKXOE (ORCPT );
	Fri, 11 Sep 2015 19:14:04 -0400
Date: Fri, 11 Sep 2015 16:13:55 -0700
From: Davidlohr Bueso
To: Waiman Long
Cc: Peter Zijlstra , Ingo Molnar , Thomas Gleixner ,
	"H. Peter Anvin" , x86@kernel.org, linux-kernel@vger.kernel.org,
	Scott J Norton , Douglas Hatch
Subject: Re: [PATCH v6 4/6] locking/pvqspinlock: Collect slowpath lock statistics
Message-ID: <20150911231355.GD19736@linux-q0g1.site>
References: <1441996658-62854-1-git-send-email-Waiman.Long@hpe.com>
 <1441996658-62854-5-git-send-email-Waiman.Long@hpe.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
In-Reply-To: <1441996658-62854-5-git-send-email-Waiman.Long@hpe.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 11 Sep 2015, Waiman Long wrote:

>A sample of statistics counts after system bootup (with vCPU
>overcommit) was:
>
>hash_hops_count=9001
>kick_latencies=138047878
>kick_unlock_count=9001
>kick_wait_count=9000
>spurious_wakeup=3
>wait_again_count=2
>wait_head_count=10
>wait_node_count=8994
>wake_latencies=713195944

Any reason you chose not to make the stats per-cpu? The locking numbers
don't have to be exact, so you can easily get away with it and incur
much less overhead than resorting to atomics. This obviously assumes
that reading/collecting the stats is done infrequently, such as between
workloads or at bootup, as you did.
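Something like the below is the kind of thing I have in mind. It is a
completely untested sketch, and the pv_kick_wait_count counter and
pvstat_* helpers are made-up names for illustration, but the per-cpu
machinery itself (DEFINE_PER_CPU/this_cpu_inc/per_cpu) is standard:

#include <linux/percpu.h>
#include <linux/cpumask.h>

/* One counter per cpu: increments stay in the local cacheline. */
static DEFINE_PER_CPU(unsigned long, pv_kick_wait_count);

/* Hot path: a plain per-cpu increment, no atomics, no lock prefix. */
static inline void pvstat_inc_kick_wait(void)
{
	this_cpu_inc(pv_kick_wait_count);
}

/*
 * Read side (e.g. a debugfs show method): sum across all cpus.
 * This can race with concurrent increments, but approximate
 * counts are perfectly acceptable for debug statistics.
 */
static unsigned long pvstat_kick_wait_sum(void)
{
	unsigned long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(pv_kick_wait_count, cpu);

	return sum;
}

Thanks,
Davidlohr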