From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752297AbbINThs (ORCPT ); Mon, 14 Sep 2015 15:37:48 -0400
Received: from g1t6223.austin.hp.com ([15.73.96.124]:54764 "EHLO
	g1t6223.austin.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751902AbbINThr (ORCPT ); Mon, 14 Sep 2015 15:37:47 -0400
Message-ID: <55F721FC.4000504@hpe.com>
Date: Mon, 14 Sep 2015 15:37:32 -0400
From: Waiman Long
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra
CC: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
	linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
	Davidlohr Bueso
Subject: Re: [PATCH v6 6/6] locking/pvqspinlock: Queue node adaptive spinning
References: <1441996658-62854-1-git-send-email-Waiman.Long@hpe.com>
	<1441996658-62854-7-git-send-email-Waiman.Long@hpe.com>
	<20150914141012.GV18489@twins.programming.kicks-ass.net>
In-Reply-To: <20150914141012.GV18489@twins.programming.kicks-ass.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/14/2015 10:10 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:38PM -0400, Waiman Long wrote:
>> In an overcommitted guest where some vCPUs have to be halted to make
>> forward progress in other areas, it is highly likely that a vCPU later
>> in the spinlock queue will be spinning while the ones earlier in the
>> queue would have been halted. The spinning in the later vCPUs is then
>> just a waste of precious CPU cycles because they are not going to
>> get the lock soon as the earlier ones have to be woken up and take
>> their turn to get the lock.
>>
>> This patch implements an adaptive spinning mechanism where the vCPU
>> will call pv_wait() if the following conditions are true:
>>
>> 1) the vCPU has not been halted before;
>> 2) the previous vCPU is not running.
>
> Why 1? For the mutex adaptive stuff we only care about the lock holder
> running, right?

The wait-early-once logic was there because of the kick-ahead patch: I
didn't want a recently kicked vCPU near the head of the queue to go back
to sleep too early. However, without kick-ahead, a woken-up vCPU should
now be at the queue head. Indeed, we can remove that check and simplify
the logic.

BTW, the queue head vCPU at pv_wait_head_and_lock() doesn't wait early;
it will spin for the full threshold, as there is no way for it to figure
out whether the lock holder is running or not.

Cheers,
Longman