From mboxrd@z Thu Jan 1 00:00:00 1970
From: Oliver Upton
Date: Tue, 3 Aug 2021 14:18:17 -0700
Subject: Re: [PATCH v5 02/13] KVM: x86: Refactor tsc synchronization code
To: Sean Christopherson
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, Paolo Bonzini,
 Marc Zyngier, Peter Shier, Jim Mattson, David Matlack, Ricardo Koller,
 Jing Zhang, Raghavendra Rao Anata, James Morse, Alexandru Elisei,
 Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Andrew Jones
References: <20210729173300.181775-1-oupton@google.com> <20210729173300.181775-3-oupton@google.com>
X-Mailing-List: kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Fri, Jul 30, 2021 at 11:08 AM Sean Christopherson wrote:
>
> On Thu, Jul 29, 2021, Oliver Upton wrote:
> > Refactor kvm_synchronize_tsc to make a new function that allows callers
> > to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
> > for the sake of participating in TSC synchronization.
> >
> > This changes the locking semantics around TSC writes.
>
> "refactor" and "changes the locking semantics" are somewhat contradictory. The
> correct way to do this is to first change the locking semantics, then extract the
> helper you want. That makes review and archaeology easier, and isolates the
> locking change in case it isn't so safe after all.

Indeed, it was mere laziness doing so :)

> > Writes to the TSC will now take the pvclock gtod lock while holding the tsc
> > write lock, whereas before these locks were disjoint.
> >
> > Reviewed-by: David Matlack
> > Signed-off-by: Oliver Upton
> > ---
> > +/*
> > + * Infers attempts to synchronize the guest's tsc from host writes. Sets the
> > + * offset for the vcpu and tracks the TSC matching generation that the vcpu
> > + * participates in.
> > + *
> > + * Must hold kvm->arch.tsc_write_lock to call this function.
>
> Drop this blurb, lockdep assertions exist for a reason :-)

Ack.

> > + */
> > +static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
> > +				  u64 ns, bool matched)
> > +{
> > +	struct kvm *kvm = vcpu->kvm;
> > +	bool already_matched;
>
> Eww, not your code, but "matched" and "already_matched" are not helpful names,
> e.g. they don't provide a clue as to _what_ matched, and thus don't explain why
> there are two separate variables. And I would expect an "already" variant to
> come in from the caller, not the other way 'round.
>
> matched => freq_matched
> already_matched => gen_matched

Yeah, everything this series touches is a bit messy. I greedily avoided
the pile of cleanups that are needed, but alas...

> > +	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
>
> I believe this can be spin_lock(), since AFAICT the caller _must_ disable IRQs
> when taking tsc_write_lock, i.e. we know IRQs are disabled at this point.

Definitely.

> > +	if (!matched) {
> > +		/*
> > +		 * We split periods of matched TSC writes into generations.
> > +		 * For each generation, we track the original measured
> > +		 * nanosecond time, offset, and write, so if TSCs are in
> > +		 * sync, we can match exact offset, and if not, we can match
> > +		 * exact software computation in compute_guest_tsc()
> > +		 *
> > +		 * These values are tracked in kvm->arch.cur_xxx variables.
> > +		 */
> > +		kvm->arch.nr_vcpus_matched_tsc = 0;
> > +		kvm->arch.cur_tsc_generation++;
> > +		kvm->arch.cur_tsc_nsec = ns;
> > +		kvm->arch.cur_tsc_write = tsc;
> > +		kvm->arch.cur_tsc_offset = offset;
>
> IMO, adjusting kvm->arch.cur_tsc_* belongs outside of pvclock_gtod_sync_lock.
> Based on the existing code, it is protected by tsc_write_lock. I don't care
> about the extra work while holding pvclock_gtod_sync_lock, but it's very confusing
> to see code that reads variables outside of a lock, then take a lock and write
> those same variables without first rechecking.
>
> > +		matched = false;
>
> What's the point of clearing "matched"? It's already false...

None, besides just yanking the old chunk of code :)

> > +	} else if (!already_matched) {
> > +		kvm->arch.nr_vcpus_matched_tsc++;
> > +	}
> > +
> > +	kvm_track_tsc_matching(vcpu);
> > +	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
> > +}
> > +

--
Thanks,
Oliver