From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757597AbbFQAil (ORCPT );
	Tue, 16 Jun 2015 20:38:41 -0400
Received: from mail.kernel.org ([198.145.29.136]:54465 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1757459AbbFQAhZ (ORCPT );
	Tue, 16 Jun 2015 20:37:25 -0400
From: Andy Lutomirski
To: x86@kernel.org
Cc: Borislav Petkov, Peter Zijlstra, John Stultz,
	linux-kernel@vger.kernel.org, Len Brown, Huang Rui,
	Denys Vlasenko, kvm@vger.kernel.org, Ralf Baechle,
	Andy Lutomirski
Subject: [PATCH v3 16/18] x86/tsc: In read_tsc, use rdtsc_ordered() instead of get_cycles()
Date: Tue, 16 Jun 2015 17:36:04 -0700
Message-Id:
X-Mailer: git-send-email 2.4.2
In-Reply-To:
References:
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

There are two logical changes here.  First, this removes a check for
cpu_has_tsc.  That check is unnecessary, as we don't register the TSC
as a clocksource on systems that have no TSC.  Second, it adds a
barrier, thus preventing observable non-monotonicity.

I suspect that the missing barrier was never a problem in practice
because system calls themselves were heavy enough barriers to prevent
user code from observing time warps due to speculation.  (Without the
corresponding barrier in the vDSO, however, non-monotonicity is easy
to detect.)

Signed-off-by: Andy Lutomirski
---
 arch/x86/kernel/tsc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 21d6e04e3e82..451bade0d320 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -961,7 +961,7 @@ static struct clocksource clocksource_tsc;
  */
 static cycle_t read_tsc(struct clocksource *cs)
 {
-	return (cycle_t)get_cycles();
+	return (cycle_t)rdtsc_ordered();
 }
 
 /*
-- 
2.4.2
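
As a rough user-space illustration of the ordering idea described in the
changelog: the sketch below is not the kernel's rdtsc_ordered()
implementation (which selects its fence through CPU-feature alternatives);
it assumes a GCC/Clang x86-64 toolchain providing the _mm_lfence() and
__rdtsc() intrinsics via x86intrin.h, and assumes LFENCE orders the TSC
read on the target CPU.

/*
 * Sketch of an "ordered" TSC read in user space.  The fence keeps the
 * RDTSC from being issued speculatively ahead of earlier instructions,
 * which is the kind of reordering that can make back-to-back clock
 * reads appear to go backwards.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static inline uint64_t rdtsc_ordered_demo(void)
{
	_mm_lfence();		/* order the TSC read behind prior work */
	return __rdtsc();
}

int main(void)
{
	uint64_t a = rdtsc_ordered_demo();
	uint64_t b = rdtsc_ordered_demo();

	printf("delta: %llu cycles\n", (unsigned long long)(b - a));
	return 0;
}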