From: Like Xu
To: peterz@infradead.org, Sean Christopherson, Paolo Bonzini
Cc: eranian@google.com, andi@firstfloor.org, kan.liang@linux.intel.com,
    wei.w.wang@intel.com, Wanpeng Li, Vitaly Kuznetsov, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Like Xu, Luwei Kang
Subject: [PATCH v4 09/16] KVM: x86/pmu: Add PEBS_DATA_CFG MSR emulation to support adaptive PEBS
Date: Mon, 29 Mar 2021 13:41:30 +0800
Message-Id: <20210329054137.120994-10-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210329054137.120994-1-like.xu@linux.intel.com>
References: <20210329054137.120994-1-like.xu@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, adaptive PEBS
is supported. The PEBS_DATA_CFG MSR and the adaptive record enable bits
(IA32_PERFEVTSELx.Adaptive_Record and IA32_FIXED_CTR_CTRL.FCx_Adaptive_Record)
are also supported.

Adaptive PEBS provides software the capability to configure the PEBS
records to capture only the data of interest, keeping the record size
compact. An overflow of PMCx results in generation of an adaptive PEBS
record with state information based on the selections specified in
MSR_PEBS_DATA_CFG (Memory Info [bit 0], GPRs [bit 1], XMMs [bit 2],
LBRs [bit 3], and LBR Entries [bits 31:24]). By default, the PEBS record
only contains the Basic group.

When guest adaptive PEBS is enabled, the PEBS_DATA_CFG MSR is added to
the MSR list returned by perf_guest_switch_msr() and is switched across
the VMX transitions just like the CORE_PERF_GLOBAL_CTRL MSR.
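[Editor's note, not part of the patch: a minimal, self-contained C sketch of
the MSR_PEBS_DATA_CFG bit layout described above. The PEBS_DATACFG_* names
are defined locally for illustration (they mirror the host perf code's
naming only loosely) and the encoded value is purely an example.]

/* Illustrative sketch of the MSR_PEBS_DATA_CFG layout; not kernel code. */
#include <stdint.h>

#define PEBS_DATACFG_MEMINFO   (1ULL << 0)   /* Memory Info group [bit 0]   */
#define PEBS_DATACFG_GP        (1ULL << 1)   /* GPRs group [bit 1]          */
#define PEBS_DATACFG_XMMS      (1ULL << 2)   /* XMMs group [bit 2]          */
#define PEBS_DATACFG_LBRS      (1ULL << 3)   /* LBRs group [bit 3]          */
#define PEBS_DATACFG_LBR_SHIFT 24            /* LBR Entries field [31:24]   */

/* Build an example PEBS_DATA_CFG value: GPRs plus LBR records, with a
 * value of 8 placed in the LBR Entries field.
 */
static inline uint64_t pebs_data_cfg_example(void)
{
	return PEBS_DATACFG_GP | PEBS_DATACFG_LBRS |
	       ((uint64_t)8 << PEBS_DATACFG_LBR_SHIFT);
}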
Co-developed-by: Luwei Kang
Signed-off-by: Luwei Kang
Signed-off-by: Like Xu
---
 arch/x86/events/intel/core.c    | 11 ++++++++++-
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/pmu_intel.c    | 16 ++++++++++++++++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 7f3821a59b84..3bbdfc4f6931 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3844,6 +3844,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 	struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;
 	struct debug_store *ds = __this_cpu_read(cpu_hw_events.ds);
 	struct kvm_pmu *pmu = (struct kvm_pmu *)data;
+	bool baseline = x86_pmu.intel_cap.pebs_baseline;
 
 	arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
 	arr[0].host = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
@@ -3863,6 +3864,12 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 		arr[2].host = (unsigned long)ds;
 		arr[2].guest = pmu->ds_area;
 
+		if (baseline) {
+			arr[3].msr = MSR_PEBS_DATA_CFG;
+			arr[3].host = cpuc->pebs_data_cfg;
+			arr[3].guest = pmu->pebs_data_cfg;
+		}
+
 		/*
 		 * If PMU counter has PEBS enabled it is not enough to
 		 * disable counter on a guest entry since PEBS memory
@@ -3879,9 +3886,11 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 		else {
 			arr[1].guest = arr[1].host;
 			arr[2].guest = arr[2].host;
+			if (baseline)
+				arr[3].guest = arr[3].host;
 		}
 
-		*nr = 3;
+		*nr = baseline ? 4 : 3;
 	}
 
 	return arr;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2275cc144f58..94366da2dfee 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -463,6 +463,8 @@ struct kvm_pmu {
 	u64 ds_area;
 	u64 pebs_enable;
 	u64 pebs_enable_mask;
+	u64 pebs_data_cfg;
+	u64 pebs_data_cfg_mask;
 
 	/*
 	 * The gate to release perf_events not marked in
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 77d30106abca..7f18c760dbae 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -226,6 +226,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	case MSR_IA32_DS_AREA:
 		ret = guest_cpuid_has(vcpu, X86_FEATURE_DS);
 		break;
+	case MSR_PEBS_DATA_CFG:
+		ret = vcpu->arch.perf_capabilities & PERF_CAP_PEBS_BASELINE;
+		break;
 	default:
 		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -379,6 +382,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DS_AREA:
 		msr_info->data = pmu->ds_area;
 		return 0;
+	case MSR_PEBS_DATA_CFG:
+		msr_info->data = pmu->pebs_data_cfg;
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -452,6 +458,14 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		pmu->ds_area = data;
 		return 0;
+	case MSR_PEBS_DATA_CFG:
+		if (pmu->pebs_data_cfg == data)
+			return 0;
+		if (!(data & pmu->pebs_data_cfg_mask)) {
+			pmu->pebs_data_cfg = data;
+			return 0;
+		}
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -505,6 +519,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->reserved_bits = 0xffffffff00200000ull;
 	pmu->fixed_ctr_ctrl_mask = ~0ull;
 	pmu->pebs_enable_mask = ~0ull;
+	pmu->pebs_data_cfg_mask = ~0ull;
 
 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
 	if (!entry)
@@ -579,6 +594,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
 				pmu->fixed_ctr_ctrl_mask &=
 					~(1ULL << (INTEL_PMC_IDX_FIXED + i * 4));
+			pmu->pebs_data_cfg_mask = ~0xff00000full;
 		} else
 			pmu->pebs_enable_mask =
 				~((1ull << pmu->nr_arch_gp_counters) - 1);
 	} else {
-- 
2.29.2
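[Editor's note, illustration only, not part of the patch: in the
intel_pmu_refresh() hunk above, pebs_data_cfg_mask is set to ~0xff00000full
when PEBS baseline is exposed, i.e. only bits [3:0] and [31:24] are treated
as writable, and the MSR_PEBS_DATA_CFG case in intel_pmu_set_msr() accepts a
guest write only if no reserved bit is set. A standalone C sketch of that
check, with hypothetical example values:]

/* Standalone sketch of the reserved-bit check used for MSR_PEBS_DATA_CFG. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool pebs_data_cfg_write_ok(uint64_t data)
{
	/* Reserved bits: everything outside [3:0] and [31:24]. */
	const uint64_t pebs_data_cfg_mask = ~0xff00000fULL;

	return (data & pebs_data_cfg_mask) == 0;
}

int main(void)
{
	/* 1: only group bits [3:0] and an LBR Entries value in [31:24] are set. */
	printf("%d\n", pebs_data_cfg_write_ok(0x0800000fULL));
	/* 0: bit 4 is reserved, so this write would be rejected. */
	printf("%d\n", pebs_data_cfg_write_ok(0x10ULL));
	return 0;
}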