From: Mingwei Zhang <mizhang@google.com>
To: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>,
Xiong Zhang <xiong.y.zhang@linux.intel.com>,
pbonzini@redhat.com, peterz@infradead.org, kan.liang@intel.com,
zhenyuw@linux.intel.com, jmattson@google.com,
kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
linux-kernel@vger.kernel.org, zhiyuan.lv@intel.com,
eranian@google.com, irogers@google.com, samantha.alt@intel.com,
like.xu.linux@gmail.com, chao.gao@intel.com
Subject: Re: [RFC PATCH 23/41] KVM: x86/pmu: Implement the save/restore of PMU state for Intel CPU
Date: Sun, 14 Apr 2024 23:06:17 -0700
Message-ID: <CAL715WJXWQgfzgh8KqL+pAzeqL+dkF6imfRM37nQ6PkZd09mhQ@mail.gmail.com>
In-Reply-To: <4c47b975-ad30-4be9-a0a9-f0989d1fa395@linux.intel.com>
On Fri, Apr 12, 2024 at 9:25 PM Mi, Dapeng <dapeng1.mi@linux.intel.com> wrote:
>
>
> On 4/13/2024 11:34 AM, Mingwei Zhang wrote:
> > On Sat, Apr 13, 2024, Mi, Dapeng wrote:
> >> On 4/12/2024 5:44 AM, Sean Christopherson wrote:
> >>> On Fri, Jan 26, 2024, Xiong Zhang wrote:
> >>>> From: Dapeng Mi <dapeng1.mi@linux.intel.com>
> >>>>
> >>>> Implement the save/restore of PMU state for the passthrough PMU on Intel.
> >>>> In passthrough mode, KVM exclusively owns the PMU HW while control flow is
> >>>> within the scope of the passthrough PMU. Thus, KVM needs to save the host
> >>>> PMU state and gain full HW PMU ownership. Conversely, the host regains
> >>>> ownership of the PMU HW from KVM when control flow leaves the scope of the
> >>>> passthrough PMU.
> >>>>
> >>>> Implement PMU context switches for Intel CPUs and opportunistically use
> >>>> rdpmcl() instead of rdmsrl() when reading counters, since the former has
> >>>> lower latency on Intel CPUs.
> >>>>
> >>>> Co-developed-by: Mingwei Zhang <mizhang@google.com>
> >>>> Signed-off-by: Mingwei Zhang <mizhang@google.com>
> >>>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> >>>> ---
> >>>> arch/x86/kvm/vmx/pmu_intel.c | 73 ++++++++++++++++++++++++++++++++++++
> >>>> 1 file changed, 73 insertions(+)
> >>>>
> >>>> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> >>>> index 0d58fe7d243e..f79bebe7093d 100644
> >>>> --- a/arch/x86/kvm/vmx/pmu_intel.c
> >>>> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> >>>> @@ -823,10 +823,83 @@ void intel_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
> >>>> static void intel_save_pmu_context(struct kvm_vcpu *vcpu)
> >>> I would prefer there be a "guest" in there somewhere, e.g. intel_save_guest_pmu_context().
> >> Yeah. It looks clearer.
> >>>> {
> >>>> + struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> >>>> + struct kvm_pmc *pmc;
> >>>> + u32 i;
> >>>> +
> >>>> + if (pmu->version != 2) {
> >>>> + pr_warn("only PerfMon v2 is supported for passthrough PMU");
> >>>> + return;
> >>>> + }
> >>>> +
> >>>> + /* Global ctrl register is already saved at VM-exit. */
> >>>> + rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, pmu->global_status);
> >>>> + /* Clear hardware MSR_CORE_PERF_GLOBAL_STATUS MSR, if non-zero. */
> >>>> + if (pmu->global_status)
> >>>> + wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, pmu->global_status);
> >>>> +
> >>>> + for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
> >>>> + pmc = &pmu->gp_counters[i];
> >>>> + rdpmcl(i, pmc->counter);
> >>>> + rdmsrl(i + MSR_ARCH_PERFMON_EVENTSEL0, pmc->eventsel);
> >>>> + /*
> >>>> + * Clear hardware PERFMON_EVENTSELx and its counter to avoid
> >>>> + * leakage and also avoid this guest GP counter getting accidentally
> >>>> + * enabled while the host is running and enables global ctrl.
> >>>> + */
> >>>> + if (pmc->eventsel)
> >>>> + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, 0);
> >>>> + if (pmc->counter)
> >>>> + wrmsrl(MSR_IA32_PMC0 + i, 0);
> >>> This doesn't make much sense. The kernel already has full access to the guest;
> >>> I don't see what is gained by zeroing out the MSRs just to hide them from perf.
> >> It's necessary to clear the EVENTSELx MSRs for both GP and fixed counters.
> >> Consider this case: the guest uses GP counter 2, but the host doesn't. If
> >> the EVENTSEL2 MSR is not cleared here, GP counter 2 would later be enabled
> >> unexpectedly on the host, since host perf always enables all valid bits
> >> in the PERF_GLOBAL_CTRL MSR. That would cause issues.
> >>
> >> Yeah, the clearing of the PMCx MSRs should be unnecessary.
> >>
> > Why is clearing the PMCx MSRs unnecessary? Do we want to leak counter
> > values to the host? No. Not in cloud usage.
>
> No, this place is clearing the guest counter value, not the host
> counter value. The host always has a way to see guest values in a normal
> VM if it wants to. I don't see the necessity; it's just overkill and
> introduces extra overhead from writing MSRs.
>
I am curious how the perf subsystem solves this problem. Does the perf
subsystem on the host scrub only the selector, but not the counter value,
when doing a context switch?
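
For illustration only, here is a rough sketch of what I imagine a
host-side switch-out could look like on a PerfMon v2 part if only the
selector were scrubbed. This is my guess at the semantics for the sake
of discussion, not the actual perf code; the helper and its name are
hypothetical:

  /*
   * Hypothetical sketch (not the real perf code): switch out one GP
   * counter by scrubbing only its selector, leaving the count in place.
   */
  static void host_switch_out_gp_counter(int idx, u64 *saved_count)
  {
          u64 global_ctrl;

          /* Stop the counter: clear its enable bit in GLOBAL_CTRL. */
          rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl & ~BIT_ULL(idx));

          /* Scrub the selector so a blanket global enable can't revive it. */
          wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + idx, 0);

          /* Save the count; the counter register itself is left as-is. */
          rdpmcl(idx, *saved_count);
  }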
>
> >
> > Please make changes to this patch with **extreme** caution.
> >
> > From our past experience, if there is a bug somewhere, there is
> > normally a catch here.
> >
> > Thanks.
> > -Mingwei
> >>> Similarly, if perf enables a counter via PERF_GLOBAL_CTRL without first
> >>> restoring the event selector, we've got problems.
> >>>
> >>> Same thing for the fixed counters below. Can't this just be?
> >>>
> >>> for (i = 0; i < pmu->nr_arch_gp_counters; i++)
> >>> rdpmcl(i, pmu->gp_counters[i].counter);
> >>>
> >>> for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
> >>> rdpmcl(INTEL_PMC_FIXED_RDPMC_BASE | i,
> >>> pmu->fixed_counters[i].counter);
> >>>
> >>>> + }
> >>>> +
> >>>> + rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
> >>>> + /*
> >>>> + * Clear hardware FIXED_CTR_CTRL MSR to avoid information leakage and
> >>>> + * also avoid these guest fixed counters getting accidentally
> >>>> + * enabled while the host is running and enables global ctrl.
> >>>> + */
> >>>> + if (pmu->fixed_ctr_ctrl)
> >>>> + wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
> >>>> + for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> >>>> + pmc = &pmu->fixed_counters[i];
> >>>> + rdpmcl(INTEL_PMC_FIXED_RDPMC_BASE | i, pmc->counter);
> >>>> + if (pmc->counter)
> >>>> + wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
> >>>> + }
> >>>> }
> >>>> static void intel_restore_pmu_context(struct kvm_vcpu *vcpu)
> >>>> {
> >>>> + struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> >>>> + struct kvm_pmc *pmc;
> >>>> + u64 global_status;
> >>>> + int i;
> >>>> +
> >>>> + if (pmu->version != 2) {
> >>>> + pr_warn("only PerfMon v2 is supported for passthrough PMU");
> >>>> + return;
> >>>> + }
> >>>> +
> >>>> + /* Clear host global_ctrl and global_status MSR if non-zero. */
> >>>> + wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
> >>> Why? PERF_GLOBAL_CTRL will be auto-loaded at VM-Enter, why do it now?
> >> As the previous comments say, host perf always enables all counters in
> >> PERF_GLOBAL_CTRL by default. The reason to clear PERF_GLOBAL_CTRL here is
> >> to ensure all counters are in a disabled state, so that the later counter
> >> manipulation (writing MSRs) won't cause any race condition or unexpected
> >> behavior on the HW.
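
To spell out the ordering I believe is intended here (my paraphrase for
discussion, assuming a PerfMon v2 CPU, not the patch itself):

  /*
   * 1) Globally disable, so the per-counter writes below cannot race
   *    with a live counter.
   */
  wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);

  /* 2) Reload per-counter state while nothing is counting. */
  for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
          wrmsrl(MSR_IA32_PMC0 + i, pmu->gp_counters[i].counter);
          wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i,
                 pmu->gp_counters[i].eventsel);
  }

  /* 3) The guest's PERF_GLOBAL_CTRL is then auto-loaded at VM-Enter. */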
> >>
> >>
> >>>> + rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, global_status);
> >>>> + if (global_status)
> >>>> + wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, global_status);
> >>> This seems especially silly; isn't the full MSR being written below? Or am I
> >>> misunderstanding how these things work?
> >> I think Jim's comment has already explained why we need to do this.
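
For readers following along, my understanding of the MSR mechanics here
(per the SDM): GLOBAL_STATUS is read-only, so reconstructing the guest's
status image takes a clear-then-set pair through its companion MSRs,
roughly:

  /* Clear whatever bits the host left set (write-1-to-clear). */
  rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, global_status);
  if (global_status)
          wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, global_status);

  /* Then set exactly the guest's bits (write-1-to-set). */
  wrmsrl(MSR_CORE_PERF_GLOBAL_STATUS_SET, pmu->global_status);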
> >>
> >>>> + wrmsrl(MSR_CORE_PERF_GLOBAL_STATUS_SET, pmu->global_status);
> >>>> +
> >>>> + for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
> >>>> + pmc = &pmu->gp_counters[i];
> >>>> + wrmsrl(MSR_IA32_PMC0 + i, pmc->counter);
> >>>> + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, pmc->eventsel);
> >>>> + }
> >>>> +
> >>>> + wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
> >>>> + for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> >>>> + pmc = &pmu->fixed_counters[i];
> >>>> + wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, pmc->counter);
> >>>> + }
> >>>> }
> >>>> struct kvm_pmu_ops intel_pmu_ops __initdata = {
> >>>> --
> >>>> 2.34.1
> >>>>