From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 4/4] x86/cpuid: Fix handling of xsave dynamic leaves
Date: Thu, 2 May 2024 15:04:38 +0200	[thread overview]
Message-ID: <d87b31be-6c52-4271-a61f-bf31043f507d@suse.com> (raw)
In-Reply-To: <20240429182823.1130436-5-andrew.cooper3@citrix.com>

On 29.04.2024 20:28, Andrew Cooper wrote:
> If max leaf is greater than 0xd but xsave not available to the guest, then the
> current XSAVE size should not be filled in.  This is a latent bug for now as
> the guest max leaf is 0xd, but will become problematic in the future.

Why would this not be an issue when .max_leaf == 0xd, but .xsave == 0? Without
my "x86/CPUID: shrink max_{,sub}leaf fields according to actual leaf contents"
I don't think we shrink max_leaf to below 0xd when there's no xsave for the
guest?
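
Just to be sure we're talking about the same thing - purely illustrative on
my part, not quoting the patch, and with field names from memory - the kind
of guard in question would be something along these lines:

    /* Only fill in a dynamic XSAVE size when the guest can see XSAVE at all. */
    if ( p->basic.max_leaf >= XSTATE_CPUID && p->basic.xsave )
        res->b = xstate_uncompressed_size(v->arch.xcr0);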

> The comment concerning XSS state is wrong.  VT-x doesn't manage host/guest
> state automatically, but there is provision for "host only" bits to be set, so
> the implications are still accurate.
> 
> Introduce {xstate,hw}_compressed_size() helpers to mirror the uncompressed
> ones.
> 
> This in turn highlights a bug in xstate_init().  Defaulting this_cpu(xss) to ~0
> requires a forced write to clear it back out.  This in turn highlights that
> it's only a safe default on systems with XSAVES.

Well, yes, such an explicit write was expected to appear when some xsaves-
based component would actually be enabled. Much like the set_xcr0() there.
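
For reference, the caching which makes the ~0 default matter looks roughly
like this (from memory of xstate.c, so treat it as a sketch rather than the
exact code):

    void set_msr_xss(uint64_t xss)
    {
        uint64_t *this_xss = &this_cpu(xss);

        /* Skip the WRMSR when the cached value already matches. */
        if ( *this_xss != xss )
        {
            wrmsrl(MSR_IA32_XSS, xss);
            *this_xss = xss;
        }
    }

I.e. with the default at ~0 the first real write can't be elided, while
touching MSR_IA32_XSS is only valid on systems with XSAVES in the first
place.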

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> The more I think about it, the more I think that cross-checking with hardware
> is a bad move.  It's horribly expensive, and for supervisor states in
> particular, liable to interfere with functionality.

I agree, but I'd also like to see the cross-checking not dropped entirely.
Can't we arrange for it to happen during boot, before we enable any such
functionality? After all, even in debug builds it's not overly useful to do
the same cross-checking (i.e. for identical inputs) over and over again.
Doing this exhaustively is probably okay for the uncompressed values, but
might be a little too much if all possible combinations were needed to also
check the compressed values.
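
To make the boot-time idea concrete, a rough sketch of what I have in mind
(helper names taken from this series; the exact place in xstate_init(), the
constant 63, and the error handling are just placeholders):

    /* One-off cross-check of xstate_uncompressed_size() against hardware. */
    static void __init check_uncompressed_sizes(void)
    {
        uint64_t host_xcr0 = get_xcr0();
        uint64_t states = host_xcr0 & ~XSTATE_FP_SSE;
        unsigned int i;

        for_each_set_bit ( i, &states, 63 )
        {
            uint64_t xcr0 = XSTATE_FP_SSE | (1ul << i);

            /* Some components can't be enabled in isolation - skip those. */
            if ( !set_xcr0(xcr0) )
                continue;

            /* CPUID.0xD[0].EBX is the uncompressed size for the current XCR0. */
            if ( cpuid_count_ebx(XSTATE_CPUID, 0) !=
                 xstate_uncompressed_size(xcr0) )
                printk(XENLOG_ERR "xstate: size mismatch for component %u\n", i);
        }

        if ( !set_xcr0(host_xcr0) )
            BUG();
    }

Doing the same per component for the compressed form (leaf 1's EBX) would at
least cover each component's alignment, without enumerating all combinations.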

> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -614,6 +614,65 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
>      return size;
>  }
>  
> +static unsigned int hw_compressed_size(uint64_t xstates)
> +{
> +    uint64_t curr_xcr0 = get_xcr0(), curr_xss = get_msr_xss();
> +    unsigned int size;
> +    bool ok;
> +
> +    ok = set_xcr0(xstates & ~XSTATE_XSAVES_ONLY);
> +    ASSERT(ok);
> +    set_msr_xss(xstates & XSTATE_XSAVES_ONLY);
> +
> +    size = cpuid_count_ebx(XSTATE_CPUID, 1);
> +
> +    ok = set_xcr0(curr_xcr0);
> +    ASSERT(ok);
> +    set_msr_xss(curr_xss);
> +
> +    return size;
> +}
> +
> +unsigned int xstate_compressed_size(uint64_t xstates)
> +{
> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
> +
> +    if ( xstates == 0 ) /* TODO: clean up paths passing 0 in here. */
> +        return 0;
> +
> +    if ( xstates <= (X86_XCR0_SSE | X86_XCR0_FP) )

Same comment as on the earlier change regarding the (lack of) use of ....

> +        return size;
> +
> +    /*
> +     * For the compressed size, every component matters.  Some are
> +     * automatically rounded up to 64 first.
> +     */
> +    xstates &= ~XSTATE_FP_SSE;

... this constant up there.

> +    for_each_set_bit ( i, &xstates, 63 )
> +    {
> +        if ( test_bit(i, &xstate_align) )
> +            size = ROUNDUP(size, 64);
> +
> +        size += xstate_sizes[i];
> +    }

The comment is a little misleading: as you have it in code, it is not the
component's size that is rounded up, but its start offset. Maybe "Some have
their start automatically rounded up to a 64-byte boundary first"?
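(To illustrate with made-up numbers: if size has reached 0x348 when an
aligned component of size 0x40 is processed, it's the start that moves up to
ROUNDUP(0x348, 64) == 0x380, and size then becomes 0x3C0.)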

Jan


Thread overview: 13+ messages
2024-04-29 18:28 [PATCH v2 0/4] x86/xstate: Fixes to size calculations Andrew Cooper
2024-04-29 18:28 ` [PATCH v2 1/4] x86/hvm: Defer the size calculation in hvm_save_cpu_xsave_states() Andrew Cooper
2024-05-02 12:08   ` Jan Beulich
2024-04-29 18:28 ` [PATCH v2 2/4] x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size() Andrew Cooper
2024-05-02 12:19   ` Jan Beulich
2024-05-02 13:17     ` Andrew Cooper
2024-04-29 18:28 ` [PATCH v2 3/4] x86/cpu-policy: Simplify recalculate_xstate() Andrew Cooper
2024-05-02 12:39   ` Jan Beulich
2024-05-02 13:24     ` Andrew Cooper
2024-05-02 14:33       ` Jan Beulich
2024-04-29 18:28 ` [PATCH v2 4/4] x86/cpuid: Fix handling of xsave dynamic leaves Andrew Cooper
2024-05-02 13:04   ` Jan Beulich [this message]
2024-05-02 14:32     ` Andrew Cooper
