From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755680AbbIBXeZ (ORCPT );
	Wed, 2 Sep 2015 19:34:25 -0400
Received: from mga11.intel.com ([192.55.52.93]:58890 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754865AbbIBXb3 (ORCPT );
	Wed, 2 Sep 2015 19:31:29 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.17,457,1437462000"; d="scan'208";a="781541777"
Subject: [PATCH 05/15] x86, fpu: XSAVE macro renames
To: dave@sr71.net
Cc: dave.hansen@linux.intel.com, mingo@redhat.com, x86@kernel.org,
	bp@alien8.de, fenghua.yu@intel.com, tim.c.chen@linux.intel.com,
	linux-kernel@vger.kernel.org
From: Dave Hansen
Date: Wed, 02 Sep 2015 16:31:26 -0700
References: <20150902233123.3A7E5FB0@viggo.jf.intel.com>
In-Reply-To: <20150902233123.3A7E5FB0@viggo.jf.intel.com>
Message-Id: <20150902233126.38653250@viggo.jf.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen

There are two concepts that have some confusing naming:
 1. Extended State Component numbers (currently called XFEATURE_BIT_*)
 2. Extended State Component masks (currently called XSTATE_*)

The numbers are (currently) from 0-9.  State component 3 is the
bounds registers for MPX, for instance.

But when we want to enable "state component 3", we go set a bit
in XCR0.  The bit we set is 1<<3.  We can check to see if a
state component feature is enabled by looking at its bit.

The current 'xfeature_bit's are at best xfeature bit _numbers_.
Calling them bits is at best inconsistent with ending the enum
list with 'XFEATURES_NR_MAX'.

This patch renames the enum to be 'xfeature'.  These also happen
to be what the Intel documentation calls a "state component".

We also want to differentiate these from the "XSTATE_*" macros.
The "XSTATE_*" macros are a mask, and we rename them to match.

These macros are reasonably widely used, so this patch is a wee
bit big, but this really is just a rename.

The only non-mechanical part of this is the

	s/XSTATE_EXTEND_MASK/XFEATURE_MASK_EXTEND/

We need a better name for it, but that's another patch.
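To make the number-vs-mask distinction concrete, here is a minimal
illustrative sketch (not part of the patch itself) using the new names;
it only uses identifiers touched or referenced by this series, and
xfeatures_mask is the kernel's mask of enabled state components:

	/* Illustration only -- not from the patch itself. */
	enum xfeature nr = XFEATURE_BNDREGS;	/* component *number* 3 */
	u64 mask = 1ULL << nr;			/* numerically XFEATURE_MASK_BNDREGS */

	if (xfeatures_mask & mask)
		pr_info("x86/fpu: state component %d is enabled\n", nr);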
Signed-off-by: Dave Hansen
Cc: Ingo Molnar
Cc: x86@kernel.org
Cc: Borislav Petkov
Cc: Fenghua Yu
Cc: Tim Chen
Cc: linux-kernel@vger.kernel.org
---

 b/arch/x86/crypto/camellia_aesni_avx2_glue.c |    3 +
 b/arch/x86/crypto/camellia_aesni_avx_glue.c  |    3 +
 b/arch/x86/crypto/cast5_avx_glue.c           |    3 +
 b/arch/x86/crypto/cast6_avx_glue.c           |    3 +
 b/arch/x86/crypto/serpent_avx2_glue.c        |    3 +
 b/arch/x86/crypto/serpent_avx_glue.c         |    3 +
 b/arch/x86/crypto/sha1_ssse3_glue.c          |    2 -
 b/arch/x86/crypto/sha256_ssse3_glue.c        |    2 -
 b/arch/x86/crypto/sha512_ssse3_glue.c        |    2 -
 b/arch/x86/crypto/twofish_avx_glue.c         |    2 -
 b/arch/x86/include/asm/fpu/types.h           |   44 +++++++++++++++------------
 b/arch/x86/include/asm/fpu/xstate.h          |   14 +++++---
 b/arch/x86/kernel/fpu/init.c                 |    6 +--
 b/arch/x86/kernel/fpu/regset.c               |    4 +-
 b/arch/x86/kernel/fpu/signal.c               |    6 +--
 b/arch/x86/kernel/fpu/xstate.c               |   36 ++++++++++++----------
 b/arch/x86/kernel/traps.c                    |    2 -
 b/arch/x86/kvm/cpuid.c                       |    4 +-
 b/arch/x86/kvm/x86.c                         |   27 ++++++++--------
 b/arch/x86/kvm/x86.h                         |    6 +--
 b/arch/x86/mm/mpx.c                          |    6 +--
 21 files changed, 101 insertions(+), 80 deletions(-)

diff -puN arch/x86/crypto/camellia_aesni_avx2_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/camellia_aesni_avx2_glue.c
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.768883518 -0700
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c	2015-09-02 15:52:49.805885204 -0700
@@ -567,7 +567,8 @@ static int __init camellia_aesni_init(vo
 		return -ENODEV;
 	}
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/camellia_aesni_avx_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/camellia_aesni_avx_glue.c
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.769883563 -0700
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c	2015-09-02 15:52:49.805885204 -0700
@@ -554,7 +554,8 @@ static int __init camellia_aesni_init(vo
 {
 	const char *feature_name;
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/cast5_avx_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/cast5_avx_glue.c
--- a/arch/x86/crypto/cast5_avx_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.771883654 -0700
+++ b/arch/x86/crypto/cast5_avx_glue.c	2015-09-02 15:52:49.806885250 -0700
@@ -469,7 +469,8 @@ static int __init cast5_init(void)
 {
 	const char *feature_name;
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/cast6_avx_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/cast6_avx_glue.c
--- a/arch/x86/crypto/cast6_avx_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.773883745 -0700
+++ b/arch/x86/crypto/cast6_avx_glue.c	2015-09-02 15:52:49.806885250 -0700
@@ -591,7 +591,8 @@ static int __init cast6_init(void)
 {
 	const char *feature_name;
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/serpent_avx2_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/serpent_avx2_glue.c
--- a/arch/x86/crypto/serpent_avx2_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.774883791 -0700
+++ b/arch/x86/crypto/serpent_avx2_glue.c	2015-09-02 15:52:49.807885295 -0700
@@ -542,7 +542,8 @@ static int __init init(void)
 		pr_info("AVX2 instructions are not detected.\n");
 		return -ENODEV;
 	}
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/serpent_avx_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/serpent_avx_glue.c
--- a/arch/x86/crypto/serpent_avx_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.776883882 -0700
+++ b/arch/x86/crypto/serpent_avx_glue.c	2015-09-02 15:52:49.807885295 -0700
@@ -597,7 +597,8 @@ static int __init serpent_init(void)
 {
 	const char *feature_name;
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/crypto/sha1_ssse3_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/sha1_ssse3_glue.c
--- a/arch/x86/crypto/sha1_ssse3_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.778883973 -0700
+++ b/arch/x86/crypto/sha1_ssse3_glue.c	2015-09-02 15:52:49.807885295 -0700
@@ -121,7 +121,7 @@ static struct shash_alg alg = {
 #ifdef CONFIG_AS_AVX
 static bool __init avx_usable(void)
 {
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, NULL)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
 		if (cpu_has_avx)
 			pr_info("AVX detected but unusable.\n");
 		return false;
diff -puN arch/x86/crypto/sha256_ssse3_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/sha256_ssse3_glue.c
--- a/arch/x86/crypto/sha256_ssse3_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.779884019 -0700
+++ b/arch/x86/crypto/sha256_ssse3_glue.c	2015-09-02 15:52:49.808885341 -0700
@@ -130,7 +130,7 @@ static struct shash_alg algs[] = { {
 #ifdef CONFIG_AS_AVX
 static bool __init avx_usable(void)
 {
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, NULL)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
 		if (cpu_has_avx)
 			pr_info("AVX detected but unusable.\n");
 		return false;
diff -puN arch/x86/crypto/sha512_ssse3_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/sha512_ssse3_glue.c
--- a/arch/x86/crypto/sha512_ssse3_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.781884110 -0700
+++ b/arch/x86/crypto/sha512_ssse3_glue.c	2015-09-02 15:52:49.808885341 -0700
@@ -129,7 +129,7 @@ static struct shash_alg algs[] = { {
 #ifdef CONFIG_AS_AVX
 static bool __init avx_usable(void)
 {
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, NULL)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
 		if (cpu_has_avx)
 			pr_info("AVX detected but unusable.\n");
 		return false;
diff -puN arch/x86/crypto/twofish_avx_glue.c~x86-fpu-rename-xfeature_bit arch/x86/crypto/twofish_avx_glue.c
--- a/arch/x86/crypto/twofish_avx_glue.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.783884201 -0700
+++ b/arch/x86/crypto/twofish_avx_glue.c	2015-09-02 15:52:49.808885341 -0700
@@ -558,7 +558,7 @@ static int __init twofish_init(void)
 {
 	const char *feature_name;
 
-	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) {
 		pr_info("CPU feature '%s' is not supported.\n", feature_name);
 		return -ENODEV;
 	}
diff -puN arch/x86/include/asm/fpu/types.h~x86-fpu-rename-xfeature_bit arch/x86/include/asm/fpu/types.h
--- a/arch/x86/include/asm/fpu/types.h~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.784884247 -0700
+++ b/arch/x86/include/asm/fpu/types.h	2015-09-02 16:25:46.638712748 -0700
@@ -95,30 +95,36 @@ struct swregs_state {
 /*
  * List of XSAVE features Linux knows about:
  */
-enum xfeature_bit {
-	XSTATE_BIT_FP,
-	XSTATE_BIT_SSE,
-	XSTATE_BIT_YMM,
-	XSTATE_BIT_BNDREGS,
-	XSTATE_BIT_BNDCSR,
-	XSTATE_BIT_OPMASK,
-	XSTATE_BIT_ZMM_Hi256,
-	XSTATE_BIT_Hi16_ZMM,
+enum xfeature {
+	XFEATURE_FP,
+	XFEATURE_SSE,
+	/*
+	 * Values above here are "legacy states".
+	 * Those below are "extended states".
+	 */
+	XFEATURE_YMM,
+	XFEATURE_BNDREGS,
+	XFEATURE_BNDCSR,
+	XFEATURE_OPMASK,
+	XFEATURE_ZMM_Hi256,
+	XFEATURE_Hi16_ZMM,
 
 	XFEATURES_NR_MAX,
 };
 
-#define XSTATE_FP		(1 << XSTATE_BIT_FP)
-#define XSTATE_SSE		(1 << XSTATE_BIT_SSE)
-#define XSTATE_YMM		(1 << XSTATE_BIT_YMM)
-#define XSTATE_BNDREGS		(1 << XSTATE_BIT_BNDREGS)
-#define XSTATE_BNDCSR		(1 << XSTATE_BIT_BNDCSR)
-#define XSTATE_OPMASK		(1 << XSTATE_BIT_OPMASK)
-#define XSTATE_ZMM_Hi256	(1 << XSTATE_BIT_ZMM_Hi256)
-#define XSTATE_Hi16_ZMM		(1 << XSTATE_BIT_Hi16_ZMM)
+#define XFEATURE_MASK_FP		(1 << XFEATURE_FP)
+#define XFEATURE_MASK_SSE		(1 << XFEATURE_SSE)
+#define XFEATURE_MASK_YMM		(1 << XFEATURE_YMM)
+#define XFEATURE_MASK_BNDREGS		(1 << XFEATURE_BNDREGS)
+#define XFEATURE_MASK_BNDCSR		(1 << XFEATURE_BNDCSR)
+#define XFEATURE_MASK_OPMASK		(1 << XFEATURE_OPMASK)
+#define XFEATURE_MASK_ZMM_Hi256		(1 << XFEATURE_ZMM_Hi256)
+#define XFEATURE_MASK_Hi16_ZMM		(1 << XFEATURE_Hi16_ZMM)
 
-#define XSTATE_FPSSE	(XSTATE_FP | XSTATE_SSE)
-#define XSTATE_AVX512	(XSTATE_OPMASK | XSTATE_ZMM_Hi256 | XSTATE_Hi16_ZMM)
+#define XFEATURE_MASK_FPSSE		(XFEATURE_MASK_FP | XFEATURE_MASK_SSE)
+#define XFEATURE_MASK_AVX512		(XFEATURE_MASK_OPMASK \
+					 | XFEATURE_MASK_ZMM_Hi256 \
+					 | XFEATURE_MASK_Hi16_ZMM)
 
 /*
  * There are 16x 256-bit AVX registers named YMM0-YMM15.
diff -puN arch/x86/include/asm/fpu/xstate.h~x86-fpu-rename-xfeature_bit arch/x86/include/asm/fpu/xstate.h
--- a/arch/x86/include/asm/fpu/xstate.h~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.786884338 -0700
+++ b/arch/x86/include/asm/fpu/xstate.h	2015-09-02 16:26:07.201646755 -0700
@@ -6,7 +6,7 @@
 #include
 
 /* Bit 63 of XCR0 is reserved for future expansion */
-#define XSTATE_EXTEND_MASK	(~(XSTATE_FPSSE | (1ULL << 63)))
+#define XFEATURE_MASK_EXTEND	(~(XFEATURE_MASK_FPSSE | (1ULL << 63)))
 
 #define XSTATE_CPUID		0x0000000d
 
@@ -19,14 +19,18 @@
 #define XSAVE_YMM_OFFSET    (XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET)
 
 /* Supported features which support lazy state saving */
-#define XSTATE_LAZY	(XSTATE_FP | XSTATE_SSE | XSTATE_YMM \
-			| XSTATE_OPMASK | XSTATE_ZMM_Hi256 | XSTATE_Hi16_ZMM)
+#define XFEATURE_MASK_LAZY	(XFEATURE_MASK_FP | \
+				 XFEATURE_MASK_SSE | \
+				 XFEATURE_MASK_YMM | \
+				 XFEATURE_MASK_OPMASK | \
+				 XFEATURE_MASK_ZMM_Hi256 | \
+				 XFEATURE_MASK_Hi16_ZMM)
 
 /* Supported features which require eager state saving */
-#define XSTATE_EAGER	(XSTATE_BNDREGS | XSTATE_BNDCSR)
+#define XFEATURE_MASK_EAGER	(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR)
 
 /* All currently supported features */
-#define XCNTXT_MASK	(XSTATE_LAZY | XSTATE_EAGER)
+#define XCNTXT_MASK	(XFEATURE_MASK_LAZY | XFEATURE_MASK_EAGER)
 
 #ifdef CONFIG_X86_64
 #define REX_PREFIX	"0x48, "
diff -puN arch/x86/kernel/fpu/init.c~x86-fpu-rename-xfeature_bit arch/x86/kernel/fpu/init.c
--- a/arch/x86/kernel/fpu/init.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.788884429 -0700
+++ b/arch/x86/kernel/fpu/init.c	2015-09-02 15:52:49.809885386 -0700
@@ -290,11 +290,11 @@ static void __init fpu__init_system_ctx_
 	if (cpu_has_xsaveopt && eagerfpu != DISABLE)
 		eagerfpu = ENABLE;
 
-	if (xfeatures_mask & XSTATE_EAGER) {
+	if (xfeatures_mask & XFEATURE_MASK_EAGER) {
 		if (eagerfpu == DISABLE) {
 			pr_err("x86/fpu: eagerfpu switching disabled, disabling the following xstate features: 0x%llx.\n",
-			       xfeatures_mask & XSTATE_EAGER);
-			xfeatures_mask &= ~XSTATE_EAGER;
+			       xfeatures_mask & XFEATURE_MASK_EAGER);
+			xfeatures_mask &= ~XFEATURE_MASK_EAGER;
 		} else {
 			eagerfpu = ENABLE;
 		}
diff -puN arch/x86/kernel/fpu/regset.c~x86-fpu-rename-xfeature_bit arch/x86/kernel/fpu/regset.c
--- a/arch/x86/kernel/fpu/regset.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.789884475 -0700
+++ b/arch/x86/kernel/fpu/regset.c	2015-09-02 15:52:49.810885432 -0700
@@ -66,7 +66,7 @@ int xfpregs_set(struct task_struct *targ
 	 * presence of FP and SSE state.
 	 */
 	if (cpu_has_xsave)
-		fpu->state.xsave.header.xfeatures |= XSTATE_FPSSE;
+		fpu->state.xsave.header.xfeatures |= XFEATURE_MASK_FPSSE;
 
 	return ret;
 }
@@ -326,7 +326,7 @@ int fpregs_set(struct task_struct *targe
 	 * presence of FP.
 	 */
 	if (cpu_has_xsave)
-		fpu->state.xsave.header.xfeatures |= XSTATE_FP;
+		fpu->state.xsave.header.xfeatures |= XFEATURE_MASK_FP;
 
 	return ret;
 }
diff -puN arch/x86/kernel/fpu/signal.c~x86-fpu-rename-xfeature_bit arch/x86/kernel/fpu/signal.c
--- a/arch/x86/kernel/fpu/signal.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.791884566 -0700
+++ b/arch/x86/kernel/fpu/signal.c	2015-09-02 15:52:49.810885432 -0700
@@ -107,7 +107,7 @@ static inline int save_xstate_epilog(voi
 	 * header as well as change any contents in the memory layout.
 	 * xrestore as part of sigreturn will capture all the changes.
 	 */
-	xfeatures |= XSTATE_FPSSE;
+	xfeatures |= XFEATURE_MASK_FPSSE;
 
 	err |= __put_user(xfeatures, (__u32 *)&x->header.xfeatures);
 
@@ -207,7 +207,7 @@ sanitize_restored_xstate(struct task_str
 	 * layout and not enabled by the OS.
 	 */
 	if (fx_only)
-		header->xfeatures = XSTATE_FPSSE;
+		header->xfeatures = XFEATURE_MASK_FPSSE;
 	else
 		header->xfeatures &= (xfeatures_mask & xfeatures);
 }
@@ -230,7 +230,7 @@ static inline int copy_user_to_fpregs_ze
 {
 	if (use_xsave()) {
 		if ((unsigned long)buf % 64 || fx_only) {
-			u64 init_bv = xfeatures_mask & ~XSTATE_FPSSE;
+			u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
 			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
 			return copy_user_to_fxregs(buf);
 		} else {
diff -puN arch/x86/kernel/fpu/xstate.c~x86-fpu-rename-xfeature_bit arch/x86/kernel/fpu/xstate.c
--- a/arch/x86/kernel/fpu/xstate.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.792884612 -0700
+++ b/arch/x86/kernel/fpu/xstate.c	2015-09-02 16:25:46.638712748 -0700
@@ -72,7 +72,7 @@ int cpu_has_xfeatures(u64 xfeatures_need
 	/*
 	 * So we use FLS here to be able to print the most advanced
 	 * feature that was requested but is missing. So if a driver
-	 * asks about "XSTATE_SSE | XSTATE_YMM" we'll print the
+	 * asks about "XFEATURE_MASK_SSE | XFEATURE_MASK_YMM" we'll print the
 	 * missing AVX feature - this is the most informative message
 	 * to users:
 	 */
@@ -131,7 +131,7 @@ void fpstate_sanitize_xstate(struct fpu
 	/*
 	 * FP is in init state
 	 */
-	if (!(xfeatures & XSTATE_FP)) {
+	if (!(xfeatures & XFEATURE_MASK_FP)) {
 		fx->cwd = 0x37f;
 		fx->swd = 0;
 		fx->twd = 0;
@@ -144,7 +144,7 @@ void fpstate_sanitize_xstate(struct fpu
 	/*
 	 * SSE is in init state
 	 */
-	if (!(xfeatures & XSTATE_SSE))
+	if (!(xfeatures & XFEATURE_MASK_SSE))
 		memset(&fx->xmm_space[0], 0, 256);
 
 	/*
@@ -223,14 +223,14 @@ static void __init print_xstate_feature(
  */
 static void __init print_xstate_features(void)
 {
-	print_xstate_feature(XSTATE_FP);
-	print_xstate_feature(XSTATE_SSE);
-	print_xstate_feature(XSTATE_YMM);
-	print_xstate_feature(XSTATE_BNDREGS);
-	print_xstate_feature(XSTATE_BNDCSR);
-	print_xstate_feature(XSTATE_OPMASK);
-	print_xstate_feature(XSTATE_ZMM_Hi256);
-	print_xstate_feature(XSTATE_Hi16_ZMM);
+	print_xstate_feature(XFEATURE_MASK_FP);
+	print_xstate_feature(XFEATURE_MASK_SSE);
+	print_xstate_feature(XFEATURE_MASK_YMM);
+	print_xstate_feature(XFEATURE_MASK_BNDREGS);
+	print_xstate_feature(XFEATURE_MASK_BNDCSR);
+	print_xstate_feature(XFEATURE_MASK_OPMASK);
+	print_xstate_feature(XFEATURE_MASK_ZMM_Hi256);
+	print_xstate_feature(XFEATURE_MASK_Hi16_ZMM);
 }
 
 /*
@@ -365,7 +365,11 @@ static int init_xstate_size(void)
 	return 0;
 }
 
-void fpu__init_disable_system_xstate(void)
+/*
+ * We enabled the XSAVE hardware, but something went wrong and
+ * we can not use it.  Disable it.
+ */
+static void fpu__init_disable_system_xstate(void)
 {
 	xfeatures_mask = 0;
 	cr4_clear_bits(X86_CR4_OSXSAVE);
@@ -398,7 +402,7 @@ void __init fpu__init_system_xstate(void
 	cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
 	xfeatures_mask = eax + ((u64)edx << 32);
 
-	if ((xfeatures_mask & XSTATE_FPSSE) != XSTATE_FPSSE) {
+	if ((xfeatures_mask & XFEATURE_MASK_FPSSE) != XFEATURE_MASK_FPSSE) {
 		pr_err("x86/fpu: FP/SSE not present amongst the CPU's xstate features: 0x%llx.\n", xfeatures_mask);
 		BUG();
 	}
@@ -451,7 +455,7 @@ void fpu__resume_cpu(void)
 * Inputs:
 *	xstate: the thread's storage area for all FPU data
 *	xstate_feature: state which is defined in xsave.h (e.g.
-*	XSTATE_FP, XSTATE_SSE, etc...)
+*	XFEATURE_MASK_FP, XFEATURE_MASK_SSE, etc...)
 * Output:
 *	address of the state in the xsave area, or NULL if the
 *	field is not present in the xsave buffer.
@@ -502,8 +506,8 @@ EXPORT_SYMBOL_GPL(get_xsave_addr);
 * Note that this only works on the current task.
 *
 * Inputs:
- * @xsave_state: state which is defined in xsave.h (e.g. XSTATE_FP,
- *	XSTATE_SSE, etc...)
+ * @xsave_state: state which is defined in xsave.h (e.g. XFEATURE_MASK_FP,
+ *	XFEATURE_MASK_SSE, etc...)
 * Output:
 *	address of the state in the xsave area or NULL if the state
 *	is not present or is in its 'init state'.
diff -puN arch/x86/kernel/traps.c~x86-fpu-rename-xfeature_bit arch/x86/kernel/traps.c
--- a/arch/x86/kernel/traps.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.794884703 -0700
+++ b/arch/x86/kernel/traps.c	2015-09-02 16:24:49.864133927 -0700
@@ -395,7 +395,7 @@ dotraplinkage void do_bounds(struct pt_r
 	 * which is all zeros which indicates MPX was not
 	 * responsible for the exception.
 	 */
-	bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
 	if (!bndcsr)
 		goto exit_trap;
 
diff -puN arch/x86/kvm/cpuid.c~x86-fpu-rename-xfeature_bit arch/x86/kvm/cpuid.c
--- a/arch/x86/kvm/cpuid.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.796884794 -0700
+++ b/arch/x86/kvm/cpuid.c	2015-09-02 15:52:49.811885477 -0700
@@ -30,7 +30,7 @@ static u32 xstate_required_size(u64 xsta
 	int feature_bit = 0;
 	u32 ret = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
 
-	xstate_bv &= XSTATE_EXTEND_MASK;
+	xstate_bv &= XFEATURE_MASK_EXTEND;
 	while (xstate_bv) {
 		if (xstate_bv & 0x1) {
 			u32 eax, ebx, ecx, edx, offset;
@@ -51,7 +51,7 @@ u64 kvm_supported_xcr0(void)
 	u64 xcr0 = KVM_SUPPORTED_XCR0 & host_xcr0;
 
 	if (!kvm_x86_ops->mpx_supported())
-		xcr0 &= ~(XSTATE_BNDREGS | XSTATE_BNDCSR);
+		xcr0 &= ~(XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
 
 	return xcr0;
 }
diff -puN arch/x86/kvm/x86.c~x86-fpu-rename-xfeature_bit arch/x86/kvm/x86.c
--- a/arch/x86/kvm/x86.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.798884885 -0700
+++ b/arch/x86/kvm/x86.c	2015-09-02 15:52:49.814885614 -0700
@@ -662,9 +662,9 @@ static int __kvm_set_xcr(struct kvm_vcpu
 	/* Only support XCR_XFEATURE_ENABLED_MASK(xcr0) now  */
 	if (index != XCR_XFEATURE_ENABLED_MASK)
 		return 1;
-	if (!(xcr0 & XSTATE_FP))
+	if (!(xcr0 & XFEATURE_MASK_FP))
 		return 1;
-	if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE))
+	if ((xcr0 & XFEATURE_MASK_YMM) && !(xcr0 & XFEATURE_MASK_SSE))
 		return 1;
 
 	/*
@@ -672,23 +672,24 @@ static int __kvm_set_xcr(struct kvm_vcpu
 	 * saving.  However, xcr0 bit 0 is always set, even if the
 	 * emulated CPU does not support XSAVE (see fx_init).
 	 */
-	valid_bits = vcpu->arch.guest_supported_xcr0 | XSTATE_FP;
+	valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
 	if (xcr0 & ~valid_bits)
 		return 1;
 
-	if ((!(xcr0 & XSTATE_BNDREGS)) != (!(xcr0 & XSTATE_BNDCSR)))
+	if ((!(xcr0 & XFEATURE_MASK_BNDREGS)) !=
+	    (!(xcr0 & XFEATURE_MASK_BNDCSR)))
 		return 1;
 
-	if (xcr0 & XSTATE_AVX512) {
-		if (!(xcr0 & XSTATE_YMM))
+	if (xcr0 & XFEATURE_MASK_AVX512) {
+		if (!(xcr0 & XFEATURE_MASK_YMM))
 			return 1;
-		if ((xcr0 & XSTATE_AVX512) != XSTATE_AVX512)
+		if ((xcr0 & XFEATURE_MASK_AVX512) != XFEATURE_MASK_AVX512)
 			return 1;
 	}
 	kvm_put_guest_xcr0(vcpu);
 	vcpu->arch.xcr0 = xcr0;
 
-	if ((xcr0 ^ old_xcr0) & XSTATE_EXTEND_MASK)
+	if ((xcr0 ^ old_xcr0) & XFEATURE_MASK_EXTEND)
 		kvm_update_cpuid(vcpu);
 	return 0;
 }
@@ -2918,7 +2919,7 @@ static void fill_xsave(u8 *dest, struct
 	 * Copy each region from the possibly compacted offset to the
 	 * non-compacted offset.
 	 */
-	valid = xstate_bv & ~XSTATE_FPSSE;
+	valid = xstate_bv & ~XFEATURE_MASK_FPSSE;
 	while (valid) {
 		u64 feature = valid & -valid;
 		int index = fls64(feature) - 1;
@@ -2956,7 +2957,7 @@ static void load_xsave(struct kvm_vcpu *
 	 * Copy each region from the non-compacted offset to the
 	 * possibly compacted offset.
 	 */
-	valid = xstate_bv & ~XSTATE_FPSSE;
+	valid = xstate_bv & ~XFEATURE_MASK_FPSSE;
 	while (valid) {
 		u64 feature = valid & -valid;
 		int index = fls64(feature) - 1;
@@ -2984,7 +2985,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave
 			&vcpu->arch.guest_fpu.state.fxsave,
 			sizeof(struct fxregs_state));
 		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] =
-			XSTATE_FPSSE;
+			XFEATURE_MASK_FPSSE;
 	}
 }
 
@@ -3004,7 +3005,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(
 			return -EINVAL;
 		load_xsave(vcpu, (u8 *)guest_xsave->region);
 	} else {
-		if (xstate_bv & ~XSTATE_FPSSE)
+		if (xstate_bv & ~XFEATURE_MASK_FPSSE)
 			return -EINVAL;
 		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
 			guest_xsave->region, sizeof(struct fxregs_state));
@@ -7011,7 +7012,7 @@ static void fx_init(struct kvm_vcpu *vcp
 	/*
 	 * Ensure guest xcr0 is valid for loading
 	 */
-	vcpu->arch.xcr0 = XSTATE_FP;
+	vcpu->arch.xcr0 = XFEATURE_MASK_FP;
 
 	vcpu->arch.cr0 |= X86_CR0_ET;
 }
diff -puN arch/x86/kvm/x86.h~x86-fpu-rename-xfeature_bit arch/x86/kvm/x86.h
--- a/arch/x86/kvm/x86.h~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.800884976 -0700
+++ b/arch/x86/kvm/x86.h	2015-09-02 15:52:49.814885614 -0700
@@ -180,9 +180,9 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vc
 bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
 					  int page_num);
 
-#define KVM_SUPPORTED_XCR0     (XSTATE_FP | XSTATE_SSE | XSTATE_YMM \
-				| XSTATE_BNDREGS | XSTATE_BNDCSR \
-				| XSTATE_AVX512)
+#define KVM_SUPPORTED_XCR0     (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
+				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
+				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512)
 extern u64 host_xcr0;
 
 extern u64 kvm_supported_xcr0(void);
diff -puN arch/x86/mm/mpx.c~x86-fpu-rename-xfeature_bit arch/x86/mm/mpx.c
--- a/arch/x86/mm/mpx.c~x86-fpu-rename-xfeature_bit	2015-09-02 15:52:49.802885067 -0700
+++ b/arch/x86/mm/mpx.c	2015-09-02 16:24:49.865133973 -0700
@@ -295,7 +295,7 @@ siginfo_t *mpx_generate_siginfo(struct p
 		goto err_out;
 	}
 	/* get bndregs field from current task's xsave area */
-	bndregs = get_xsave_field_ptr(XSTATE_BNDREGS);
+	bndregs = get_xsave_field_ptr(XFEATURE_MASK_BNDREGS);
 	if (!bndregs) {
 		err = -EINVAL;
 		goto err_out;
@@ -352,7 +352,7 @@ static __user void *mpx_get_bounds_dir(v
 	 * The bounds directory pointer is stored in a register
 	 * only accessible if we first do an xsave.
 	 */
-	bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
 	if (!bndcsr)
 		return MPX_INVALID_BOUNDS_DIR;
 
@@ -529,7 +529,7 @@ static int do_mpx_bt_fault(void)
 	const struct bndcsr *bndcsr;
 	struct mm_struct *mm = current->mm;
 
-	bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
 	if (!bndcsr)
 		return -EINVAL;
 	/* _