LKML Archive mirror
* [PATCH 00/22] arm64: Consolidate CPU feature handling
@ 2015-09-16 14:20 Suzuki K. Poulose
  2015-09-16 14:20 ` [PATCH 01/22] arm64: Make the CPU information more clear Suzuki K. Poulose
                   ` (23 more replies)
  0 siblings, 24 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This is an updated reincarnation of my "arm64: Expose CPU feature registers"
series [1], which does much more.

This series introduces a new infrastructure to keep track of the CPU
feature registers on ARMv8-A for the arm64 kernel. It provides the safe value
of each CPU feature register across all the CPUs on (a heterogeneous) system.
The infrastructure checks the individual CPU feature registers as the CPUs
are brought online (during system boot) and updates the status of each of
the feature bits across the system. Once all the active CPUs are online
(i.e., smp_cpus_done()), the system can compute a reliable set of capabilities
(arm64_features CPU capabilities and ELF HWCAP). This allows the system to
operate safely on CPUs with differing capabilities. Any new CPU brought up
later (hotplugged in) must have all the established capabilities; failing that
could be disastrous (e.g., alternative code may already have been patched in
for a feature available on the rest of the system). We add a hotplug notifier
to check whether a new CPU is missing any of the advertised capabilities and
prevent it from coming online if it is.

The series also consolidates the users of the feature registers (KVM, debug,
CPU capabilities, ELF HWCAP, cpuinfo and the CPU feature sanity check) so
that they use the system-wide safe value of a feature when making decisions.
As mentioned above, the calculation of the system CPU capabilities and ELF
HWCAP is delayed until smp_cpus_done() and makes use of the values from the
infrastructure. The cpu_errata capability checks still go through each CPU
and are not affected by this series (i.e., not delayed).

Finally, we add a new ABI to expose the CPU feature registers to user
space via emulation of MRS. The system exposes only a limited set
of feature values (see the documentation patch) from the above infrastructure.
The feature bits that are not exposed are set to the 'safe value', which
implies 'not supported'.

Apart from the selected feature registers, we expose MIDR_EL1 (Main
ID Register). The user should be aware that reading MIDR_EL1 can be
tricky on a heterogeneous system (just like getcpu()): we return the
value for the CPU on which 'MRS' is executed. REVIDR is not exposed
via MRS, since we cannot guarantee atomic access to both MIDR and REVIDR
(the task could migrate between the reads). Both are therefore exposed
via sysfs under:

	/sys/devices/system/cpu/cpu$ID/identification/
							\- midr
							\- revidr

The ABI is useful for toolchains (e.g. gcc, the dynamic linker, JITs) to
make better runtime decisions based on what is available.
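
As an illustration, a minimal user-space sketch (assuming the new 'cpuid'
hwcap from the changelog below is exported as HWCAP_CPUID; the bit value
used here is a placeholder for illustration only) could look like:

	#include <stdio.h>
	#include <stdint.h>
	#include <sys/auxv.h>

	#ifndef HWCAP_CPUID
	#define HWCAP_CPUID	(1UL << 11)	/* assumed bit, illustration only */
	#endif

	int main(void)
	{
		uint64_t isar0;

		/* Only use MRS if the kernel advertises the emulation ABI */
		if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
			return 1;

		/* Traps to the kernel, which returns the system-wide safe value */
		asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));
		printf("ID_AA64ISAR0_EL1: %016llx\n", (unsigned long long)isar0);
		return 0;
	}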

[1] RFC post : https://lkml.org/lkml/2015/7/24/152

---
Changes since V1:
  - Rebased to 4.3-rc1
  - Consolidate HWCAP, capability check into the new infrastructure
  - Add a new HWCAP 'cpuid' to announce the ABI
  - Pulled in Steve's patch to expose midr/revidr via sysfs
  - Changes to documentation.

Steve Capper (1):
  arm64: cpuinfo: Expose MIDR_EL1 and REVIDR_EL1 to sysfs

Suzuki K. Poulose (21):
  arm64: Make the CPU information more clear
  arm64: Delay ELF HWCAP initialisation until all CPUs are up
  arm64: Move cpu feature detection code
  arm64: Move mixed endian support detection
  arm64: Move /proc/cpuinfo handling code
  arm64: sys_reg: Define System register encoding
  arm64: Keep track of CPU feature registers
  arm64: Consolidate CPU Sanity check to CPU Feature infrastructure
  arm64: Read system wide CPUID value
  arm64: Cleanup mixed endian support detection
  arm64: Populate cpuinfo after notify_cpu_starting
  arm64: Delay cpu feature checks
  arm64: Make use of system wide capability checks
  arm64: Cleanup HWCAP handling
  arm64: Move FP/ASIMD hwcap handling to common code
  arm64/debug: Make use of the system wide safe value
  arm64/kvm: Make use of the system wide safe values
  arm64: Add helper to decode register from instruction
  arm64: cpufeature: Track the user visible fields
  arm64: Expose feature registers by emulating MRS
  arm64: feature registers: Documentation

 Documentation/arm64/cpu-feature-registers.txt |  209 ++++++
 arch/arm64/include/asm/cpu.h                  |    1 +
 arch/arm64/include/asm/cpufeature.h           |   76 +-
 arch/arm64/include/asm/cputype.h              |   15 -
 arch/arm64/include/asm/hw_breakpoint.h        |   14 +-
 arch/arm64/include/asm/hwcap.h                |    8 +
 arch/arm64/include/asm/insn.h                 |    2 +
 arch/arm64/include/asm/processor.h            |    2 +-
 arch/arm64/include/asm/sysreg.h               |  177 ++++-
 arch/arm64/include/uapi/asm/hwcap.h           |    1 +
 arch/arm64/kernel/cpufeature.c                |  922 ++++++++++++++++++++++++-
 arch/arm64/kernel/cpuinfo.c                   |  305 ++++----
 arch/arm64/kernel/debug-monitors.c            |    6 +-
 arch/arm64/kernel/fpsimd.c                    |   16 +-
 arch/arm64/kernel/hw_breakpoint.c             |   19 +-
 arch/arm64/kernel/insn.c                      |   29 +
 arch/arm64/kernel/setup.c                     |  233 +------
 arch/arm64/kernel/smp.c                       |   12 +-
 arch/arm64/kvm/reset.c                        |    2 +-
 arch/arm64/kvm/sys_regs.c                     |   12 +-
 arch/arm64/mm/fault.c                         |    2 +-
 21 files changed, 1605 insertions(+), 458 deletions(-)
 create mode 100644 Documentation/arm64/cpu-feature-registers.txt

-- 
1.7.9.5



* [PATCH 01/22] arm64: Make the CPU information more clear
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
@ 2015-09-16 14:20 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 02/22] arm64: Delay ELF HWCAP initialisation until all CPUs are up Suzuki K. Poulose
                   ` (22 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

At early boot, we print the CPU version/revision. On a heterogeneous
system, we could have different types of CPUs, so print the CPU info for
all active CPUs.

Also, remove the redundant 'revision' information which doesn't
make any sense without the 'variant' field.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/setup.c |    3 +--
 arch/arm64/kernel/smp.c   |    3 ++-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 6bab21f..60fc9a9 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -202,8 +202,7 @@ static void __init setup_processor(void)
 	u32 cwg;
 	int cls;
 
-	printk("CPU: AArch64 Processor [%08x] revision %d\n",
-	       read_cpuid_id(), read_cpuid_id() & 15);
+	pr_info("Boot CPU: AArch64 Processor [%08x]\n", read_cpuid_id());
 
 	sprintf(init_utsname()->machine, ELF_PLATFORM);
 	elf_hwcap = 0;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dbdaacd..641f529 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -145,7 +145,8 @@ asmlinkage void secondary_start_kernel(void)
 	cpumask_set_cpu(cpu, mm_cpumask(mm));
 
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
-	printk("CPU%u: Booted secondary processor\n", cpu);
+	pr_info("CPU%u: Booted secondary processor [%08x]\n",
+					 cpu, read_cpuid_id());
 
 	/*
 	 * TTBR0 is only used for the identity mapping at this stage. Make it
-- 
1.7.9.5



* [PATCH 02/22] arm64: Delay ELF HWCAP initialisation until all CPUs are up
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
  2015-09-16 14:20 ` [PATCH 01/22] arm64: Make the CPU information more clear Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 03/22] arm64: Move cpu feature detection code Suzuki K. Poulose
                   ` (21 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Delay the ELF HWCAP initialisation until all the (enabled) CPUs are
up, i.e. until smp_cpus_done(). This is in preparation for detecting the
common features across the CPUs and creating a consistent ELF HWCAP
for the system.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    1 +
 arch/arm64/kernel/setup.c           |   16 ++++++++--------
 arch/arm64/kernel/smp.c             |    1 +
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1715707..b7769f6 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -81,6 +81,7 @@ static inline int __attribute_const__ cpuid_feature_extract_field(u64 features,
 	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
 }
 
+void __init setup_cpu_features(void);
 
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 60fc9a9..d149c18 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -195,20 +195,13 @@ static void __init smp_build_mpidr_hash(void)
 	__flush_dcache_area(&mpidr_hash, sizeof(struct mpidr_hash));
 }
 
-static void __init setup_processor(void)
+void __init setup_cpu_features(void)
 {
 	u64 features;
 	s64 block;
 	u32 cwg;
 	int cls;
 
-	pr_info("Boot CPU: AArch64 Processor [%08x]\n", read_cpuid_id());
-
-	sprintf(init_utsname()->machine, ELF_PLATFORM);
-	elf_hwcap = 0;
-
-	cpuinfo_store_boot_cpu();
-
 	/*
 	 * Check for sane CTR_EL0.CWG value.
 	 */
@@ -292,6 +285,13 @@ static void __init setup_processor(void)
 #endif
 }
 
+static void __init setup_processor(void)
+{
+	pr_info("Boot CPU: AArch64 Processor [%08x]\n", read_cpuid_id());
+	sprintf(init_utsname()->machine, ELF_PLATFORM);
+	cpuinfo_store_boot_cpu();
+}
+
 static void __init setup_machine_fdt(phys_addr_t dt_phys)
 {
 	void *dt_virt = fixmap_remap_fdt(dt_phys);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 641f529..cb3e0d8 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -326,6 +326,7 @@ static void __init hyp_mode_check(void)
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
+	setup_cpu_features();
 	hyp_mode_check();
 	apply_alternatives_all();
 }
-- 
1.7.9.5



* [PATCH 03/22] arm64: Move cpu feature detection code
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
  2015-09-16 14:20 ` [PATCH 01/22] arm64: Make the CPU information more clear Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 02/22] arm64: Delay ELF HWCAP initialisation until all CPUs are up Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 04/22] arm64: Move mixed endian support detection Suzuki K. Poulose
                   ` (20 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch moves the CPU feature detection code from
arch/arm64/kernel/setup.c to arch/arm64/kernel/cpufeature.c.

The plan is to consolidate all the CPU feature handling
in cpufeature.c.

It also changes the pr_fmt prefix from "alternatives" to "CPU features".

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c |  111 +++++++++++++++++++++++++++++++++++++++-
 arch/arm64/kernel/setup.c      |  107 --------------------------------------
 2 files changed, 110 insertions(+), 108 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3c9aed3..bdb3eb4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -16,13 +16,31 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
-#define pr_fmt(fmt) "alternatives: " fmt
+#define pr_fmt(fmt) "CPU features: " fmt
 
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 
+unsigned long elf_hwcap __read_mostly;
+EXPORT_SYMBOL_GPL(elf_hwcap);
+
+#ifdef CONFIG_COMPAT
+#define COMPAT_ELF_HWCAP_DEFAULT	\
+				(COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\
+				 COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\
+				 COMPAT_HWCAP_TLS|COMPAT_HWCAP_VFP|\
+				 COMPAT_HWCAP_VFPv3|COMPAT_HWCAP_VFPv4|\
+				 COMPAT_HWCAP_NEON|COMPAT_HWCAP_IDIV|\
+				 COMPAT_HWCAP_LPAE)
+unsigned int compat_elf_hwcap __read_mostly = COMPAT_ELF_HWCAP_DEFAULT;
+unsigned int compat_elf_hwcap2 __read_mostly;
+#endif
+
+DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
+
+
 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
 {
@@ -100,3 +118,94 @@ void check_local_cpu_features(void)
 {
 	check_cpu_capabilities(arm64_features, "detected feature:");
 }
+
+void __init setup_cpu_features(void)
+{
+	u64 features;
+	s64 block;
+	u32 cwg;
+	int cls;
+
+	/*
+	 * Check for sane CTR_EL0.CWG value.
+	 */
+	cwg = cache_type_cwg();
+	cls = cache_line_size();
+	if (!cwg)
+		pr_warn("No Cache Writeback Granule information, assuming cache line size %d\n",
+			cls);
+	if (L1_CACHE_BYTES < cls)
+		pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n",
+			L1_CACHE_BYTES, cls);
+
+	/*
+	 * ID_AA64ISAR0_EL1 contains 4-bit wide signed feature blocks.
+	 * The blocks we test below represent incremental functionality
+	 * for non-negative values. Negative values are reserved.
+	 */
+	features = read_cpuid(ID_AA64ISAR0_EL1);
+	block = cpuid_feature_extract_field(features, 4);
+	if (block > 0) {
+		switch (block) {
+		default:
+		case 2:
+			elf_hwcap |= HWCAP_PMULL;
+		case 1:
+			elf_hwcap |= HWCAP_AES;
+		case 0:
+			break;
+		}
+	}
+
+	if (cpuid_feature_extract_field(features, 8) > 0)
+		elf_hwcap |= HWCAP_SHA1;
+
+	if (cpuid_feature_extract_field(features, 12) > 0)
+		elf_hwcap |= HWCAP_SHA2;
+
+	if (cpuid_feature_extract_field(features, 16) > 0)
+		elf_hwcap |= HWCAP_CRC32;
+
+	block = cpuid_feature_extract_field(features, 20);
+	if (block > 0) {
+		switch (block) {
+		default:
+		case 2:
+			elf_hwcap |= HWCAP_ATOMICS;
+		case 1:
+			/* RESERVED */
+		case 0:
+			break;
+		}
+	}
+
+#ifdef CONFIG_COMPAT
+	/*
+	 * ID_ISAR5_EL1 carries similar information as above, but pertaining to
+	 * the AArch32 32-bit execution state.
+	 */
+	features = read_cpuid(ID_ISAR5_EL1);
+	block = cpuid_feature_extract_field(features, 4);
+	if (block > 0) {
+		switch (block) {
+		default:
+		case 2:
+			compat_elf_hwcap2 |= COMPAT_HWCAP2_PMULL;
+		case 1:
+			compat_elf_hwcap2 |= COMPAT_HWCAP2_AES;
+		case 0:
+			break;
+		}
+	}
+
+	if (cpuid_feature_extract_field(features, 8) > 0)
+		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA1;
+
+	if (cpuid_feature_extract_field(features, 12) > 0)
+		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA2;
+
+	if (cpuid_feature_extract_field(features, 16) > 0)
+		compat_elf_hwcap2 |= COMPAT_HWCAP2_CRC32;
+#endif
+}
+
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index d149c18..4c7bca8 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -64,23 +64,6 @@
 #include <asm/efi.h>
 #include <asm/xen/hypervisor.h>
 
-unsigned long elf_hwcap __read_mostly;
-EXPORT_SYMBOL_GPL(elf_hwcap);
-
-#ifdef CONFIG_COMPAT
-#define COMPAT_ELF_HWCAP_DEFAULT	\
-				(COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\
-				 COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\
-				 COMPAT_HWCAP_TLS|COMPAT_HWCAP_VFP|\
-				 COMPAT_HWCAP_VFPv3|COMPAT_HWCAP_VFPv4|\
-				 COMPAT_HWCAP_NEON|COMPAT_HWCAP_IDIV|\
-				 COMPAT_HWCAP_LPAE)
-unsigned int compat_elf_hwcap __read_mostly = COMPAT_ELF_HWCAP_DEFAULT;
-unsigned int compat_elf_hwcap2 __read_mostly;
-#endif
-
-DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
-
 phys_addr_t __fdt_pointer __initdata;
 
 /*
@@ -195,96 +178,6 @@ static void __init smp_build_mpidr_hash(void)
 	__flush_dcache_area(&mpidr_hash, sizeof(struct mpidr_hash));
 }
 
-void __init setup_cpu_features(void)
-{
-	u64 features;
-	s64 block;
-	u32 cwg;
-	int cls;
-
-	/*
-	 * Check for sane CTR_EL0.CWG value.
-	 */
-	cwg = cache_type_cwg();
-	cls = cache_line_size();
-	if (!cwg)
-		pr_warn("No Cache Writeback Granule information, assuming cache line size %d\n",
-			cls);
-	if (L1_CACHE_BYTES < cls)
-		pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n",
-			L1_CACHE_BYTES, cls);
-
-	/*
-	 * ID_AA64ISAR0_EL1 contains 4-bit wide signed feature blocks.
-	 * The blocks we test below represent incremental functionality
-	 * for non-negative values. Negative values are reserved.
-	 */
-	features = read_cpuid(ID_AA64ISAR0_EL1);
-	block = cpuid_feature_extract_field(features, 4);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			elf_hwcap |= HWCAP_PMULL;
-		case 1:
-			elf_hwcap |= HWCAP_AES;
-		case 0:
-			break;
-		}
-	}
-
-	if (cpuid_feature_extract_field(features, 8) > 0)
-		elf_hwcap |= HWCAP_SHA1;
-
-	if (cpuid_feature_extract_field(features, 12) > 0)
-		elf_hwcap |= HWCAP_SHA2;
-
-	if (cpuid_feature_extract_field(features, 16) > 0)
-		elf_hwcap |= HWCAP_CRC32;
-
-	block = cpuid_feature_extract_field(features, 20);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			elf_hwcap |= HWCAP_ATOMICS;
-		case 1:
-			/* RESERVED */
-		case 0:
-			break;
-		}
-	}
-
-#ifdef CONFIG_COMPAT
-	/*
-	 * ID_ISAR5_EL1 carries similar information as above, but pertaining to
-	 * the AArch32 32-bit execution state.
-	 */
-	features = read_cpuid(ID_ISAR5_EL1);
-	block = cpuid_feature_extract_field(features, 4);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			compat_elf_hwcap2 |= COMPAT_HWCAP2_PMULL;
-		case 1:
-			compat_elf_hwcap2 |= COMPAT_HWCAP2_AES;
-		case 0:
-			break;
-		}
-	}
-
-	if (cpuid_feature_extract_field(features, 8) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA1;
-
-	if (cpuid_feature_extract_field(features, 12) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA2;
-
-	if (cpuid_feature_extract_field(features, 16) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_CRC32;
-#endif
-}
-
 static void __init setup_processor(void)
 {
 	pr_info("Boot CPU: AArch64 Processor [%08x]\n", read_cpuid_id());
-- 
1.7.9.5



* [PATCH 04/22] arm64: Move mixed endian support detection
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (2 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 03/22] arm64: Move cpu feature detection code Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 05/22] arm64: Move /proc/cpuinfo handling code Suzuki K. Poulose
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Move the mixed endian support detection code from cpuinfo.c to
cpufeature.c. This also moves update_cpu_features(), used by the mixed
endian detection code, which will gain more functionality in later
patches.

Also move the ID register field shifts to asm/sysreg.h, where all the
related definitions will end up in later patches.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    9 +++++++++
 arch/arm64/include/asm/cputype.h    |   15 ---------------
 arch/arm64/include/asm/sysreg.h     |    3 +++
 arch/arm64/kernel/cpufeature.c      |   22 ++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c         |   21 ---------------------
 5 files changed, 34 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b7769f6..aaa84d9 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -10,6 +10,7 @@
 #define __ASM_CPUFEATURE_H
 
 #include <asm/hwcap.h>
+#include <asm/sysreg.h>
 
 /*
  * In the arm64 world (as in the ARM world), elf_hwcap is used both internally
@@ -32,6 +33,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/cpu.h>
 #include <linux/kernel.h>
 
 struct arm64_cpu_capabilities {
@@ -81,7 +83,14 @@ static inline int __attribute_const__ cpuid_feature_extract_field(u64 features,
 	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
 }
 
+static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
+{
+	return cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 ||
+		cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
+}
+
 void __init setup_cpu_features(void);
+void update_cpu_features(struct cpuinfo_arm64 *info);
 
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index ee6403d..31678b2 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -72,15 +72,6 @@
 
 #define APM_CPU_PART_POTENZA	0x000
 
-#define ID_AA64MMFR0_BIGENDEL0_SHIFT	16
-#define ID_AA64MMFR0_BIGENDEL0_MASK	(0xf << ID_AA64MMFR0_BIGENDEL0_SHIFT)
-#define ID_AA64MMFR0_BIGENDEL0(mmfr0)	\
-	(((mmfr0) & ID_AA64MMFR0_BIGENDEL0_MASK) >> ID_AA64MMFR0_BIGENDEL0_SHIFT)
-#define ID_AA64MMFR0_BIGEND_SHIFT	8
-#define ID_AA64MMFR0_BIGEND_MASK	(0xf << ID_AA64MMFR0_BIGEND_SHIFT)
-#define ID_AA64MMFR0_BIGEND(mmfr0)	\
-	(((mmfr0) & ID_AA64MMFR0_BIGEND_MASK) >> ID_AA64MMFR0_BIGEND_SHIFT)
-
 #ifndef __ASSEMBLY__
 
 /*
@@ -112,12 +103,6 @@ static inline u32 __attribute_const__ read_cpuid_cachetype(void)
 {
 	return read_cpuid(CTR_EL0);
 }
-
-static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
-{
-	return (ID_AA64MMFR0_BIGEND(mmfr0) == 0x1) ||
-		(ID_AA64MMFR0_BIGENDEL0(mmfr0) == 0x1);
-}
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index a7f3d4b..4246e41 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -44,6 +44,9 @@
 #define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
 				     (!!x)<<8 | 0x1f)
 
+#define ID_AA64MMFR0_BIGENDEL0_SHIFT	16
+#define ID_AA64MMFR0_BIGENDEL_SHIFT	8
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index bdb3eb4..72633b9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -22,7 +22,9 @@
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
+#include <asm/sysreg.h>
 
+static bool mixed_endian_el0 = true;
 unsigned long elf_hwcap __read_mostly;
 EXPORT_SYMBOL_GPL(elf_hwcap);
 
@@ -41,6 +43,26 @@ unsigned int compat_elf_hwcap2 __read_mostly;
 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 
 
+bool cpu_supports_mixed_endian_el0(void)
+{
+	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
+}
+
+bool system_supports_mixed_endian_el0(void)
+{
+	return mixed_endian_el0;
+}
+
+static void update_mixed_endian_el0_support(struct cpuinfo_arm64 *info)
+{
+	mixed_endian_el0 &= id_aa64mmfr0_mixed_endian_el0(info->reg_id_aa64mmfr0);
+}
+
+void update_cpu_features(struct cpuinfo_arm64 *info)
+{
+	update_mixed_endian_el0_support(info);
+}
+
 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
 {
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 75d5a86..8307b33 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -35,7 +35,6 @@
  */
 DEFINE_PER_CPU(struct cpuinfo_arm64, cpu_data);
 static struct cpuinfo_arm64 boot_cpu_data;
-static bool mixed_endian_el0 = true;
 
 static char *icache_policy_str[] = {
 	[ICACHE_POLICY_RESERVED] = "RESERVED/UNKNOWN",
@@ -69,26 +68,6 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
 	pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
 }
 
-bool cpu_supports_mixed_endian_el0(void)
-{
-	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
-}
-
-bool system_supports_mixed_endian_el0(void)
-{
-	return mixed_endian_el0;
-}
-
-static void update_mixed_endian_el0_support(struct cpuinfo_arm64 *info)
-{
-	mixed_endian_el0 &= id_aa64mmfr0_mixed_endian_el0(info->reg_id_aa64mmfr0);
-}
-
-static void update_cpu_features(struct cpuinfo_arm64 *info)
-{
-	update_mixed_endian_el0_support(info);
-}
-
 static int check_reg_mask(char *name, u64 mask, u64 boot, u64 cur, int cpu)
 {
 	if ((boot & mask) == (cur & mask))
-- 
1.7.9.5



* [PATCH 05/22] arm64: Move /proc/cpuinfo handling code
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (3 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 04/22] arm64: Move mixed endian support detection Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 06/22] arm64: sys_reg: Define System register encoding Suzuki K. Poulose
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch moves the /proc/cpuinfo handling code from
arch/arm64/kernel/setup.c to arch/arm64/kernel/cpuinfo.c.

No functional changes.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpuinfo.c |  124 +++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/setup.c   |  123 ------------------------------------------
 2 files changed, 124 insertions(+), 123 deletions(-)

diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 8307b33..0dadb69 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -24,8 +24,11 @@
 #include <linux/bug.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
+#include <linux/personality.h>
 #include <linux/preempt.h>
 #include <linux/printk.h>
+#include <linux/seq_file.h>
+#include <linux/sched.h>
 #include <linux/smp.h>
 
 /*
@@ -45,6 +48,127 @@ static char *icache_policy_str[] = {
 
 unsigned long __icache_flags;
 
+static const char *hwcap_str[] = {
+	"fp",
+	"asimd",
+	"evtstrm",
+	"aes",
+	"pmull",
+	"sha1",
+	"sha2",
+	"crc32",
+	"atomics",
+	NULL
+};
+
+#ifdef CONFIG_COMPAT
+static const char *compat_hwcap_str[] = {
+	"swp",
+	"half",
+	"thumb",
+	"26bit",
+	"fastmult",
+	"fpa",
+	"vfp",
+	"edsp",
+	"java",
+	"iwmmxt",
+	"crunch",
+	"thumbee",
+	"neon",
+	"vfpv3",
+	"vfpv3d16",
+	"tls",
+	"vfpv4",
+	"idiva",
+	"idivt",
+	"vfpd32",
+	"lpae",
+	"evtstrm"
+};
+
+static const char *compat_hwcap2_str[] = {
+	"aes",
+	"pmull",
+	"sha1",
+	"sha2",
+	"crc32",
+	NULL
+};
+#endif /* CONFIG_COMPAT */
+
+static int c_show(struct seq_file *m, void *v)
+{
+	int i, j;
+
+	for_each_online_cpu(i) {
+		struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i);
+		u32 midr = cpuinfo->reg_midr;
+
+		/*
+		 * glibc reads /proc/cpuinfo to determine the number of
+		 * online processors, looking for lines beginning with
+		 * "processor".  Give glibc what it expects.
+		 */
+		seq_printf(m, "processor\t: %d\n", i);
+
+		/*
+		 * Dump out the common processor features in a single line.
+		 * Userspace should read the hwcaps with getauxval(AT_HWCAP)
+		 * rather than attempting to parse this, but there's a body of
+		 * software which does already (at least for 32-bit).
+		 */
+		seq_puts(m, "Features\t:");
+		if (personality(current->personality) == PER_LINUX32) {
+#ifdef CONFIG_COMPAT
+			for (j = 0; compat_hwcap_str[j]; j++)
+				if (compat_elf_hwcap & (1 << j))
+					seq_printf(m, " %s", compat_hwcap_str[j]);
+
+			for (j = 0; compat_hwcap2_str[j]; j++)
+				if (compat_elf_hwcap2 & (1 << j))
+					seq_printf(m, " %s", compat_hwcap2_str[j]);
+#endif /* CONFIG_COMPAT */
+		} else {
+			for (j = 0; hwcap_str[j]; j++)
+				if (elf_hwcap & (1 << j))
+					seq_printf(m, " %s", hwcap_str[j]);
+		}
+		seq_puts(m, "\n");
+
+		seq_printf(m, "CPU implementer\t: 0x%02x\n",
+			   MIDR_IMPLEMENTOR(midr));
+		seq_printf(m, "CPU architecture: 8\n");
+		seq_printf(m, "CPU variant\t: 0x%x\n", MIDR_VARIANT(midr));
+		seq_printf(m, "CPU part\t: 0x%03x\n", MIDR_PARTNUM(midr));
+		seq_printf(m, "CPU revision\t: %d\n\n", MIDR_REVISION(midr));
+	}
+
+	return 0;
+}
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	return *pos < 1 ? (void *)1 : NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	++*pos;
+	return NULL;
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= c_show
+};
+
 static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
 {
 	unsigned int cpu = smp_processor_id();
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 4c7bca8..7b651cc 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -28,7 +28,6 @@
 #include <linux/console.h>
 #include <linux/cache.h>
 #include <linux/bootmem.h>
-#include <linux/seq_file.h>
 #include <linux/screen_info.h>
 #include <linux/init.h>
 #include <linux/kexec.h>
@@ -44,7 +43,6 @@
 #include <linux/of_fdt.h>
 #include <linux/of_platform.h>
 #include <linux/efi.h>
-#include <linux/personality.h>
 #include <linux/psci.h>
 
 #include <asm/acpi.h>
@@ -383,124 +381,3 @@ static int __init topology_init(void)
 	return 0;
 }
 subsys_initcall(topology_init);
-
-static const char *hwcap_str[] = {
-	"fp",
-	"asimd",
-	"evtstrm",
-	"aes",
-	"pmull",
-	"sha1",
-	"sha2",
-	"crc32",
-	"atomics",
-	NULL
-};
-
-#ifdef CONFIG_COMPAT
-static const char *compat_hwcap_str[] = {
-	"swp",
-	"half",
-	"thumb",
-	"26bit",
-	"fastmult",
-	"fpa",
-	"vfp",
-	"edsp",
-	"java",
-	"iwmmxt",
-	"crunch",
-	"thumbee",
-	"neon",
-	"vfpv3",
-	"vfpv3d16",
-	"tls",
-	"vfpv4",
-	"idiva",
-	"idivt",
-	"vfpd32",
-	"lpae",
-	"evtstrm"
-};
-
-static const char *compat_hwcap2_str[] = {
-	"aes",
-	"pmull",
-	"sha1",
-	"sha2",
-	"crc32",
-	NULL
-};
-#endif /* CONFIG_COMPAT */
-
-static int c_show(struct seq_file *m, void *v)
-{
-	int i, j;
-
-	for_each_online_cpu(i) {
-		struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i);
-		u32 midr = cpuinfo->reg_midr;
-
-		/*
-		 * glibc reads /proc/cpuinfo to determine the number of
-		 * online processors, looking for lines beginning with
-		 * "processor".  Give glibc what it expects.
-		 */
-		seq_printf(m, "processor\t: %d\n", i);
-
-		/*
-		 * Dump out the common processor features in a single line.
-		 * Userspace should read the hwcaps with getauxval(AT_HWCAP)
-		 * rather than attempting to parse this, but there's a body of
-		 * software which does already (at least for 32-bit).
-		 */
-		seq_puts(m, "Features\t:");
-		if (personality(current->personality) == PER_LINUX32) {
-#ifdef CONFIG_COMPAT
-			for (j = 0; compat_hwcap_str[j]; j++)
-				if (compat_elf_hwcap & (1 << j))
-					seq_printf(m, " %s", compat_hwcap_str[j]);
-
-			for (j = 0; compat_hwcap2_str[j]; j++)
-				if (compat_elf_hwcap2 & (1 << j))
-					seq_printf(m, " %s", compat_hwcap2_str[j]);
-#endif /* CONFIG_COMPAT */
-		} else {
-			for (j = 0; hwcap_str[j]; j++)
-				if (elf_hwcap & (1 << j))
-					seq_printf(m, " %s", hwcap_str[j]);
-		}
-		seq_puts(m, "\n");
-
-		seq_printf(m, "CPU implementer\t: 0x%02x\n",
-			   MIDR_IMPLEMENTOR(midr));
-		seq_printf(m, "CPU architecture: 8\n");
-		seq_printf(m, "CPU variant\t: 0x%x\n", MIDR_VARIANT(midr));
-		seq_printf(m, "CPU part\t: 0x%03x\n", MIDR_PARTNUM(midr));
-		seq_printf(m, "CPU revision\t: %d\n\n", MIDR_REVISION(midr));
-	}
-
-	return 0;
-}
-
-static void *c_start(struct seq_file *m, loff_t *pos)
-{
-	return *pos < 1 ? (void *)1 : NULL;
-}
-
-static void *c_next(struct seq_file *m, void *v, loff_t *pos)
-{
-	++*pos;
-	return NULL;
-}
-
-static void c_stop(struct seq_file *m, void *v)
-{
-}
-
-const struct seq_operations cpuinfo_op = {
-	.start	= c_start,
-	.next	= c_next,
-	.stop	= c_stop,
-	.show	= c_show
-};
-- 
1.7.9.5



* [PATCH 06/22] arm64: sys_reg: Define System register encoding
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (4 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 05/22] arm64: Move /proc/cpuinfo handling code Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 07/22] arm64: Keep track of CPU feature registers Suzuki K. Poulose
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

sys_reg() defines the encoding of a system register as used in the
mrs/msr instructions, i.e. the encoding shifted to the left by 5 bits.
Change it to produce the actual encoding of the register and apply the
shift in the mrs_s/msr_s macros instead. Also clean up some whitespace
and alignment issues in the file.

This will be used in later patches for defining the encodings for the
CPU feature register infrastructure.
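
As a worked example (using the field shifts introduced by this patch),
ID_AA64PFR0_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=4, op2=0:

	sys_reg(3, 0, 0, 4, 0) = (3 << 14) | (0 << 11) | (0 << 7) | (4 << 3) | 0
	                       = 0xc020

and mrs_s/msr_s then shift this by 5, e.g. 'mrs x0, ID_AA64PFR0_EL1'
becomes 0xd5200000 | (0xc020 << 5) | 0 = 0xd5380400.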

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/sysreg.h |   51 +++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 4246e41..04e11b1 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -22,11 +22,11 @@
 
 #include <asm/opcodes.h>
 
-#define SCTLR_EL1_CP15BEN	(0x1 << 5)
-#define SCTLR_EL1_SED		(0x1 << 8)
-
 /*
- * ARMv8 ARM reserves the following encoding for system registers:
+ * sys_reg: Defines the ARMv8 ARM encoding for the System register.
+ *
+ * ARMv8 ARM reserves the following encoding for system registers in the
+ * instructions accessing them.
  * (Ref: ARMv8 ARM, Section: "System instruction class encoding overview",
  *  C5.2, version:ARM DDI 0487A.f)
  *	[20-19] : Op0
@@ -34,15 +34,40 @@
  *	[15-12] : CRn
  *	[11-8]  : CRm
  *	[7-5]   : Op2
+ * Hence we use [ sys_reg() << 5 ] in the mrs/msr instructions.
+ *
  */
+#define Op0_shift	14
+#define Op0_mask	0x3
+#define Op1_shift	11
+#define Op1_mask	0x7
+#define CRn_shift	7
+#define CRn_mask	0xf
+#define CRm_shift	3
+#define CRm_mask	0xf
+#define Op2_shift	0
+#define Op2_mask	0x7
+
+#define sys_reg_Op0(id)	(((id) >> Op0_shift) & Op0_mask)
+#define sys_reg_Op1(id)	(((id) >> Op1_shift) & Op1_mask)
+#define sys_reg_CRn(id)	(((id) >> CRn_shift) & CRn_mask)
+#define sys_reg_CRm(id)	(((id) >> CRm_shift) & CRm_mask)
+#define sys_reg_Op2(id)	(((id) >> Op2_shift) & Op2_mask)
+
 #define sys_reg(op0, op1, crn, crm, op2) \
-	((((op0)&3)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
+	(((op0 & Op0_mask) << Op0_shift) | ((op1) << Op1_shift) | \
+	 ((crn) << CRn_shift) | ((crm) << CRm_shift) | ((op2) << Op2_shift))
+
+
+#define REG_PSTATE_PAN_IMM	sys_reg(0, 0, 4, 0, 4)
+#define SET_PSTATE_PAN(x)	__inst_arm(0xd5000000 |\
+					   (REG_PSTATE_PAN_IMM << 5) |\
+					   (!!x)<<8 | 0x1f)
 
-#define REG_PSTATE_PAN_IMM                     sys_reg(0, 0, 4, 0, 4)
-#define SCTLR_EL1_SPAN                         (1 << 23)
 
-#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
-				     (!!x)<<8 | 0x1f)
+#define SCTLR_EL1_CP15BEN	(0x1 << 5)
+#define SCTLR_EL1_SED		(0x1 << 8)
+#define SCTLR_EL1_SPAN		(0x1 << 23)
 
 #define ID_AA64MMFR0_BIGENDEL0_SHIFT	16
 #define ID_AA64MMFR0_BIGENDEL_SHIFT	8
@@ -55,11 +80,11 @@
 	.equ	__reg_num_xzr, 31
 
 	.macro	mrs_s, rt, sreg
-	.inst	0xd5200000|(\sreg)|(__reg_num_\rt)
+	.inst	0xd5200000|((\sreg) << 5)|(__reg_num_\rt)
 	.endm
 
 	.macro	msr_s, sreg, rt
-	.inst	0xd5000000|(\sreg)|(__reg_num_\rt)
+	.inst	0xd5000000|((\sreg) << 5)|(__reg_num_\rt)
 	.endm
 
 #else
@@ -71,11 +96,11 @@ asm(
 "	.equ	__reg_num_xzr, 31\n"
 "\n"
 "	.macro	mrs_s, rt, sreg\n"
-"	.inst	0xd5200000|(\\sreg)|(__reg_num_\\rt)\n"
+"	.inst	0xd5200000|((\\sreg) << 5)|(__reg_num_\\rt)\n"
 "	.endm\n"
 "\n"
 "	.macro	msr_s, sreg, rt\n"
-"	.inst	0xd5000000|(\\sreg)|(__reg_num_\\rt)\n"
+"	.inst	0xd5000000|((\\sreg) << 5)|(__reg_num_\\rt)\n"
 "	.endm\n"
 );
 
-- 
1.7.9.5



* [PATCH 07/22] arm64: Keep track of CPU feature registers
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (5 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 06/22] arm64: sys_reg: Define System register encoding Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-25 11:38   ` Dave Martin
  2015-09-16 14:21 ` [PATCH 08/22] arm64: Consolidate CPU Sanity check to CPU Feature infrastructure Suzuki K. Poulose
                   ` (16 subsequent siblings)
  23 siblings, 1 reply; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch adds an infrastructure to keep track of the CPU feature
registers on the system. For each register, the infrastructure keeps
track of the system-wide safe value of the feature bits. It also tracks
which fields of a register must match strictly across all the CPUs on
the system, for the SANITY check infrastructure.

The feature bits are classified as one of SCALAR_MIN, SCALAR_MAX and
DISCRETE depending on the implication of the possible values. This
information is used to decide the safe value for a feature.

SCALAR_MIN - The smaller value is safer
SCALAR_MAX - The bigger value is safer
DISCRETE - We can't decide between the two, so a predefined safe_value is used.
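
For instance (taking fields from the ftr_ctr table added below): CTR_EL0.CWG
is SCALAR_MAX, so if one CPU reports 3 and another 4, the system-wide safe
value is 4; DminLine is SCALAR_MIN, so the smallest reported value is used;
L1Ip is DISCRETE, so the predefined safe_val is used when CPUs differ.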

This infrastructure will be later used to make better decisions for:

 - Kernel features (e.g, KVM, Debug)
 - SANITY Check
 - CPU capability
 - ELF HWCAP
 - Exposing CPU Feature register to userspace.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |   49 ++++
 arch/arm64/include/asm/sysreg.h     |  120 ++++++++++
 arch/arm64/kernel/cpufeature.c      |  433 +++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c         |    3 +-
 4 files changed, 604 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index aaa84d9..14302e0 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -36,6 +36,38 @@
 #include <asm/cpu.h>
 #include <linux/kernel.h>
 
+/* CPU feature register tracking */
+enum ftr_type {
+	FTR_DISCRETE,
+	FTR_SCALAR_MIN,
+	FTR_SCALAR_MAX,
+};
+
+#define FTR_STRICT	true
+#define FTR_NONSTRICT	false
+
+struct arm64_ftr_bits {
+	bool		strict;		/* CPU Sanity check
+					 *  strict matching required ? */
+	enum ftr_type	type;
+	u8		shift;
+	u8		width;
+	s64		safe_val;	/* safe value for discrete features */
+};
+
+/*
+ * @arm64_ftr_reg - Feature register
+ * @strict_mask 	Bits which should match across all CPUs for sanity.
+ * @sys_val		Safe value across the CPUs (system view)
+ */
+struct arm64_ftr_reg {
+	u32			sys_id;
+	const char*		name;
+	u64			strict_mask;
+	u64			sys_val;
+	struct arm64_ftr_bits*	ftr_bits;
+};
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
@@ -83,12 +115,29 @@ static inline int __attribute_const__ cpuid_feature_extract_field(u64 features,
 	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
 }
 
+static inline s64 __attribute_const__
+cpuid_feature_extract_field_width(u64 features, int field, u8 width)
+{
+	return (s64)(features << (64 - width - field)) >> (64 - width);
+}
+
+static inline u64 ftr_mask(struct arm64_ftr_bits *ftrp)
+{
+	return (u64) GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
+}
+
+static inline s64 arm64_ftr_value(struct arm64_ftr_bits *ftrp, u64 val)
+{
+	return cpuid_feature_extract_field_width(val, ftrp->shift, ftrp->width);
+}
+
 static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
 {
 	return cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 ||
 		cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
 }
 
+void __init init_cpu_features(struct cpuinfo_arm64 *info);
 void __init setup_cpu_features(void);
 void update_cpu_features(struct cpuinfo_arm64 *info);
 
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 04e11b1..dc72fc6 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -59,6 +59,46 @@
 	 ((crn) << CRn_shift) | ((crm) << CRm_shift) | ((op2) << Op2_shift))
 
 
+#define SYS_MIDR_EL1			sys_reg(3, 0, 0, 0, 0)
+#define SYS_MPIDR_EL1			sys_reg(3, 0, 0, 0, 5)
+#define SYS_REVIDR_EL1			sys_reg(3, 0, 0, 0, 6)
+
+#define SYS_ID_PFR0_EL1			sys_reg(3, 0, 0, 1, 0)
+#define SYS_ID_PFR1_EL1			sys_reg(3, 0, 0, 1, 1)
+#define SYS_ID_DFR0_EL1			sys_reg(3, 0, 0, 1, 2)
+#define SYS_ID_MMFR0_EL1		sys_reg(3, 0, 0, 1, 4)
+#define SYS_ID_MMFR1_EL1		sys_reg(3, 0, 0, 1, 5)
+#define SYS_ID_MMFR2_EL1		sys_reg(3, 0, 0, 1, 6)
+#define SYS_ID_MMFR3_EL1		sys_reg(3, 0, 0, 1, 7)
+
+#define SYS_ID_ISAR0_EL1		sys_reg(3, 0, 0, 2, 0)
+#define SYS_ID_ISAR1_EL1		sys_reg(3, 0, 0, 2, 1)
+#define SYS_ID_ISAR2_EL1		sys_reg(3, 0, 0, 2, 2)
+#define SYS_ID_ISAR3_EL1		sys_reg(3, 0, 0, 2, 3)
+#define SYS_ID_ISAR4_EL1		sys_reg(3, 0, 0, 2, 4)
+#define SYS_ID_ISAR5_EL1		sys_reg(3, 0, 0, 2, 5)
+#define SYS_ID_MMFR4_EL1		sys_reg(3, 0, 0, 2, 6)
+
+#define SYS_MVFR0_EL1			sys_reg(3, 0, 0, 3, 0)
+#define SYS_MVFR1_EL1			sys_reg(3, 0, 0, 3, 1)
+#define SYS_MVFR2_EL1			sys_reg(3, 0, 0, 3, 2)
+
+#define SYS_ID_AA64PFR0_EL1		sys_reg(3, 0, 0, 4, 0)
+#define SYS_ID_AA64PFR1_EL1		sys_reg(3, 0, 0, 4, 1)
+
+#define SYS_ID_AA64DFR0_EL1		sys_reg(3, 0, 0, 5, 0)
+#define SYS_ID_AA64DFR1_EL1		sys_reg(3, 0, 0, 5, 1)
+
+#define SYS_ID_AA64ISAR0_EL1		sys_reg(3, 0, 0, 6, 0)
+#define SYS_ID_AA64ISAR1_EL1		sys_reg(3, 0, 0, 6, 1)
+
+#define SYS_ID_AA64MMFR0_EL1		sys_reg(3, 0, 0, 7, 0)
+#define SYS_ID_AA64MMFR1_EL1		sys_reg(3, 0, 0, 7, 1)
+
+#define SYS_CNTFRQ_EL0			sys_reg(3, 3, 14, 0, 0)
+#define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
+#define SYS_DCZID_EL0			sys_reg(3, 3, 0, 0, 7)
+
 #define REG_PSTATE_PAN_IMM	sys_reg(0, 0, 4, 0, 4)
 #define SET_PSTATE_PAN(x)	__inst_arm(0xd5000000 |\
 					   (REG_PSTATE_PAN_IMM << 5) |\
@@ -69,8 +109,88 @@
 #define SCTLR_EL1_SED		(0x1 << 8)
 #define SCTLR_EL1_SPAN		(0x1 << 23)
 
+/* id_aa64isar0 */
+#define ID_AA64ISAR0_RDM_SHIFT		28
+#define ID_AA64ISAR0_ATOMICS_SHIFT	20
+#define ID_AA64ISAR0_CRC32_SHIFT	16
+#define ID_AA64ISAR0_SHA2_SHIFT		12
+#define ID_AA64ISAR0_SHA1_SHIFT		8
+#define ID_AA64ISAR0_AES_SHIFT		4
+
+/* id_aa64pfr0 */
+#define ID_AA64PFR0_GIC_SHIFT		24
+#define ID_AA64PFR0_ASIMD_SHIFT		20
+#define ID_AA64PFR0_FP_SHIFT		16
+#define ID_AA64PFR0_EL3_SHIFT		12
+#define ID_AA64PFR0_EL2_SHIFT		8
+#define ID_AA64PFR0_EL1_SHIFT		4
+#define ID_AA64PFR0_EL0_SHIFT		0
+
+#define ID_AA64PFR0_FP_NI		0xf
+#define ID_AA64PFR0_FP_ON		0x0
+#define ID_AA64PFR0_ASIMD_NI		0xf
+#define ID_AA64PFR0_ASIMD_ON		0x0
+#define ID_AA64PFR0_EL1_64BIT_ONLY	0x1
+#define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
+
+/* id_aa64mmfr0 */
+#define ID_AA64MMFR0_TGRAN4_SHIFT	28
+#define ID_AA64MMFR0_TGRAN64_SHIFT	24
+#define ID_AA64MMFR0_TGRAN16_SHIFT	20
 #define ID_AA64MMFR0_BIGENDEL0_SHIFT	16
+#define ID_AA64MMFR0_SNSMEM_SHIFT	12
 #define ID_AA64MMFR0_BIGENDEL_SHIFT	8
+#define ID_AA64MMFR0_ASID_SHIFT		4
+#define ID_AA64MMFR0_PARANGE_SHIFT	0
+
+#define ID_AA64MMFR0_TGRAN4_NI		0xf
+#define ID_AA64MMFR0_TGRAN4_ON		0x0
+#define ID_AA64MMFR0_TGRAN64_NI		0xf
+#define ID_AA64MMFR0_TGRAN64_ON		0x0
+#define ID_AA64MMFR0_TGRAN16_NI		0x0
+#define ID_AA64MMFR0_TGRAN16_ON		0x1
+
+/* id_aa64mmfr1 */
+#define ID_AA64MMFR1_PAN_SHIFT		20
+#define ID_AA64MMFR1_LOR_SHIFT		16
+#define ID_AA64MMFR1_HPD_SHIFT		12
+#define ID_AA64MMFR1_VHE_SHIFT		8
+#define ID_AA64MMFR1_VMIDBITS_SHIFT	4
+#define ID_AA64MMFR1_HADBS_SHIFT	0
+
+/* id_aa64dfr0 */
+#define ID_AA64DFR0_CTX_CMPS_SHIFT	28
+#define ID_AA64DFR0_WRPS_SHIFT		20
+#define ID_AA64DFR0_BRPS_SHIFT		12
+#define ID_AA64DFR0_PMUVER_SHIFT	8
+#define ID_AA64DFR0_TRACEVER_SHIFT	4
+#define ID_AA64DFR0_DEBUGVER_SHIFT	0
+
+#define ID_ISAR5_RDM_SHIFT		24
+#define ID_ISAR5_CRC32_SHIFT		16
+#define ID_ISAR5_SHA2_SHIFT		12
+#define ID_ISAR5_SHA1_SHIFT		8
+#define ID_ISAR5_AES_SHIFT		4
+#define ID_ISAR5_SEVL_SHIFT		0
+
+#define MVFR0_FPROUND_SHIFT		28
+#define MVFR0_FPSHVEC_SHIFT		24
+#define MVFR0_FPSQRT_SHIFT		20
+#define MVFR0_FPDIVIDE_SHIFT		16
+#define MVFR0_FPTRAP_SHIFT		12
+#define MVFR0_FPDP_SHIFT		8
+#define MVFR0_FPSP_SHIFT		4
+#define MVFR0_SIMD_SHIFT		0
+
+#define MVFR1_SIMDFMAC_SHIFT		28
+#define MVFR1_FPHP_SHIFT		24
+#define MVFR1_SIMDHP_SHIFT		20
+#define MVFR1_SIMDSP_SHIFT		16
+#define MVFR1_SIMDINT_SHIFT		12
+#define MVFR1_SIMDLS_SHIFT		8
+#define MVFR1_FPDNAN_SHIFT		4
+#define MVFR1_FPFTZ_SHIFT		0
+
 
 #ifdef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 72633b9..9e7d886 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -58,8 +58,441 @@ static void update_mixed_endian_el0_support(struct cpuinfo_arm64 *info)
 	mixed_endian_el0 &= id_aa64mmfr0_mixed_endian_el0(info->reg_id_aa64mmfr0);
 }
 
+#define ARM64_FTR_BITS(ftr_strict, ftr_type, ftr_shift, ftr_width, ftr_safe_val) \
+	{							\
+		.strict = ftr_strict,				\
+		.type = ftr_type,				\
+		.shift = ftr_shift,				\
+		.width = ftr_width,				\
+		.safe_val = ftr_safe_val,			\
+	}
+
+#define ARM64_FTR_END					\
+	{						\
+		.width = 0,				\
+	}
+
+static struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+	/* Linux doesn't care about the EL3 */
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
+	/* Linux shouldn't care about secure memory */
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+	/*
+	 * Differing PARange is fine as long as all peripherals and memory are mapped
+	 * within the minimum PARange of all CPUs
+	 */
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_SCALAR_MIN, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_ctr[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 31, 1, 1),	// RAO
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 3, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MAX, 24, 4, 0),	// CWG
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),	// ERG
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 1),	// DminLine
+	/*
+	 * Linux can handle differing I-cache policies. Userspace JITs will
+	 * make use of *minLine
+	 */
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, 14, 2, 0),	// L1Ip
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 10, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// IminLine
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_mmfr0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 4, 0),	// InnerShr
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 24, 4, 0),	// FCSE
+	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_SCALAR_MIN, 20, 4, 0),	// AuxReg
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 16, 4, 0),	// TCM
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// ShareLvl
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// OuterShr
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// PMSA
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// VMSA
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_mvfr2[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// FPMisc
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// SIMDMisc
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_dczid[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 5, 27, 0),// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 1, 1),	// DZP
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// BS
+	ARM64_FTR_END,
+};
+
+
+static struct arm64_ftr_bits ftr_id_isar5[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 20, 4, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_mmfr4[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// ac2
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_id_pfr0[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 16, 16, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// State3
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// State2
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// State1
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// State0
+	ARM64_FTR_END,
+};
+
+/*
+ * Common ftr bits for a 32bit register with all hidden, strict
+ * attributes, with 4bit feature fields and a default safe value of
+ * 0. Covers the following 32bit registers:
+ * id_isar[0-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
+ */
+static struct arm64_ftr_bits ftr_generic_scalar_32bit[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 12, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 8, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 4, 4, 0),
+	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_generic[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_generic32[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 32, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_bits ftr_aa64raz[] = {
+	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
+	ARM64_FTR_END,
+};
+
+#define ARM64_FTR_REG(id, ftr_table)			\
+	[ sys_reg_Op2(id) ] = {				\
+		.sys_id = id,				\
+		.name = #id,				\
+		.ftr_bits = &((ftr_table)[0]),		\
+	}
+
+static struct arm64_ftr_reg crm_1[] = {
+	ARM64_FTR_REG(SYS_ID_PFR0_EL1, ftr_id_pfr0),
+	ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_DFR0_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_MMFR0_EL1, ftr_id_mmfr0),
+	ARM64_FTR_REG(SYS_ID_MMFR1_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_MMFR2_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_MMFR3_EL1, ftr_generic_scalar_32bit),
+};
+
+static struct arm64_ftr_reg crm_2[] = {
+	ARM64_FTR_REG(SYS_ID_ISAR0_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_ISAR1_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_ISAR2_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_ISAR3_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_ISAR4_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_ID_ISAR5_EL1, ftr_id_isar5),
+	ARM64_FTR_REG(SYS_ID_MMFR4_EL1, ftr_id_mmfr4),
+};
+
+static struct arm64_ftr_reg crm_3[] = {
+	ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_generic_scalar_32bit),
+	ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2),
+};
+
+static struct arm64_ftr_reg crm_4[] = {
+	ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
+	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_aa64raz),
+};
+
+static struct arm64_ftr_reg crm_5[] = {
+	ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0),
+	ARM64_FTR_REG(SYS_ID_AA64DFR1_EL1, ftr_generic),
+};
+
+static struct arm64_ftr_reg crm_6[] = {
+	ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
+	ARM64_FTR_REG(SYS_ID_AA64ISAR1_EL1, ftr_aa64raz),
+};
+
+static struct arm64_ftr_reg crm_7[] = {
+	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
+	ARM64_FTR_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1),
+};
+
+static struct arm64_ftr_reg op1_3[] = {
+	ARM64_FTR_REG(SYS_CTR_EL0, ftr_ctr),
+	ARM64_FTR_REG(SYS_DCZID_EL0, ftr_dczid),
+	ARM64_FTR_REG(SYS_CNTFRQ_EL0, ftr_generic32),
+};
+
+#define ARM64_REG_TABLE(table) 		\
+	{ .n = ARRAY_SIZE(table), .regs = table }
+static struct arm64_reg_table {
+	int n;
+	struct arm64_ftr_reg *regs;
+} op1_0[] = {
+	ARM64_REG_TABLE(crm_1),
+	ARM64_REG_TABLE(crm_2),
+	ARM64_REG_TABLE(crm_3),
+	ARM64_REG_TABLE(crm_4),
+	ARM64_REG_TABLE(crm_5),
+	ARM64_REG_TABLE(crm_6),
+	ARM64_REG_TABLE(crm_7),
+};
+
+/*
+ * get_arm64_sys_reg - Lookup a feature register entry using its
+ * sys_reg() encoding.
+ *
+ * We track only the following space:
+ * Op0 = 3, Op1 = 0, CRn = 0, CRm = [1 - 7], Op2 = [0 - 7]
+ * Op0 = 3, Op1 = 3, CRn = 0, CRm = 0, Op2 = { 1, 7 } 	(CTR, DCZID)
+ * Op0 = 3, Op1 = 3, CRn = 14, CRm = 0, Op2 = 0		(CNTFRQ)
+ *
+ * The space (3, 0, 0, {1-7}, {0-7}) is arranged in a 2D array op1_0,
+ * indexed by CRm and Op2. Since not all CRms have fully allocated Op2s,
+ * op1_0[CRm - 1].n indicates the number of Op2 entries tracked for that CRm.
+ *
+ * Since we have only a limited number of entries with Op1 = 3, we use a
+ * linear search to find the register.
+ *
+ */
+static struct arm64_ftr_reg* get_arm64_sys_reg(u32 sys_id)
+{
+	int i;
+	u8 op2, crn, crm;
+	u8 op1 = sys_reg_Op1(sys_id);
+
+	if (sys_reg_Op0(sys_id) != 3)
+		return NULL;
+	switch (op1) {
+	case 0:
+
+		crm = sys_reg_CRm(sys_id);
+		op2 = sys_reg_Op2(sys_id);
+		crn = sys_reg_CRn(sys_id);
+		if (crn || !crm || crm > 7)
+			return NULL;
+		if (op2 < op1_0[crm - 1].n &&
+			op1_0[crm - 1].regs[op2].sys_id == sys_id)
+			return &op1_0[crm - 1].regs[op2];
+		return NULL;
+	case 3:
+		for (i = 0; i < ARRAY_SIZE(op1_3); i++)
+			if (op1_3[i].sys_id == sys_id)
+				return &op1_3[i];
+	}
+	return NULL;
+}
+
+static u64 arm64_ftr_set_value(struct arm64_ftr_bits *ftrp, s64 reg, s64 ftr_val)
+{
+	u64 mask = ftr_mask(ftrp);
+
+	reg &= ~mask;
+	reg |= (ftr_val << ftrp->shift) & mask;
+	return reg;
+}
+
+static s64 arm64_ftr_safe_value(struct arm64_ftr_bits *ftrp, s64 new, s64 cur)
+{
+	switch(ftrp->type) {
+	case FTR_DISCRETE:
+		return ftrp->safe_val;
+	case FTR_SCALAR_MIN:
+		return new < cur ? new : cur;
+	case FTR_SCALAR_MAX:
+		return new > cur ? new : cur;
+	}
+
+	BUG();
+	return 0;
+}
+
+/*
+ * Initialise the CPU feature register from Boot CPU values.
+ * Also initialises the strict_mask for the register.
+ */
+static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
+{
+	u64 val = 0;
+	u64 strict_mask = ~0x0ULL;
+	struct arm64_ftr_bits *ftrp;
+	struct arm64_ftr_reg *reg = get_arm64_sys_reg(sys_reg);
+
+	BUG_ON(!reg);
+
+	for(ftrp  = reg->ftr_bits; ftrp->width; ftrp++) {
+		s64 ftr_new = arm64_ftr_value(ftrp, new);
+
+		val = arm64_ftr_set_value(ftrp, val, ftr_new);
+		if (!ftrp->strict)
+			strict_mask &= ~ftr_mask(ftrp);
+	}
+	reg->sys_val = val;
+	reg->strict_mask = strict_mask;
+}
+
+void __init init_cpu_features(struct cpuinfo_arm64 *info)
+{
+	init_cpu_ftr_reg(SYS_CTR_EL0, info->reg_ctr);
+	init_cpu_ftr_reg(SYS_DCZID_EL0, info->reg_dczid);
+	init_cpu_ftr_reg(SYS_CNTFRQ_EL0, info->reg_cntfrq);
+	init_cpu_ftr_reg(SYS_ID_AA64DFR0_EL1, info->reg_id_aa64dfr0);
+	init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
+	init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
+	init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
+	init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
+	init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
+	init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
+	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
+	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+	init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
+	init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
+	init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
+	init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
+	init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
+	init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
+	init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
+	init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
+	init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
+	init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+	init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
+	init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+	init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
+	init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
+	init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+
+	/* This will be removed later, once we start using the infrastructure */
+	update_mixed_endian_el0_support(info);
+}
+
+static void update_cpu_ftr_reg(u32 sys_reg, u64 new)
+{
+	struct arm64_ftr_bits *ftrp;
+	struct arm64_ftr_reg *reg = get_arm64_sys_reg(sys_reg);
+
+	BUG_ON(!reg);
+
+	for(ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
+		s64 ftr_cur = arm64_ftr_value(ftrp, reg->sys_val);
+		s64 ftr_new = arm64_ftr_value(ftrp, new);
+
+		if (ftr_cur == ftr_new)
+			continue;
+		/* Find a safe value */
+		ftr_new = arm64_ftr_safe_value(ftrp, ftr_new, ftr_cur);
+		reg->sys_val = arm64_ftr_set_value(ftrp, reg->sys_val, ftr_new);
+	}
+
+}
+
+/* Update CPU feature register from non-boot CPU */
 void update_cpu_features(struct cpuinfo_arm64 *info)
 {
+	update_cpu_ftr_reg(SYS_CTR_EL0, info->reg_ctr);
+	update_cpu_ftr_reg(SYS_DCZID_EL0, info->reg_dczid);
+	update_cpu_ftr_reg(SYS_CNTFRQ_EL0, info->reg_cntfrq);
+	update_cpu_ftr_reg(SYS_ID_AA64DFR0_EL1, info->reg_id_aa64dfr0);
+	update_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
+	update_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
+	update_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
+	update_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
+	update_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
+	update_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
+	update_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
+	update_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+	update_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
+	update_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
+	update_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
+	update_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
+	update_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
+	update_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
+	update_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
+	update_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
+	update_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
+	update_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+	update_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
+	update_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+	update_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
+	update_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
+	update_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+
 	update_mixed_endian_el0_support(info);
 }
 
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 0dadb69..857aaf0 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -340,7 +340,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 
 	check_local_cpu_errata();
 	check_local_cpu_features();
-	update_cpu_features(info);
 }
 
 void cpuinfo_store_cpu(void)
@@ -348,6 +347,7 @@ void cpuinfo_store_cpu(void)
 	struct cpuinfo_arm64 *info = this_cpu_ptr(&cpu_data);
 	__cpuinfo_store_cpu(info);
 	cpuinfo_sanity_check(info);
+	update_cpu_features(info);
 }
 
 void __init cpuinfo_store_boot_cpu(void)
@@ -356,4 +356,5 @@ void __init cpuinfo_store_boot_cpu(void)
 	__cpuinfo_store_cpu(info);
 
 	boot_cpu_data = *info;
+	init_cpu_features(&boot_cpu_data);
 }
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 08/22] arm64: Consolidate CPU Sanity check to CPU Feature infrastructure
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (6 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 07/22] arm64: Keep track of CPU feature registers Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 09/22] arm64: Read system wide CPUID value Suzuki K. Poulose
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch consolidates the CPU sanity check into the new CPU feature
infrastructure.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    3 +-
 arch/arm64/kernel/cpufeature.c      |  163 ++++++++++++++++++++++++++++-------
 arch/arm64/kernel/cpuinfo.c         |  113 +-----------------------
 3 files changed, 133 insertions(+), 146 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 14302e0..7927ac2 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -139,7 +139,8 @@ static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
 
 void __init init_cpu_features(struct cpuinfo_arm64 *info);
 void __init setup_cpu_features(void);
-void update_cpu_features(struct cpuinfo_arm64 *info);
+void update_cpu_features(int cpu, struct cpuinfo_arm64 *info,
+				 struct cpuinfo_arm64 *boot);
 
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9e7d886..960a62f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -442,12 +442,9 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	update_mixed_endian_el0_support(info);
 }
 
-static void update_cpu_ftr_reg(u32 sys_reg, u64 new)
+static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
 {
 	struct arm64_ftr_bits *ftrp;
-	struct arm64_ftr_reg *reg = get_arm64_sys_reg(sys_reg);
-
-	BUG_ON(!reg);
 
 	for(ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
 		s64 ftr_cur = arm64_ftr_value(ftrp, reg->sys_val);
@@ -462,36 +459,136 @@ static void update_cpu_ftr_reg(u32 sys_reg, u64 new)
 
 }
 
-/* Update CPU feature register from non-boot CPU */
-void update_cpu_features(struct cpuinfo_arm64 *info)
+static int check_update_ftr_reg(u32 sys_reg, int cpu, u64 val, u64 boot)
 {
-	update_cpu_ftr_reg(SYS_CTR_EL0, info->reg_ctr);
-	update_cpu_ftr_reg(SYS_DCZID_EL0, info->reg_dczid);
-	update_cpu_ftr_reg(SYS_CNTFRQ_EL0, info->reg_cntfrq);
-	update_cpu_ftr_reg(SYS_ID_AA64DFR0_EL1, info->reg_id_aa64dfr0);
-	update_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1);
-	update_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0);
-	update_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1);
-	update_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0);
-	update_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1);
-	update_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
-	update_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
-	update_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
-	update_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
-	update_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
-	update_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
-	update_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
-	update_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
-	update_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
-	update_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
-	update_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
-	update_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
-	update_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
-	update_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
-	update_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
-	update_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
-	update_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
-	update_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+	struct arm64_ftr_reg *regp = get_arm64_sys_reg(sys_reg);
+
+	BUG_ON(!regp);
+	update_cpu_ftr_reg(regp, val);
+	if ((boot & regp->strict_mask) == (val & regp->strict_mask))
+		return 0;
+	pr_warn("SANITY CHECK: Unexpected variation in %s. Boot CPU: %#016llx, CPU%d: %#016llx\n",
+			regp->name, boot, cpu, val);
+	return 1;
+}
+
+/*
+ * Update CPU feature register from non-boot CPU.
+ * Also performs SANITY check w.r.t the values from boot CPU.
+ */
+void update_cpu_features(int cpu,
+			 struct cpuinfo_arm64 *info,
+			 struct cpuinfo_arm64 *boot)
+{
+	int taint = 0;
+
+	/*
+	 * The kernel can handle differing I-cache policies, but otherwise
+	 * caches should look identical. Userspace JITs will make use of
+	 * *minLine.
+	 */
+	taint |= check_update_ftr_reg(SYS_CTR_EL0, cpu,
+				      info->reg_ctr, boot->reg_ctr);
+
+	/*
+	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
+	 * could result in too much or too little memory being zeroed if a
+	 * process is preempted and migrated between CPUs.
+	 */
+	taint |= check_update_ftr_reg(SYS_DCZID_EL0, cpu,
+				      info->reg_dczid, boot->reg_dczid);
+
+	/* If different, timekeeping will be broken (especially with KVM) */
+	taint |= check_update_ftr_reg(SYS_CNTFRQ_EL0, cpu,
+				      info->reg_cntfrq, boot->reg_cntfrq);
+
+	/*
+	 * The kernel uses self-hosted debug features and expects CPUs to
+	 * support identical debug features. We presently need CTX_CMPs, WRPs,
+	 * and BRPs to be identical.
+	 * ID_AA64DFR1 is currently RES0.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_AA64DFR0_EL1, cpu,
+				      info->reg_id_aa64dfr0, boot->reg_id_aa64dfr0);
+	taint |= check_update_ftr_reg(SYS_ID_AA64DFR1_EL1, cpu,
+				      info->reg_id_aa64dfr1, boot->reg_id_aa64dfr1);
+	/*
+	 * Even in big.LITTLE, processors should be identical instruction-set
+	 * wise.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR0_EL1, cpu,
+				      info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
+	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
+				      info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
+
+	/*
+	 * Differing PARange support is fine as long as all peripherals and
+	 * memory are mapped within the minimum PARange of all CPUs.
+	 * Linux should not care about secure memory.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR0_EL1, cpu,
+				      info->reg_id_aa64mmfr0, boot->reg_id_aa64mmfr0);
+	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR1_EL1, cpu,
+				      info->reg_id_aa64mmfr1, boot->reg_id_aa64mmfr1);
+
+	/*
+	 * EL3 is not our concern.
+	 * ID_AA64PFR1 is currently RES0.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
+				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
+	taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
+				      info->reg_id_aa64pfr1, boot->reg_id_aa64pfr1);
+
+	/*
+	 * If we have AArch32, we care about 32-bit features for compat. These
+	 * registers should be RES0 otherwise.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
+				      info->reg_id_dfr0, boot->reg_id_dfr0);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
+				      info->reg_id_isar0, boot->reg_id_isar0);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
+				      info->reg_id_isar1, boot->reg_id_isar1);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
+				      info->reg_id_isar2, boot->reg_id_isar2);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
+				      info->reg_id_isar3, boot->reg_id_isar3);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
+			    	      info->reg_id_isar4, boot->reg_id_isar4);
+	taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
+				      info->reg_id_isar5, boot->reg_id_isar5);
+
+	/*
+	 * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
+	 * ACTLR formats could differ across CPUs and therefore would have to
+	 * be trapped for virtualization anyway.
+	 */
+	taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
+				     info->reg_id_mmfr0, boot->reg_id_mmfr0);
+	taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
+				      info->reg_id_mmfr1, boot->reg_id_mmfr1);
+	taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
+				      info->reg_id_mmfr2, boot->reg_id_mmfr2);
+	taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
+				      info->reg_id_mmfr3, boot->reg_id_mmfr3);
+	taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
+				      info->reg_id_pfr0, boot->reg_id_pfr0);
+	taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
+				      info->reg_id_pfr1, boot->reg_id_pfr1);
+	taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
+				      info->reg_mvfr0, boot->reg_mvfr0);
+	taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
+				      info->reg_mvfr1, boot->reg_mvfr1);
+	taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
+				      info->reg_mvfr2, boot->reg_mvfr2);
+
+	/*
+	 * Mismatched CPU features are a recipe for disaster. Don't even
+	 * pretend to support them.
+	 */
+	WARN_TAINT_ONCE(taint, TAINT_CPU_OUT_OF_SPEC,
+			"Unsupported CPU feature variation.\n");
 
 	update_mixed_endian_el0_support(info);
 }
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 857aaf0..f25869e 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -192,116 +192,6 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
 	pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
 }
 
-static int check_reg_mask(char *name, u64 mask, u64 boot, u64 cur, int cpu)
-{
-	if ((boot & mask) == (cur & mask))
-		return 0;
-
-	pr_warn("SANITY CHECK: Unexpected variation in %s. Boot CPU: %#016lx, CPU%d: %#016lx\n",
-		name, (unsigned long)boot, cpu, (unsigned long)cur);
-
-	return 1;
-}
-
-#define CHECK_MASK(field, mask, boot, cur, cpu) \
-	check_reg_mask(#field, mask, (boot)->reg_ ## field, (cur)->reg_ ## field, cpu)
-
-#define CHECK(field, boot, cur, cpu) \
-	CHECK_MASK(field, ~0ULL, boot, cur, cpu)
-
-/*
- * Verify that CPUs don't have unexpected differences that will cause problems.
- */
-static void cpuinfo_sanity_check(struct cpuinfo_arm64 *cur)
-{
-	unsigned int cpu = smp_processor_id();
-	struct cpuinfo_arm64 *boot = &boot_cpu_data;
-	unsigned int diff = 0;
-
-	/*
-	 * The kernel can handle differing I-cache policies, but otherwise
-	 * caches should look identical. Userspace JITs will make use of
-	 * *minLine.
-	 */
-	diff |= CHECK_MASK(ctr, 0xffff3fff, boot, cur, cpu);
-
-	/*
-	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
-	 * could result in too much or too little memory being zeroed if a
-	 * process is preempted and migrated between CPUs.
-	 */
-	diff |= CHECK(dczid, boot, cur, cpu);
-
-	/* If different, timekeeping will be broken (especially with KVM) */
-	diff |= CHECK(cntfrq, boot, cur, cpu);
-
-	/*
-	 * The kernel uses self-hosted debug features and expects CPUs to
-	 * support identical debug features. We presently need CTX_CMPs, WRPs,
-	 * and BRPs to be identical.
-	 * ID_AA64DFR1 is currently RES0.
-	 */
-	diff |= CHECK(id_aa64dfr0, boot, cur, cpu);
-	diff |= CHECK(id_aa64dfr1, boot, cur, cpu);
-
-	/*
-	 * Even in big.LITTLE, processors should be identical instruction-set
-	 * wise.
-	 */
-	diff |= CHECK(id_aa64isar0, boot, cur, cpu);
-	diff |= CHECK(id_aa64isar1, boot, cur, cpu);
-
-	/*
-	 * Differing PARange support is fine as long as all peripherals and
-	 * memory are mapped within the minimum PARange of all CPUs.
-	 * Linux should not care about secure memory.
-	 * ID_AA64MMFR1 is currently RES0.
-	 */
-	diff |= CHECK_MASK(id_aa64mmfr0, 0xffffffffffff0ff0, boot, cur, cpu);
-	diff |= CHECK(id_aa64mmfr1, boot, cur, cpu);
-
-	/*
-	 * EL3 is not our concern.
-	 * ID_AA64PFR1 is currently RES0.
-	 */
-	diff |= CHECK_MASK(id_aa64pfr0, 0xffffffffffff0fff, boot, cur, cpu);
-	diff |= CHECK(id_aa64pfr1, boot, cur, cpu);
-
-	/*
-	 * If we have AArch32, we care about 32-bit features for compat. These
-	 * registers should be RES0 otherwise.
-	 */
-	diff |= CHECK(id_dfr0, boot, cur, cpu);
-	diff |= CHECK(id_isar0, boot, cur, cpu);
-	diff |= CHECK(id_isar1, boot, cur, cpu);
-	diff |= CHECK(id_isar2, boot, cur, cpu);
-	diff |= CHECK(id_isar3, boot, cur, cpu);
-	diff |= CHECK(id_isar4, boot, cur, cpu);
-	diff |= CHECK(id_isar5, boot, cur, cpu);
-	/*
-	 * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
-	 * ACTLR formats could differ across CPUs and therefore would have to
-	 * be trapped for virtualization anyway.
-	 */
-	diff |= CHECK_MASK(id_mmfr0, 0xff0fffff, boot, cur, cpu);
-	diff |= CHECK(id_mmfr1, boot, cur, cpu);
-	diff |= CHECK(id_mmfr2, boot, cur, cpu);
-	diff |= CHECK(id_mmfr3, boot, cur, cpu);
-	diff |= CHECK(id_pfr0, boot, cur, cpu);
-	diff |= CHECK(id_pfr1, boot, cur, cpu);
-
-	diff |= CHECK(mvfr0, boot, cur, cpu);
-	diff |= CHECK(mvfr1, boot, cur, cpu);
-	diff |= CHECK(mvfr2, boot, cur, cpu);
-
-	/*
-	 * Mismatched CPU features are a recipe for disaster. Don't even
-	 * pretend to support them.
-	 */
-	WARN_TAINT_ONCE(diff, TAINT_CPU_OUT_OF_SPEC,
-			"Unsupported CPU feature variation.\n");
-}
-
 static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 {
 	info->reg_cntfrq = arch_timer_get_cntfrq();
@@ -346,8 +236,7 @@ void cpuinfo_store_cpu(void)
 {
 	struct cpuinfo_arm64 *info = this_cpu_ptr(&cpu_data);
 	__cpuinfo_store_cpu(info);
-	cpuinfo_sanity_check(info);
-	update_cpu_features(info);
+	update_cpu_features(smp_processor_id(), info, &boot_cpu_data);
 }
 
 void __init cpuinfo_store_boot_cpu(void)
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 09/22] arm64: Read system wide CPUID value
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (7 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 08/22] arm64: Consolidate CPU Sanity check to CPU Feature infrastructure Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 10/22] arm64: Cleanup mixed endian support detection Suzuki K. Poulose
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Add an API for reading the safe CPUID value across the
system from the new infrastructure.
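
For illustration, a caller can combine this with
cpuid_feature_extract_field() to test a feature field against the
system wide safe value. A minimal sketch (the register and field used
here are only examples, not part of this patch):

	u64 mmfr1 = read_system_reg(SYS_ID_AA64MMFR1_EL1);

	/* Non-zero only if every CPU brought up so far implements PAN */
	if (cpuid_feature_extract_field(mmfr1, ID_AA64MMFR1_PAN_SHIFT) >= 1)
		enable_pan_handling();	/* hypothetical consumer */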

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    2 ++
 arch/arm64/kernel/cpufeature.c      |    9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7927ac2..f59e90a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -149,6 +149,8 @@ void check_local_cpu_features(void);
 bool cpu_supports_mixed_endian_el0(void);
 bool system_supports_mixed_endian_el0(void);
 
+u64 read_system_reg(u32 id);
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 960a62f..a736c13 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -593,6 +593,15 @@ void update_cpu_features(int cpu,
 	update_mixed_endian_el0_support(info);
 }
 
+u64 read_system_reg(u32 id)
+{
+	struct arm64_ftr_reg *regp = get_arm64_sys_reg(id);
+
+	/* We shouldn't get a request for an unsupported register */
+	BUG_ON(!regp);
+	return regp->sys_val;
+}
+
 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
 {
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 10/22] arm64: Cleanup mixed endian support detection
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (8 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 09/22] arm64: Read system wide CPUID value Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 11/22] arm64: Populate cpuinfo after notify_cpu_starting Suzuki K. Poulose
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Make use of the system wide safe register value to decide the support
for mixed endian.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c |   32 ++++++++++----------------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a736c13..7010617 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -24,7 +24,6 @@
 #include <asm/processor.h>
 #include <asm/sysreg.h>
 
-static bool mixed_endian_el0 = true;
 unsigned long elf_hwcap __read_mostly;
 EXPORT_SYMBOL_GPL(elf_hwcap);
 
@@ -42,22 +41,6 @@ unsigned int compat_elf_hwcap2 __read_mostly;
 
 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 
-
-bool cpu_supports_mixed_endian_el0(void)
-{
-	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
-}
-
-bool system_supports_mixed_endian_el0(void)
-{
-	return mixed_endian_el0;
-}
-
-static void update_mixed_endian_el0_support(struct cpuinfo_arm64 *info)
-{
-	mixed_endian_el0 &= id_aa64mmfr0_mixed_endian_el0(info->reg_id_aa64mmfr0);
-}
-
 #define ARM64_FTR_BITS(ftr_strict, ftr_type, ftr_shift, ftr_width, ftr_safe_val) \
 	{							\
 		.strict = ftr_strict,				\
@@ -437,9 +420,6 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
 	init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
 	init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
-
-	/* This will be removed later, once we start using the infrastructure */
-	update_mixed_endian_el0_support(info);
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -589,8 +569,6 @@ void update_cpu_features(int cpu,
 	 */
 	WARN_TAINT_ONCE(taint, TAINT_CPU_OUT_OF_SPEC,
 			"Unsupported CPU feature variation.\n");
-
-	update_mixed_endian_el0_support(info);
 }
 
 u64 read_system_reg(u32 id)
@@ -680,6 +658,16 @@ void check_local_cpu_features(void)
 	check_cpu_capabilities(arm64_features, "detected feature:");
 }
 
+bool cpu_supports_mixed_endian_el0(void)
+{
+	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
+}
+
+bool system_supports_mixed_endian_el0(void)
+{
+	return id_aa64mmfr0_mixed_endian_el0(read_system_reg(SYS_ID_AA64MMFR0_EL1));
+}
+
 void __init setup_cpu_features(void)
 {
 	u64 features;
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 11/22] arm64: Populate cpuinfo after notify_cpu_starting
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (9 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 10/22] arm64: Cleanup mixed endian support detection Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 12/22] arm64: Delay cpu feature checks Suzuki K. Poulose
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch delays populating the cpuinfo for a new (hotplugged)
CPU until the notifiers have executed. This will enable us to verify
whether the new (hotplugged) CPU has all the capabilities which the
system has already established. If it doesn't, we can prevent it from
coming online and from modifying the system wide feature register
status.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/smp.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index cb3e0d8..6987de4 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -163,14 +163,14 @@ asmlinkage void secondary_start_kernel(void)
 		cpu_ops[cpu]->cpu_postboot();
 
 	/*
-	 * Log the CPU info before it is marked online and might get read.
+	 * Enable GIC and timers.
 	 */
-	cpuinfo_store_cpu();
+	notify_cpu_starting(cpu);
 
 	/*
-	 * Enable GIC and timers.
+	 * Log the CPU info before it is marked online and might get read.
 	 */
-	notify_cpu_starting(cpu);
+	cpuinfo_store_cpu();
 
 	smp_store_cpu_info(cpu);
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 12/22] arm64: Delay cpu feature checks
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (10 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 11/22] arm64: Populate cpuinfo after notify_cpu_starting Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 13/22] arm64: Make use of system wide capability checks Suzuki K. Poulose
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

At the moment we run through the arm64_features capability list for
each CPU and set the capability if one of the CPUs supports it. This
could be problematic in a heterogeneous system with differing
capabilities. Delay the CPU feature checks until all the enabled CPUs
are up, so that we can make better decisions based on the overall
system capability.

The CPU errata checks are not delayed and are still executed per CPU
to detect the respective capabilities. If we ever come across a
non-errata capability that needs to be checked on each CPU, we could
introduce it via a new capability table (or a flag), which can be
processed per CPU.

Registers a hotplug notifier to run through the system capabilities,
ensure that the newly hotplugged CPU has all the capabilities
established at boot and enable them on that CPU. If the CPU is missing
at least one of the established capabilities, it is prevented from
being marked online to avoid unexpected system behavior.

Renames the check_local_cpu_features() => check_cpu_features().
The next patch will make the feature checks use the system wide
safe value of a feature register.

NOTE: The enable() methods associated with the capability is scheduled
on all the CPUs (which is the only use case). If we need a different type
of 'enable()' which only needs to be run once on any CPU, we should be
able to handle that when needed.
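
For reference, the enable() hook now takes a void * argument so that it
can be passed directly to on_each_cpu(); a minimal sketch (the hook
below is hypothetical, cpu_enable_pan in this patch follows the same
shape):

	/* Hypothetical enable() hook; runs on the CPU it is scheduled on */
	static void cpu_enable_foo(void *__unused)
	{
		/* perform the per-CPU setup for the feature */
	}

	/* Scheduled on every active CPU once the capability is established */
	on_each_cpu(cpu_enable_foo, NULL, true);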

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    3 +-
 arch/arm64/include/asm/processor.h  |    2 +-
 arch/arm64/kernel/cpufeature.c      |  115 +++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/cpuinfo.c         |    1 -
 arch/arm64/mm/fault.c               |    2 +-
 5 files changed, 113 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f59e90a..e2b5a21 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -72,7 +72,7 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	bool (*matches)(const struct arm64_cpu_capabilities *);
-	void (*enable)(void);
+	void (*enable)(void *);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
@@ -145,7 +145,6 @@ void update_cpu_features(int cpu, struct cpuinfo_arm64 *info,
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
 void check_local_cpu_errata(void);
-void check_local_cpu_features(void);
 bool cpu_supports_mixed_endian_el0(void);
 bool system_supports_mixed_endian_el0(void);
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 98f3235..4acb7ca 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -186,6 +186,6 @@ static inline void spin_lock_prefetch(const void *x)
 
 #endif
 
-void cpu_enable_pan(void);
+void cpu_enable_pan(void *__unused);
 
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7010617..c68d15e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -18,6 +18,7 @@
 
 #define pr_fmt(fmt) "CPU features: " fmt
 
+#include <linux/notifier.h>
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
@@ -646,16 +647,119 @@ void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 		cpus_set_cap(caps[i].capability);
 	}
 
-	/* second pass allows enable() to consider interacting capabilities */
-	for (i = 0; caps[i].desc; i++) {
-		if (cpus_have_cap(caps[i].capability) && caps[i].enable)
-			caps[i].enable();
+	/*
+	 * The second pass allows enable() to be invoked on each active CPU
+	 * to consider interacting capabilities.
+	 */
+	for (i = 0; caps[i].desc; i++)
+		if (caps[i].enable && cpus_have_cap(caps[i].capability))
+			on_each_cpu(caps[i].enable, NULL, true);
+}
+
+/*
+ * read_cpu_sysreg() - Used by a STARTING cpu before cpuinfo is populated.
+ */
+static u64 read_cpu_sysreg(u32 sys_id)
+{
+	switch(sys_id) {
+	case SYS_ID_PFR0_EL1:		return (u64)read_cpuid(ID_PFR0_EL1);
+	case SYS_ID_PFR1_EL1:		return (u64)read_cpuid(ID_PFR1_EL1);
+	case SYS_ID_DFR0_EL1:		return (u64)read_cpuid(ID_DFR0_EL1);
+	case SYS_ID_MMFR0_EL1:		return (u64)read_cpuid(ID_MMFR0_EL1);
+	case SYS_ID_MMFR1_EL1:		return (u64)read_cpuid(ID_MMFR1_EL1);
+	case SYS_ID_MMFR2_EL1:		return (u64)read_cpuid(ID_MMFR2_EL1);
+	case SYS_ID_MMFR3_EL1:		return (u64)read_cpuid(ID_MMFR3_EL1);
+	case SYS_ID_ISAR0_EL1:		return (u64)read_cpuid(ID_ISAR0_EL1);
+	case SYS_ID_ISAR1_EL1:		return (u64)read_cpuid(ID_ISAR1_EL1);
+	case SYS_ID_ISAR2_EL1:		return (u64)read_cpuid(ID_ISAR2_EL1);
+	case SYS_ID_ISAR3_EL1:		return (u64)read_cpuid(ID_ISAR3_EL1);
+	case SYS_ID_ISAR4_EL1:		return (u64)read_cpuid(ID_ISAR4_EL1);
+	case SYS_ID_ISAR5_EL1:		return (u64)read_cpuid(ID_ISAR5_EL1);
+	case SYS_MVFR0_EL1:		return (u64)read_cpuid(MVFR0_EL1);
+	case SYS_MVFR1_EL1:		return (u64)read_cpuid(MVFR1_EL1);
+	case SYS_MVFR2_EL1:		return (u64)read_cpuid(MVFR2_EL1);
+
+	case SYS_ID_AA64PFR0_EL1:	return (u64)read_cpuid(ID_AA64PFR0_EL1);
+	case SYS_ID_AA64PFR1_EL1:	return (u64)read_cpuid(ID_AA64PFR1_EL1);
+	case SYS_ID_AA64DFR0_EL1:	return (u64)read_cpuid(ID_AA64DFR0_EL1);
+	case SYS_ID_AA64DFR1_EL1:	return (u64)read_cpuid(ID_AA64DFR1_EL1);
+	case SYS_ID_AA64MMFR0_EL1:	return (u64)read_cpuid(ID_AA64MMFR0_EL1);
+	case SYS_ID_AA64MMFR1_EL1:	return (u64)read_cpuid(ID_AA64MMFR1_EL1);
+	case SYS_ID_AA64ISAR0_EL1:	return (u64)read_cpuid(ID_AA64ISAR0_EL1);
+	case SYS_ID_AA64ISAR1_EL1:	return (u64)read_cpuid(ID_AA64ISAR1_EL1);
+
+	case SYS_CNTFRQ_EL0:		return (u64)read_cpuid(CNTFRQ_EL0);
+	case SYS_CTR_EL0:		return (u64)read_cpuid(CTR_EL0);
+	case SYS_DCZID_EL0:		return (u64)read_cpuid(DCZID_EL0);
+	default:
+		BUG();
+		return 0;
 	}
 }
 
-void check_local_cpu_features(void)
+/*
+ * Park the CPU which doesn't have the capability as advertised
+ * by the system.
+ */
+static void fail_incapable_cpu(char *cap_type,
+				 const struct arm64_cpu_capabilities *cap)
+{
+	/* XXX: Is it safe to call printk here? */
+	pr_crit("FATAL: CPU%d is missing %s : %s\n",
+			smp_processor_id(), cap_type, cap->desc);
+	asm volatile(
+			" 1:	wfe \n\t"
+		     	"	b 1b\n"
+		    );
+}
+/*
+ * Run through the enabled system capabilities and enable() it on this CPU.
+ * The capabilities were decided based on the available CPUs at the boot time.
+ * Any new CPU should match the system wide status of the capability. If the
+ * new CPU doesn't have a capability which the system now has enabled, we
+ * cannot do anything to fix it up and could cause unexpected failures. So
+ * we hold the CPU in a black hole.
+ */
+void cpu_enable_features(void)
+{
+	int i;
+	const struct arm64_cpu_capabilities *caps = arm64_features;
+
+	for(i = 0; caps[i].desc; i++)
+		if (caps[i].enable && cpus_have_cap(caps[i].capability))
+			caps[i].enable(NULL);
+	for(i = 0; caps[i].desc; i++) {
+		if(!cpus_have_cap(caps[i].capability) || !caps[i].sys_reg)
+			continue;
+		/*
+		 * If the new CPU misses an advertised feature, we cannot proceed
+		 * further, park the cpu.
+		 */
+		if (!feature_matches(read_cpu_sysreg(caps[i].sys_reg), &caps[i]))
+			fail_incapable_cpu("arm64_features", &caps[i]);
+		if (caps[i].enable)
+			caps[i].enable(NULL);
+	}
+}
+
+static int cpu_feature_hotplug_notify(struct notifier_block *nb,
+				unsigned long action, void *hcpu)
+{
+	if ((action & ~CPU_TASKS_FROZEN) == CPU_STARTING)
+		cpu_enable_features();
+	return notifier_from_errno(0);
+}
+
+/* Run the notifier before initialising GIC CPU interface. */
+static struct notifier_block cpu_feature_notifier = {
+	.notifier_call = cpu_feature_hotplug_notify,
+	.priority = 101,
+};
+
+void check_cpu_features(void)
 {
 	check_cpu_capabilities(arm64_features, "detected feature:");
+	register_cpu_notifier(&cpu_feature_notifier);
 }
 
 bool cpu_supports_mixed_endian_el0(void)
@@ -675,6 +779,7 @@ void __init setup_cpu_features(void)
 	u32 cwg;
 	int cls;
 
+	check_cpu_features();
 	/*
 	 * Check for sane CTR_EL0.CWG value.
 	 */
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index f25869e..789fbea 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -229,7 +229,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	cpuinfo_detect_icache_policy(info);
 
 	check_local_cpu_errata();
-	check_local_cpu_features();
 }
 
 void cpuinfo_store_cpu(void)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index aba9ead..066d5ae 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -555,7 +555,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 }
 
 #ifdef CONFIG_ARM64_PAN
-void cpu_enable_pan(void)
+void cpu_enable_pan(void *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
 }
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 13/22] arm64: Make use of system wide capability checks
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (11 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 12/22] arm64: Delay cpu feature checks Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 14/22] arm64: Cleanup HWCAP handling Suzuki K. Poulose
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Now that we can reliably read the system wide safe value for a
feature register, use that to compute the system capability.
This patch also replaces the 'feature-register-specific'
methods with a generic routine to check the capability.
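
With the generic matcher, a new entry in arm64_features only needs to
name the register, the field position and the minimum field value; a
sketch with a hypothetical capability:

	{
		.desc = "Some CPU feature",		/* hypothetical */
		.capability = ARM64_HAS_FOO,		/* hypothetical cap */
		.matches = has_cpuid_feature,
		.sys_reg = SYS_ID_AA64ISAR0_EL1,
		.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
		.min_field_value = 1,
	},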

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    1 +
 arch/arm64/kernel/cpufeature.c      |   34 ++++++++++++++++------------------
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e2b5a21..e74a2ac 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -80,6 +80,7 @@ struct arm64_cpu_capabilities {
 		};
 
 		struct {	/* Feature register checking */
+			u32 sys_reg;
 			int field_pos;
 			int min_field_value;
 		};
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c68d15e..3582af9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -589,34 +589,31 @@ feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
 	return val >= entry->min_field_value;
 }
 
-#define __ID_FEAT_CHK(reg)						\
-static bool __maybe_unused						\
-has_##reg##_feature(const struct arm64_cpu_capabilities *entry)		\
-{									\
-	u64 val;							\
-									\
-	val = read_cpuid(reg##_el1);					\
-	return feature_matches(val, entry);				\
-}
+static bool
+has_cpuid_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
 
-__ID_FEAT_CHK(id_aa64pfr0);
-__ID_FEAT_CHK(id_aa64mmfr1);
-__ID_FEAT_CHK(id_aa64isar0);
+	val = read_system_reg(entry->sys_reg);
+	return feature_matches(val, entry);
+}
 
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-		.matches = has_id_aa64pfr0_feature,
-		.field_pos = 24,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64PFR0_EL1,
+		.field_pos = ID_AA64PFR0_GIC_SHIFT,
 		.min_field_value = 1,
 	},
 #ifdef CONFIG_ARM64_PAN
 	{
 		.desc = "Privileged Access Never",
 		.capability = ARM64_HAS_PAN,
-		.matches = has_id_aa64mmfr1_feature,
-		.field_pos = 20,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64MMFR1_EL1,
+		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
 		.min_field_value = 1,
 		.enable = cpu_enable_pan,
 	},
@@ -625,8 +622,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "LSE atomic instructions",
 		.capability = ARM64_HAS_LSE_ATOMICS,
-		.matches = has_id_aa64isar0_feature,
-		.field_pos = 20,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR0_EL1,
+		.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
 		.min_field_value = 2,
 	},
 #endif /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 14/22] arm64: Cleanup HWCAP handling
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (12 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 13/22] arm64: Make use of system wide capability checks Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 15/22] arm64: Move FP/ASIMD hwcap handling to common code Suzuki K. Poulose
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Extend struct arm64_cpu_capabilities to handle the HWCAP detection
and make use of the system wide value for the feature register.
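
With this in place, announcing an additional hwcap becomes a single
table entry; a sketch (the feature field and HWCAP name below are
hypothetical placeholders):

	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FOO_SHIFT, 1, CAP_HWCAP, HWCAP_FOO),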

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |    2 +
 arch/arm64/include/asm/hwcap.h      |    8 ++
 arch/arm64/kernel/cpufeature.c      |  153 ++++++++++++++++++-----------------
 3 files changed, 91 insertions(+), 72 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e74a2ac..089c742 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -83,6 +83,8 @@ struct arm64_cpu_capabilities {
 			u32 sys_reg;
 			int field_pos;
 			int min_field_value;
+			int hwcap_type;
+			unsigned long hwcap;
 		};
 	};
 };
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index 0ad7351..400b80b 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -52,6 +52,14 @@
 extern unsigned int compat_elf_hwcap, compat_elf_hwcap2;
 #endif
 
+enum {
+	CAP_HWCAP = 1,
+#ifdef CONFIG_COMPAT
+	CAP_COMPAT_HWCAP,
+	CAP_COMPAT_HWCAP2,
+#endif
+};
+
 extern unsigned long elf_hwcap;
 #endif
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3582af9..3f273a3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -631,6 +631,79 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{},
 };
 
+#define HWCAP_CAP(reg, field, min_value, type, cap)		\
+	{							\
+		.desc = #cap,					\
+		.matches = has_cpuid_feature,			\
+		.sys_reg = reg,					\
+		.field_pos = field,				\
+		.min_field_value = min_value,			\
+		.hwcap_type = type,				\
+		.hwcap = cap,					\
+	}
+
+static const struct arm64_cpu_capabilities arm64_hwcaps[] = {
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 2, CAP_HWCAP, HWCAP_PMULL),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 1, CAP_HWCAP, HWCAP_AES),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, 1, CAP_HWCAP, HWCAP_SHA1),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, 1, CAP_HWCAP, HWCAP_SHA2),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, 1, CAP_HWCAP, HWCAP_CRC32),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, 2, CAP_HWCAP, HWCAP_ATOMICS),
+#ifdef CONFIG_COMPAT
+	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
+	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
+	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1),
+	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2),
+	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32),
+#endif
+};
+
+static void cap_set_hwcap(const struct arm64_cpu_capabilities * cap)
+{
+	switch(cap->hwcap_type) {
+	case CAP_HWCAP:
+		elf_hwcap |= cap->hwcap;
+		break;
+#ifdef CONFIG_COMPAT
+	case CAP_COMPAT_HWCAP:
+		compat_elf_hwcap |= (u32)cap->hwcap;
+		break;
+	case CAP_COMPAT_HWCAP2:
+		compat_elf_hwcap2 |= (u32)cap->hwcap;
+		break;
+#endif
+	default:
+		BUG();
+		break;
+	}
+}
+
+static bool cpus_have_hwcap(const struct arm64_cpu_capabilities *cap)
+{
+	switch(cap->hwcap_type) {
+	case CAP_HWCAP:
+		return !!(elf_hwcap & cap->hwcap);
+#ifdef CONFIG_COMPAT
+	case CAP_COMPAT_HWCAP:
+		return !!(compat_elf_hwcap & (u32)cap->hwcap);
+	case CAP_COMPAT_HWCAP2:
+		return !!(compat_elf_hwcap2 & (u32)cap->hwcap);
+#endif
+	default:
+		BUG();
+		return false;
+	}
+}
+
+void check_cpu_hwcaps(void)
+{
+	int i;
+	const struct arm64_cpu_capabilities *hwcaps = arm64_hwcaps;
+	for(i = 0; i < ARRAY_SIZE(arm64_hwcaps); i ++)
+		if (hwcaps[i].matches(&hwcaps[i]))
+			cap_set_hwcap(&hwcaps[i]);
+}
+
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info)
 {
@@ -738,6 +811,13 @@ void cpu_enable_features(void)
 		if (caps[i].enable)
 			caps[i].enable(NULL);
 	}
+
+	for(i =0, caps = arm64_hwcaps; caps[i].desc; i++) {
+		if (!cpus_have_hwcap(&caps[i]))
+			continue;
+		if (!feature_matches(read_cpu_sysreg(caps[i].sys_reg), &caps[i]))
+			fail_incapable_cpu("arm64_hwcaps", &caps[i]);
+	}
 }
 
 static int cpu_feature_hotplug_notify(struct notifier_block *nb,
@@ -772,12 +852,11 @@ bool system_supports_mixed_endian_el0(void)
 
 void __init setup_cpu_features(void)
 {
-	u64 features;
-	s64 block;
 	u32 cwg;
 	int cls;
 
 	check_cpu_features();
+	check_cpu_hwcaps();
 	/*
 	 * Check for sane CTR_EL0.CWG value.
 	 */
@@ -789,75 +868,5 @@ void __init setup_cpu_features(void)
 	if (L1_CACHE_BYTES < cls)
 		pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n",
 			L1_CACHE_BYTES, cls);
-
-	/*
-	 * ID_AA64ISAR0_EL1 contains 4-bit wide signed feature blocks.
-	 * The blocks we test below represent incremental functionality
-	 * for non-negative values. Negative values are reserved.
-	 */
-	features = read_cpuid(ID_AA64ISAR0_EL1);
-	block = cpuid_feature_extract_field(features, 4);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			elf_hwcap |= HWCAP_PMULL;
-		case 1:
-			elf_hwcap |= HWCAP_AES;
-		case 0:
-			break;
-		}
-	}
-
-	if (cpuid_feature_extract_field(features, 8) > 0)
-		elf_hwcap |= HWCAP_SHA1;
-
-	if (cpuid_feature_extract_field(features, 12) > 0)
-		elf_hwcap |= HWCAP_SHA2;
-
-	if (cpuid_feature_extract_field(features, 16) > 0)
-		elf_hwcap |= HWCAP_CRC32;
-
-	block = cpuid_feature_extract_field(features, 20);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			elf_hwcap |= HWCAP_ATOMICS;
-		case 1:
-			/* RESERVED */
-		case 0:
-			break;
-		}
-	}
-
-#ifdef CONFIG_COMPAT
-	/*
-	 * ID_ISAR5_EL1 carries similar information as above, but pertaining to
-	 * the AArch32 32-bit execution state.
-	 */
-	features = read_cpuid(ID_ISAR5_EL1);
-	block = cpuid_feature_extract_field(features, 4);
-	if (block > 0) {
-		switch (block) {
-		default:
-		case 2:
-			compat_elf_hwcap2 |= COMPAT_HWCAP2_PMULL;
-		case 1:
-			compat_elf_hwcap2 |= COMPAT_HWCAP2_AES;
-		case 0:
-			break;
-		}
-	}
-
-	if (cpuid_feature_extract_field(features, 8) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA1;
-
-	if (cpuid_feature_extract_field(features, 12) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_SHA2;
-
-	if (cpuid_feature_extract_field(features, 16) > 0)
-		compat_elf_hwcap2 |= COMPAT_HWCAP2_CRC32;
-#endif
 }
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 15/22] arm64: Move FP/ASIMD hwcap handling to common code
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (13 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 14/22] arm64: Cleanup HWCAP handling Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 16/22] arm64/debug: Make use of the system wide safe value Suzuki K. Poulose
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

FP/ASIMD support is currently detected in fpsimd_init(), which is
built in unconditionally. Let's move the hwcap handling to the central
place.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c |    2 ++
 arch/arm64/kernel/fpsimd.c     |   16 +++++-----------
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3f273a3..34fc2a2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -649,6 +649,8 @@ static const struct arm64_cpu_capabilities arm64_hwcaps[] = {
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, 1, CAP_HWCAP, HWCAP_SHA2),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, 1, CAP_HWCAP, HWCAP_CRC32),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, 2, CAP_HWCAP, HWCAP_ATOMICS),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 0, CAP_HWCAP, HWCAP_FP),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 0, CAP_HWCAP, HWCAP_ASIMD),
 #ifdef CONFIG_COMPAT
 	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
 	HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index c56956a..4c46c54 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -332,21 +332,15 @@ static inline void fpsimd_hotplug_init(void) { }
  */
 static int __init fpsimd_init(void)
 {
-	u64 pfr = read_cpuid(ID_AA64PFR0_EL1);
-
-	if (pfr & (0xf << 16)) {
+	if (elf_hwcap & HWCAP_FP) {
+		fpsimd_pm_init();
+		fpsimd_hotplug_init();
+	} else {
 		pr_notice("Floating-point is not implemented\n");
-		return 0;
 	}
-	elf_hwcap |= HWCAP_FP;
 
-	if (pfr & (0xf << 20))
+	if (!(elf_hwcap & HWCAP_ASIMD))
 		pr_notice("Advanced SIMD is not implemented\n");
-	else
-		elf_hwcap |= HWCAP_ASIMD;
-
-	fpsimd_pm_init();
-	fpsimd_hotplug_init();
 
 	return 0;
 }
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 16/22] arm64/debug: Make use of the system wide safe value
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (14 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 15/22] arm64: Move FP/ASIMD hwcap handling to common code Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-29 12:17   ` Vladimir Murzin
  2015-09-16 14:21 ` [PATCH 17/22] arm64/kvm: Make use of the system wide safe values Suzuki K. Poulose
                   ` (7 subsequent siblings)
  23 siblings, 1 reply; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Use the system wide safe value of ID_AA64DFR0_EL1 to make safer decisions.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/hw_breakpoint.h |   14 ++------------
 arch/arm64/kernel/debug-monitors.c     |    6 ++++--
 arch/arm64/kernel/hw_breakpoint.c      |   19 ++++++++++++++++++-
 3 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h
index 4c47cb2..0251768 100644
--- a/arch/arm64/include/asm/hw_breakpoint.h
+++ b/arch/arm64/include/asm/hw_breakpoint.h
@@ -119,6 +119,8 @@ extern int arch_install_hw_breakpoint(struct perf_event *bp);
 extern void arch_uninstall_hw_breakpoint(struct perf_event *bp);
 extern void hw_breakpoint_pmu_read(struct perf_event *bp);
 extern int hw_breakpoint_slots(int type);
+extern int get_num_brps(void);
+extern int get_num_wrps(void);
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
 extern void hw_breakpoint_thread_switch(struct task_struct *next);
@@ -134,17 +136,5 @@ static inline void ptrace_hw_copy_thread(struct task_struct *task)
 
 extern struct pmu perf_ops_bp;
 
-/* Determine number of BRP registers available. */
-static inline int get_num_brps(void)
-{
-	return ((read_cpuid(ID_AA64DFR0_EL1) >> 12) & 0xf) + 1;
-}
-
-/* Determine number of WRP registers available. */
-static inline int get_num_wrps(void)
-{
-	return ((read_cpuid(ID_AA64DFR0_EL1) >> 20) & 0xf) + 1;
-}
-
 #endif	/* __KERNEL__ */
 #endif	/* __ASM_BREAKPOINT_H */
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 9b3b62a..9ca5f77 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -26,14 +26,16 @@
 #include <linux/stat.h>
 #include <linux/uaccess.h>
 
-#include <asm/debug-monitors.h>
+#include <asm/cpufeature.h>
 #include <asm/cputype.h>
+#include <asm/debug-monitors.h>
 #include <asm/system_misc.h>
 
 /* Determine debug architecture. */
 u8 debug_monitors_arch(void)
 {
-	return read_cpuid(ID_AA64DFR0_EL1) & 0xf;
+	return cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
+						ID_AA64DFR0_DEBUGVER_SHIFT);
 }
 
 /*
diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
index c97040e..1fa0476 100644
--- a/arch/arm64/kernel/hw_breakpoint.c
+++ b/arch/arm64/kernel/hw_breakpoint.c
@@ -28,11 +28,12 @@
 #include <linux/ptrace.h>
 #include <linux/smp.h>
 
+#include <asm/cpufeature.h>
+#include <asm/cputype.h>
 #include <asm/current.h>
 #include <asm/debug-monitors.h>
 #include <asm/hw_breakpoint.h>
 #include <asm/traps.h>
-#include <asm/cputype.h>
 #include <asm/system_misc.h>
 
 /* Breakpoint currently in use for each BRP. */
@@ -48,6 +49,22 @@ static DEFINE_PER_CPU(int, stepping_kernel_bp);
 static int core_num_brps;
 static int core_num_wrps;
 
+/* Determine number of BRP registers available. */
+int get_num_brps(void)
+{
+	return 1 +
+	      	cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
+						ID_AA64DFR0_BRPS_SHIFT);
+}
+
+/* Determine number of WRP registers available. */
+int get_num_wrps(void)
+{
+	return 1 +
+	      	cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
+						ID_AA64DFR0_WRPS_SHIFT);
+}
+
 int hw_breakpoint_slots(int type)
 {
 	/*
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 17/22] arm64/kvm: Make use of the system wide safe values
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (15 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 16/22] arm64/debug: Make use of the system wide safe value Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 18/22] arm64: Add helper to decode register from instruction Suzuki K. Poulose
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Use the system wide safe values from the new API to make safer
decisions.
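
As a rough, illustrative sketch of the approach (not the exact change in the
hunks below), the idea is to replace a raw per-CPU ID register read and a
hard-coded mask with the system wide safe value plus the field extractor
from this series:

    /*
     * Illustration only: check for AArch32 support at EL1 using the
     * system wide safe ID_AA64PFR0_EL1 value. An EL1 field value of 2
     * means both AArch64 and AArch32 are supported at EL1.
     */
    static bool example_cpu_has_32bit_el1(void)
    {
            u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);

            return cpuid_feature_extract_field(pfr0,
                                               ID_AA64PFR0_EL1_SHIFT) == 2;
    }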

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kvm/reset.c    |    2 +-
 arch/arm64/kvm/sys_regs.c |   12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 91cf535..f34745c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -53,7 +53,7 @@ static bool cpu_has_32bit_el1(void)
 {
 	u64 pfr0;
 
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
 	return !!(pfr0 & 0x20);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b41607d..fe19206 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -700,13 +700,13 @@ static bool trap_dbgidr(struct kvm_vcpu *vcpu,
 	if (p->is_write) {
 		return ignore_write(vcpu, p);
 	} else {
-		u64 dfr = read_cpuid(ID_AA64DFR0_EL1);
-		u64 pfr = read_cpuid(ID_AA64PFR0_EL1);
-		u32 el3 = !!((pfr >> 12) & 0xf);
+		u64 dfr = read_system_reg(SYS_ID_AA64DFR0_EL1);
+		u64 pfr = read_system_reg(SYS_ID_AA64PFR0_EL1);
+		u32 el3 = !!cpuid_feature_extract_field(pfr, ID_AA64PFR0_EL3_SHIFT);
 
-		*vcpu_reg(vcpu, p->Rt) = ((((dfr >> 20) & 0xf) << 28) |
-					  (((dfr >> 12) & 0xf) << 24) |
-					  (((dfr >> 28) & 0xf) << 20) |
+		*vcpu_reg(vcpu, p->Rt) = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) |
+					  (((dfr >> ID_AA64DFR0_BRPS_SHIFT) & 0xf) << 24) |
+					  (((dfr >> ID_AA64DFR0_CTX_CMPS_SHIFT) & 0xf) << 20) |
 					  (6 << 16) | (el3 << 14) | (el3 << 12));
 		return true;
 	}
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 18/22] arm64: Add helper to decode register from instruction
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (16 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 17/22] arm64/kvm: Make use of the system wide safe values Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 19/22] arm64: cpufeature: Track the user visible fields Suzuki K. Poulose
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Add a helper to extract the register field from a given
instruction.
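
A hypothetical caller (the wrapper name is illustrative; only the new helper
is real) could use it to pull the destination register out of a fetched MRS
encoding:

    /* Sketch: given a fetched MRS instruction word, find its Rt register */
    static u32 example_mrs_destination(u32 insn)
    {
            /* Rt lives in bits [4:0] of the encoding */
            return aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
    }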

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/insn.h |    2 ++
 arch/arm64/kernel/insn.c      |   29 +++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 30e50eb..6dea3bc 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -289,6 +289,8 @@ enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
 u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 				  u32 insn, u64 imm);
+u32 aarch64_insn_decode_register(enum aarch64_insn_register_type type,
+					 u32 insn);
 u32 aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr,
 				enum aarch64_insn_branch_type type);
 u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index f341866..4286fed 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -388,6 +388,35 @@ u32 __kprobes aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 	return insn;
 }
 
+u32 aarch64_insn_decode_register(enum aarch64_insn_register_type type,
+					u32 insn)
+{
+	int shift;
+
+	switch (type) {
+	case AARCH64_INSN_REGTYPE_RT:
+	case AARCH64_INSN_REGTYPE_RD:
+		shift = 0;
+		break;
+	case AARCH64_INSN_REGTYPE_RN:
+		shift = 5;
+		break;
+	case AARCH64_INSN_REGTYPE_RT2:
+	case AARCH64_INSN_REGTYPE_RA:
+		shift = 10;
+		break;
+	case AARCH64_INSN_REGTYPE_RM:
+		shift = 16;
+		break;
+	default:
+		pr_err("%s: unknown register type encoding %d\n", __func__,
+		       type);
+		return 0;
+	}
+
+	return (insn >> shift) & GENMASK(4, 0);
+}
+
 static u32 aarch64_insn_encode_register(enum aarch64_insn_register_type type,
 					u32 insn,
 					enum aarch64_insn_register reg)
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 19/22] arm64: cpufeature: Track the user visible fields
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (17 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 18/22] arm64: Add helper to decode register from instruction Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 20/22] arm64: Expose feature registers by emulating MRS Suzuki K. Poulose
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Track the user visible fields of a CPU feature register.
These will be used later for exposing the values to userspace
via emulation of the MRS instruction. For more information, see
the documentation (patch follows).
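
To illustrate the intended data flow (this mirrors how a later patch in the
series builds the value returned to userspace):

    /*
     * Sketch: the user view of a feature register keeps the system wide
     * safe value for 'visible' fields (selected by user_mask) and
     * substitutes the 'not supported' safe value for everything else.
     */
    static u64 example_user_view(const struct arm64_ftr_reg *regp)
    {
            return regp->user_val | (regp->sys_val & regp->user_mask);
    }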

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |   10 +-
 arch/arm64/kernel/cpufeature.c      |  190 ++++++++++++++++++-----------------
 2 files changed, 108 insertions(+), 92 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 089c742..c7fef3d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -45,25 +45,33 @@ enum ftr_type {
 
 #define FTR_STRICT	true
 #define FTR_NONSTRICT	false
+#define FTR_VISIBLE	true
+#define FTR_HIDDEN	false
 
 struct arm64_ftr_bits {
+	bool		visible;	/* visible to userspace ? */
 	bool		strict;		/* CPU Sanity check
 					 *  strict matching required ? */
 	enum ftr_type	type;
 	u8		shift;
 	u8		width;
-	s64		safe_val;	/* safe value for discrete features */
+	s64		safe_val;	/* safe value for discrete or
+					 * user invisible features */
 };
 
 /*
  * @arm64_ftr_reg - Feature register
+ * @user_mask		Bits of @sys_val visible to user space.
  * @strict_mask 	Bits which should match across all CPUs for sanity.
+ * @user_val		Safe value for user invisible fields.
  * @sys_val		Safe value across the CPUs (system view)
  */
 struct arm64_ftr_reg {
 	u32			sys_id;
 	const char*		name;
+	u64			user_mask;
 	u64			strict_mask;
+	u64			user_val;
 	u64			sys_val;
 	struct arm64_ftr_bits*	ftr_bits;
 };
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 34fc2a2..c898ef1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -42,8 +42,9 @@ unsigned int compat_elf_hwcap2 __read_mostly;
 
 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 
-#define ARM64_FTR_BITS(ftr_strict, ftr_type, ftr_shift, ftr_width, ftr_safe_val) \
+#define ARM64_FTR_BITS(ftr_visible, ftr_strict, ftr_type, ftr_shift, ftr_width, ftr_safe_val)	\
 	{							\
+		.visible = ftr_visible,				\
 		.strict = ftr_strict,				\
 		.type = ftr_type,				\
 		.shift = ftr_shift,				\
@@ -57,138 +58,138 @@ DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 	}
 
 static struct arm64_ftr_bits ftr_id_aa64isar0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 24, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_AES_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64ISAR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	/* Linux doesn't care about the EL3 */
-	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_DISCRETE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
 	/* Linux shouldn't care about secure memory */
-	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_DISCRETE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
 	/*
 	 * Differing PARange is fine as long as all peripherals and memory are mapped
 	 * within the minimum PARange of all CPUs
 	 */
-	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_SCALAR_MIN, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_SCALAR_MIN, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_ctr[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 31, 1, 1),	// RAO
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 3, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MAX, 24, 4, 0),	// CWG
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),	// ERG
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 1),	// DminLine
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 31, 1, 1),	// RAO
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 28, 3, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MAX, 24, 4, 0),	// CWG
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),	// ERG
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 1),	// DminLine
 	/*
 	 * Linux can handle differing I-cache policies. Userspace JITs will
 	 * make use of *minLine
 	 */
-	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_DISCRETE, 14, 2, 0),	// L1Ip
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 10, 0),	// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// IminLine
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_DISCRETE, 14, 2, 0),	// L1Ip
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 10, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// IminLine
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_mmfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 28, 4, 0),	// InnerShr
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 24, 4, 0),	// FCSE
-	ARM64_FTR_BITS(FTR_NONSTRICT, FTR_SCALAR_MIN, 20, 4, 0),	// AuxReg
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 16, 4, 0),	// TCM
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// ShareLvl
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// OuterShr
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// PMSA
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// VMSA
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 28, 4, 0),	// InnerShr
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 24, 4, 0),	// FCSE
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_SCALAR_MIN, 20, 4, 0),	// AuxReg
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 16, 4, 0),	// TCM
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// ShareLvl
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// OuterShr
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// PMSA
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// VMSA
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 32, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_mvfr2[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// FPMisc
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// SIMDMisc
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// FPMisc
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// SIMDMisc
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_dczid[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 5, 27, 0),// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 1, 1),	// DZP
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// BS
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 5, 27, 0),// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 1, 1),	// DZP
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),	// BS
 	ARM64_FTR_END,
 };
 
 
 static struct arm64_ftr_bits ftr_id_isar5[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_RDM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 20, 4, 0),	// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_CRC32_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA1_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_AES_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 20, 4, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, ID_ISAR5_SEVL_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_mmfr4[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// ac2
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 8, 24, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// ac2
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// RAZ
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_id_pfr0[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 16, 16, 0),	// RAZ
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// State3
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// State2
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// State1
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// State0
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 16, 16, 0),	// RAZ
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 12, 4, 0),	// State3
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 8, 4, 0),	// State2
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 4, 4, 0),	// State1
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 4, 0),	// State0
 	ARM64_FTR_END,
 };
 
@@ -199,29 +200,29 @@ static struct arm64_ftr_bits ftr_id_pfr0[] = {
  * id_isar[0-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
  */
 static struct arm64_ftr_bits ftr_generic_scalar_32bit[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 28, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 24, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 12, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 8, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 4, 4, 0),
-	ARM64_FTR_BITS(FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 20, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 16, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 12, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 8, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 4, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_SCALAR_MIN, 0, 4, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_generic[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_generic32[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 32, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_DISCRETE, 0, 32, 0),
 	ARM64_FTR_END,
 };
 
 static struct arm64_ftr_bits ftr_aa64raz[] = {
-	ARM64_FTR_BITS(FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_DISCRETE, 0, 64, 0),
 	ARM64_FTR_END,
 };
 
@@ -370,12 +371,13 @@ static s64 arm64_ftr_safe_value(struct arm64_ftr_bits *ftrp, s64 new, s64 cur)
 
 /*
  * Initialise the CPU feature register from Boot CPU values.
- * Also initiliases the strict_mask for the register.
+ * Also initialises the strict_mask, user_mask and user_val
+ * for the register.
  */
 static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
 {
 	u64 val = 0;
-	u64 strict_mask = ~0x0ULL;
+	u64 user_mask = 0, strict_mask = ~0x0ULL;
 	struct arm64_ftr_bits *ftrp;
 	struct arm64_ftr_reg *reg = get_arm64_sys_reg(sys_reg);
 
@@ -385,10 +387,16 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
 		s64 ftr_new = arm64_ftr_value(ftrp, new);
 
 		val = arm64_ftr_set_value(ftrp, val, ftr_new);
+		if (ftrp->visible)
+			user_mask |= ftr_mask(ftrp);
+		else
+			reg->user_val = arm64_ftr_set_value(ftrp, reg->user_val,
+								ftrp->safe_val);
 		if (!ftrp->strict)
 			strict_mask &= ~ftr_mask(ftrp);
 	}
 	reg->sys_val = val;
+	reg->user_mask = user_mask;
 	reg->strict_mask = strict_mask;
 }
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 20/22] arm64: Expose feature registers by emulating MRS
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (18 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 19/22] arm64: cpufeature: Track the user visible fields Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 21/22] arm64: cpuinfo: Expose MIDR_EL1 and REVIDR_EL1 to sysfs Suzuki K. Poulose
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

This patch adds the hook for emulating the MRS instruction to
export the 'user visible' value of supported system registers.
We emulate only the following ID space for system registers:
	Op0=3, Op1=0, CRn=0.

The rest will fall back to SIGILL.

This capability is also advertised via a new HWCAP_CPUID.
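
A userspace sketch of the resulting ABI (HWCAP_CPUID as defined by this
series; a recent enough assembler is assumed to accept the register by name,
otherwise the generic S3_0_C0_C6_0 form can be used instead):

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_CPUID
    #define HWCAP_CPUID	(1 << 9)	/* value from this series */
    #endif

    int main(void)
    {
            uint64_t isar0;

            if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
                    return 1;	/* MRS emulation not advertised */

            /* Traps to the kernel, which returns the user visible value */
            asm("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));

            printf("CRC32 field: %u\n", (unsigned)((isar0 >> 16) & 0xf));
            return 0;
    }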

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 Changes since V1:
   - Add a new HWCAP_CPUID to advertise the availability of the
     emulation.
---
 arch/arm64/include/asm/sysreg.h     |    3 ++
 arch/arm64/include/uapi/asm/hwcap.h |    1 +
 arch/arm64/kernel/cpufeature.c      |   94 +++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c         |    1 +
 4 files changed, 99 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index dc72fc6..b44ad93 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -192,6 +192,9 @@
 #define MVFR1_FPFTZ_SHIFT		0
 
 
+/* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:1 */
+#define SYS_MPIDR_SAFE_VAL	((1UL<<31)|(1UL<<24))
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 361c8a8..3386e64 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -28,5 +28,6 @@
 #define HWCAP_SHA2		(1 << 6)
 #define HWCAP_CRC32		(1 << 7)
 #define HWCAP_ATOMICS		(1 << 8)
+#define HWCAP_CPUID		(1 << 9)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c898ef1..6c92c81 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -24,6 +24,7 @@
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/sysreg.h>
+#include <asm/traps.h>
 
 unsigned long elf_hwcap __read_mostly;
 EXPORT_SYMBOL_GPL(elf_hwcap);
@@ -709,6 +710,8 @@ void check_cpu_hwcaps(void)
 {
 	int i;
 	const struct arm64_cpu_capabilities *hwcaps = arm64_hwcaps;
+
+	elf_hwcap |= HWCAP_CPUID;
 	for(i = 0; i < ARRAY_SIZE(arm64_hwcaps); i ++)
 		if (hwcaps[i].matches(&hwcaps[i]))
 			cap_set_hwcap(&hwcaps[i]);
@@ -880,3 +883,94 @@ void __init setup_cpu_features(void)
 			L1_CACHE_BYTES, cls);
 }
 
+/*
+ * We emulate only the following system register space.
+ * 	Op0 = 0x3, CRn = 0x0, Op1 = 0x0
+ * See Table C5-6 System instruction encodings for System register accesses,
+ * ARMv8 ARM(ARM DDI 0487A.f) for more details.
+ */
+static int __attribute_const__ is_emulated(u32 id)
+{
+	if (sys_reg_Op0(id) != 0x3 ||
+	    sys_reg_CRn(id) != 0x0 ||
+	    sys_reg_Op1(id) != 0x0)
+		return 0;
+	return 1;
+}
+
+/*
+ * With CRm = 0, id should be one of :
+ *	MIDR_EL1
+ *	MPIDR_EL1
+ *  	REVIDR_EL1
+ */
+static int emulate_id_reg(u32 id, u64 *valp)
+{
+	switch(id) {
+	case SYS_MIDR_EL1:
+		*valp = read_cpuid_id();
+		return 0;
+	case SYS_MPIDR_EL1:
+		*valp = SYS_MPIDR_SAFE_VAL;
+		return 0;
+	case SYS_REVIDR_EL1:
+		*valp = 0;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int emulate_sys_reg(u32 id, u64 *valp)
+{
+	struct arm64_ftr_reg *regp;
+
+	if (!is_emulated(id))
+		return -EINVAL;
+
+	if (sys_reg_CRm(id) == 0)
+		return emulate_id_reg(id, valp);
+
+	regp = get_arm64_sys_reg(id);
+	if (regp)
+		*valp = regp->user_val | (regp->sys_val & regp->user_mask);
+	else
+		/*
+		 * Registers we don't track are either IMPLEMENTATION DEFINED
+		 * (e.g., ID_AFR0_EL1) or reserved RAZ.
+		 */
+		*valp = 0;
+	return 0;
+}
+
+static int emulate_mrs(struct pt_regs *regs, u32 insn)
+{
+	int rc = 0;
+	u32 sys_reg, dst;
+	u64 val = 0;
+
+	sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn);
+	rc = emulate_sys_reg(sys_reg, &val);
+	if (rc)
+		return rc;
+	dst = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
+	regs->user_regs.regs[dst] = val;
+	regs->pc += 4;
+	return 0;
+}
+
+static struct undef_hook mrs_hook = {
+	.instr_mask = 0xfff00000,
+	.instr_val  = 0xd5300000,
+	.pstate_mask = COMPAT_PSR_MODE_MASK,
+	.pstate_val = PSR_MODE_EL0t,
+	.fn = emulate_mrs,
+};
+
+int __init arm64_cpufeature_init(void)
+{
+	register_undef_hook(&mrs_hook);
+	return 0;
+}
+
+late_initcall(arm64_cpufeature_init);
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 789fbea..52331ff 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -58,6 +58,7 @@ static const char *hwcap_str[] = {
 	"sha2",
 	"crc32",
 	"atomics",
+	"cpuid",
 	NULL
 };
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 21/22] arm64: cpuinfo: Expose MIDR_EL1 and REVIDR_EL1 to sysfs
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (19 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 20/22] arm64: Expose feature registers by emulating MRS Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-16 14:21 ` [PATCH 22/22] arm64: feature registers: Documentation Suzuki K. Poulose
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Steve Capper

From: Steve Capper <steve.capper@linaro.org>

It can be useful for JIT software to be aware of MIDR_EL1 and
REVIDR_EL1 to ascertain the presence of any core errata that could
affect codegen.

This patch exposes these registers through sysfs:

/sys/devices/system/cpu/cpu$ID/identification/midr
/sys/devices/system/cpu/cpu$ID/identification/revidr

where $ID is the cpu number. For big.LITTLE systems, one can have a
mixture of cores (e.g. Cortex A53 and Cortex A57), thus all CPUs need
to be enumerated.

If the kernel does not have valid information to populate these entries
with, an empty string is returned to userspace.
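
A consumer might read the new files like this (illustrative only; the path
follows the layout described above and error handling is kept minimal):

    #include <stdio.h>

    int main(void)
    {
            char buf[32];
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/identification/midr", "r");

            if (f) {
                    if (fgets(buf, sizeof(buf), f))
                            printf("cpu0 MIDR: %s", buf);
                    fclose(f);
            }
            return 0;
    }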

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/include/asm/cpu.h |    1 +
 arch/arm64/kernel/cpuinfo.c  |   48 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index 8e797b2..f3649f9 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -29,6 +29,7 @@ struct cpuinfo_arm64 {
 	u32		reg_cntfrq;
 	u32		reg_dczid;
 	u32		reg_midr;
+	u32		reg_revidr;
 
 	u64		reg_id_aa64dfr0;
 	u64		reg_id_aa64dfr1;
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 52331ff..93e0488 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -199,6 +199,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_ctr = read_cpuid_cachetype();
 	info->reg_dczid = read_cpuid(DCZID_EL0);
 	info->reg_midr = read_cpuid_id();
+	info->reg_revidr = read_cpuid(REVIDR_EL1);
 
 	info->reg_id_aa64dfr0 = read_cpuid(ID_AA64DFR0_EL1);
 	info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1);
@@ -247,3 +248,50 @@ void __init cpuinfo_store_boot_cpu(void)
 	boot_cpu_data = *info;
 	init_cpu_features(&boot_cpu_data);
 }
+
+#define CPUINFO_ATTR_RO(_name)							\
+	static ssize_t show_##_name (struct device *dev,			\
+			struct device_attribute *attr, char *buf)		\
+	{									\
+		struct cpuinfo_arm64 *info = &per_cpu(cpu_data, dev->id);	\
+										\
+		if (info->reg_midr)						\
+			return sprintf(buf, "0x%016x\n", info->reg_##_name);	\
+		else								\
+			return 0;						\
+	}									\
+	static DEVICE_ATTR(_name, 0444, show_##_name, NULL)
+
+CPUINFO_ATTR_RO(midr);
+CPUINFO_ATTR_RO(revidr);
+
+static struct attribute *cpuregs_attrs[] = {
+	&dev_attr_midr.attr,
+	&dev_attr_revidr.attr,
+	NULL
+};
+
+static struct attribute_group cpuregs_attr_group = {
+	.attrs = cpuregs_attrs,
+	.name = "identification"
+};
+
+static int __init cpuinfo_regs_init(void)
+{
+	int cpu, ret;
+
+	for_each_present_cpu(cpu) {
+		struct device *dev = get_cpu_device(cpu);
+
+		if (!dev)
+			return -1;
+
+		ret = sysfs_create_group(&dev->kobj, &cpuregs_attr_group);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+device_initcall(cpuinfo_regs_init);
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 22/22] arm64: feature registers: Documentation
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (20 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 21/22] arm64: cpuinfo: Expose MIDR_EL1 and REVIDR_EL1 to sysfs Suzuki K. Poulose
@ 2015-09-16 14:21 ` Suzuki K. Poulose
  2015-09-18  9:23 ` [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
  2015-09-23 15:56 ` Dave Martin
  23 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16 14:21 UTC (permalink / raw
  To: linux-arm-kernel
  Cc: Catalin.Marinas, Will.Deacon, Mark.Rutland, edward.nevill, aph,
	linux-kernel, andre.przywara, ard.biesheuvel, dave.martin,
	marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Documentation of the infrastructure

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 Documentation/arm64/cpu-feature-registers.txt |  209 +++++++++++++++++++++++++
 1 file changed, 209 insertions(+)
 create mode 100644 Documentation/arm64/cpu-feature-registers.txt

diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
new file mode 100644
index 0000000..2367164
--- /dev/null
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -0,0 +1,209 @@
+		ARM64 CPU Feature Registers
+		===========================
+
+Author: Suzuki K. Poulose <suzuki.poulose@arm.com>
+
+
+This file describes the API for exporting the AArch64 CPU ID/feature registers
+to userspace.
+
+1. Motivation
+---------------
+
+The ARM architecture defines a set of feature registers, which describe
+the capabilities of the CPU/system. Access to these system registers is
+restricted from EL0 and there is no reliable way for an application to
+extract this information to make better decisions at runtime. There is
+limited information available to the application via ELF_HWCAPs; however,
+there are some issues with their usage.
+
+ a) Any change to the HWCAPs requires an update to userspace (e.g. libc)
+    to detect the new changes, which can take a long time to appear in
+    distributions. Exposing the registers allows applications to get the
+    information without requiring updates to the toolchains.
+
+ b) Access to HWCAPs is sometimes restricted (e.g. prior to libc, or when ld is
+    initialised at startup time).
+
+ c) HWCAPs cannot represent non-boolean information effectively. The
+    architecture defines a canonical format for representing features
+    in the ID registers; this is well defined and is capable of
+    representing all valid architecture variations. Exposing the ID
+    registers avoids having to come up with HWCAP representations
+    and parsing code.
+
+
+2. Requirements
+-----------------
+
+ a) Safety :
+    Applications should be able to use the information provided by the
+    infrastructure to run safely and optimally across the system. This has
+    greater implications on a system with heterogeneous CPUs. The
+    infrastructure exports a value that is safe across all the available
+    CPUs on the system.
+
+    e.g., if at least one CPU doesn't implement CRC32 instructions while others
+    do, we should report that CRC32 is not implemented. Otherwise an
+    application could crash when scheduled on the CPU which doesn't support
+    CRC32.
+
+ b) Security :
+    Applications should only be able to receive information that is relevant
+    to normal operation in userspace. Hence, some of the fields
+    are masked out and their values are set to indicate that the
+    feature is 'not supported' (see the 'visible' field in the
+    table in Section 4). Also, the kernel may manipulate the fields based on what
+    it supports. e.g., if FP is not supported by the kernel, the values
+    could indicate that FP is not available (even when the CPU provides
+    it).
+
+ c) Implementation Defined Features
+    The infrastructure doesn't expose any register which is
+    IMPLEMENTATION DEFINED as per ARMv8-A Architecture.
+
+ d) CPU Identification :
+    MIDR_EL1 is exposed to help identify the processor. On a heterogeneous
+    system, this could be racy (just like getcpu()). The process could be
+    migrated to another CPU by the time it uses the register value, unless the
+    CPU affinity is set. Hence, there is no guarantee that the value reflects the
+    processor that it is currently executing on. The REVIDR is not exposed due
+    to this constraint, as REVIDR makes sense only in conjunction with the MIDR.
+    Alternatively, MIDR_EL1 and REVIDR_EL1 are exposed via sysfs at:
+	/sys/devices/system/cpu/cpu$ID/identification/
+	                                              \- midr
+	                                              \- revidr
+
+The list of supported registers and the attributes of individual
+feature bits are listed in section 4. Unless there is an absolute necessity,
+we don't encourage the addition of new feature registers to the list.
+In any case, any addition should comply with the requirements listed above.
+
+3. Implementation
+--------------------
+
+The infrastructure is built on the emulation of the 'MRS' instruction.
+Accessing a restricted system register from an application generates an
+exception and results in SIGILL being delivered to the process.
+The infrastructure hooks into the exception handler and emulates the
+operation if the source belongs to the supported system register space.
+
+The infrastructure emulates only the following system register space:
+	Op0=3, Op1=0, CRn=0
+
+(See Table C5-6 'System instruction encodings for System register accesses'
+ in ARMv8 ARM, for the list of registers).
+
+
+The following rules are applied to the value returned by the infrastructure:
+
+ a) The value of an 'IMPLEMENTATION DEFINED' field is set to 0.
+ b) The value of a reserved field is populated with the reserved
+    value as defined by the architecture.
+ c) The value of a field marked as not 'visible' is set to indicate
+    the feature is missing (as defined by the architecture).
+ d) The value of a 'visible' field holds the system wide safe value
+    for the particular feature (except for MIDR_EL1, see section 4).
+
+There are only a few registers visible to the userspace. See Section 4,
+for the list of 'visible' registers.
+
+The registers which are either reserved RAZ or IMPLEMENTATION DEFINED are
+emulated as 0.
+
+All others are emulated as having 'invisible' features.
+
+4. List of exposed registers
+-----------------------------
+
+  1) ID_AA64ISAR0_EL1 - Instruction Set Attribute Register 0
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | RAZ                          | [63-20] |    n    |
+     |--------------------------------------------------|
+     | CRC32                        | [19-16] |    y    |
+     |--------------------------------------------------|
+     | SHA2                         | [15-12] |    y    |
+     |--------------------------------------------------|
+     | SHA1                         | [11-8]  |    y    |
+     |--------------------------------------------------|
+     | AES                          | [7-4]   |    y    |
+     |--------------------------------------------------|
+     | RAZ                          | [3-0]   |    n    |
+     x--------------------------------------------------x
+
+  2) ID_AA64ISAR1_EL1 - Instruction Set Attribute Register 1
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | RAZ                          | [63-0]  |    y    |
+     x--------------------------------------------------x
+
+  3) ID_AA64PFR0_EL1 - Processor Feature Register 0
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | RAZ                          | [63-28] |    n    |
+     |--------------------------------------------------|
+     | GIC                          | [27-24] |    n    |
+     |--------------------------------------------------|
+     | AdvSIMD                      | [23-20] |    y    |
+     |--------------------------------------------------|
+     | FP                           | [19-16] |    y    |
+     |--------------------------------------------------|
+     | EL3                          | [15-12] |    n    |
+     |--------------------------------------------------|
+     | EL2                          | [11-8]  |    n    |
+     |--------------------------------------------------|
+     | EL1                          | [7-4]   |    n    |
+     |--------------------------------------------------|
+     | EL0                          | [3-0]   |    n    |
+     x--------------------------------------------------x
+
+  4) ID_AA64PFR1_EL1 - Processor Feature Register 1
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | RAZ                          | [63-0]  |    y    |
+     x--------------------------------------------------x
+
+  5) MIDR_EL1 - Main ID Register
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | RAZ                          | [63-32] |    n    |
+     |--------------------------------------------------|
+     | Implementer                  | [31-24] |    y    |
+     |--------------------------------------------------|
+     | Variant                      | [23-20] |    y    |
+     |--------------------------------------------------|
+     | Architecture                 | [19-16] |    y    |
+     |--------------------------------------------------|
+     | PartNum                      | [15-4]  |    y    |
+     |--------------------------------------------------|
+     | Revision                     | [3-0]   |    y    |
+     x--------------------------------------------------x
+
+   NOTE: The 'visible' fields of MIDR_EL1 will contain the value
+   as available on the CPU where it is fetched and is not a system
+   wide safe value.
+
+
+Appendix
+-----------
+
+I. CPUID feature value scheme
+
+   Each 4-bit field is a signed value, with RAZ as the original value defined by
+   the architecture. When a feature is added or extended, the field is incremented.
+   If an existing feature (whose value is 0) is removed, the value becomes negative (0xf).
+
+   e.g: 1) Value for ID_AA64PFR0:FP
+    		0   - Floating Point instructions supported.
+		0xf - Floating Point instructions not supported.
+
+        2) Value for ID_AA64MMFR0:TGran16
+		0   - 16K page size not supported.
+		1   - 16K page size supported.
+
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (21 preceding siblings ...)
  2015-09-16 14:21 ` [PATCH 22/22] arm64: feature registers: Documentation Suzuki K. Poulose
@ 2015-09-18  9:23 ` Suzuki K. Poulose
  2015-09-22 15:19   ` James Morse
  2015-09-23 15:56 ` Dave Martin
  23 siblings, 1 reply; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-18  9:23 UTC (permalink / raw
  To: linux-arm-kernel@lists.infradead.org
  Cc: Catalin Marinas, Will Deacon, Mark Rutland,
	edward.nevill@linaro.org, aph@redhat.com,
	linux-kernel@vger.kernel.org, Andre Przywara,
	ard.biesheuvel@linaro.org, Dave P Martin, Marc Zyngier

On 16/09/15 15:20, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>
> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
> series [1], which does much more.


The series is also available here :

git://linux-arm.org/linux-skp.git cpu-ftr/v1-4.3-rc1

Thanks
Suzuki



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-18  9:23 ` [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
@ 2015-09-22 15:19   ` James Morse
  2015-09-22 15:21     ` Suzuki K. Poulose
  0 siblings, 1 reply; 35+ messages in thread
From: James Morse @ 2015-09-22 15:19 UTC (permalink / raw
  To: Suzuki K. Poulose
  Cc: linux-arm-kernel@lists.infradead.org, Catalin Marinas,
	Will Deacon, Mark Rutland, edward.nevill@linaro.org,
	aph@redhat.com, linux-kernel@vger.kernel.org, Andre Przywara,
	ard.biesheuvel@linaro.org, Dave P Martin, Marc Zyngier

On 18/09/15 10:23, Suzuki K. Poulose wrote:
> On 16/09/15 15:20, Suzuki K. Poulose wrote:
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
>> series [1], which does much more.
> 
> 
> The series is also available here :
> 
> git://linux-arm.org/linux-skp.git cpu-ftr/v1-4.3-rc1

Hi Suzuki,

I gave your branch a spin on some insane combinations with a fast-model -
testing PAN only got enabled when all cores supported it.

Tested-by: James Morse <james.morse@arm.com>


Thanks,

James

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-22 15:19   ` James Morse
@ 2015-09-22 15:21     ` Suzuki K. Poulose
  0 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-22 15:21 UTC (permalink / raw
  To: James Morse
  Cc: linux-arm-kernel@lists.infradead.org, Catalin Marinas,
	Will Deacon, Mark Rutland, edward.nevill@linaro.org,
	aph@redhat.com, linux-kernel@vger.kernel.org, Andre Przywara,
	ard.biesheuvel@linaro.org, Dave P Martin, Marc Zyngier

On 22/09/15 16:19, James Morse wrote:
> On 18/09/15 10:23, Suzuki K. Poulose wrote:
>> On 16/09/15 15:20, Suzuki K. Poulose wrote:
>>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>>
>>> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
>>> series [1], which does much more.
>>
>>
>> The series is also available here :
>>
>> git://linux-arm.org/linux-skp.git cpu-ftr/v1-4.3-rc1
>
> Hi Suzuki,
>
> I gave your branch a spin on some insane combinations with a fast-model -
> testing PAN only got enabled when all cores supported it.
>
> Tested-by: James Morse <james.morse@arm.com>

Thanks for the testing James !

Cheers
Suzuki


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
                   ` (22 preceding siblings ...)
  2015-09-18  9:23 ` [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
@ 2015-09-23 15:56 ` Dave Martin
  2015-09-23 15:58   ` Suzuki K. Poulose
  23 siblings, 1 reply; 35+ messages in thread
From: Dave Martin @ 2015-09-23 15:56 UTC (permalink / raw
  To: Suzuki K. Poulose
  Cc: linux-arm-kernel, Mark.Rutland, ard.biesheuvel, aph,
	Catalin.Marinas, Will.Deacon, linux-kernel, edward.nevill,
	andre.przywara, marc.zyngier

On Wed, Sep 16, 2015 at 03:20:58PM +0100, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
> series [1], which does much more.

[...]

Do you get whitespace errors for some patches in this series?

git am complains for several of the patches, but it's possibly my mail
chewed them up.

If the errors are real, please fix ;)

Cheers
---Dave

[...]


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-23 15:56 ` Dave Martin
@ 2015-09-23 15:58   ` Suzuki K. Poulose
  2015-09-23 16:37     ` Suzuki K. Poulose
  0 siblings, 1 reply; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-23 15:58 UTC (permalink / raw
  To: Dave P Martin
  Cc: linux-arm-kernel@lists.infradead.org, Mark Rutland,
	ard.biesheuvel@linaro.org, aph@redhat.com, Catalin Marinas,
	Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier

On 23/09/15 16:56, Dave P Martin wrote:
> On Wed, Sep 16, 2015 at 03:20:58PM +0100, Suzuki K. Poulose wrote:
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
>> series [1], which does much more.
>
> [...]
>
> Do you get whitespace errors for some patches in this series?
>
> git am complains for several of the patches, but it's possibly my mail
> chewed them up.
>
> If the errors are real, please fix ;)
>

Thanks for reporting, let me take a look.

Suzuki


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-23 15:58   ` Suzuki K. Poulose
@ 2015-09-23 16:37     ` Suzuki K. Poulose
  2015-09-23 17:08       ` Dave P Martin
  0 siblings, 1 reply; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-23 16:37 UTC (permalink / raw
  To: Dave P Martin
  Cc: linux-arm-kernel@lists.infradead.org, Mark Rutland,
	ard.biesheuvel@linaro.org, aph@redhat.com, Catalin Marinas,
	Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier

On 23/09/15 16:58, Suzuki K. Poulose wrote:
> On 23/09/15 16:56, Dave P Martin wrote:
>> On Wed, Sep 16, 2015 at 03:20:58PM +0100, Suzuki K. Poulose wrote:
>>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>>
>>> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
>>> series [1], which does much more.
>>
>> [...]
>>
>> Do you get whitespace errors for some patches in this series?
>>
>> git am complains for several of the patches, but it's possibly my mail
>> chewed them up.
>>
>> If the errors are real, please fix ;)
>>
>
> Thanks for reporting, let me take a look.

The problems were with my patches. I have fixed them locally and will send
them along in the next version. Thanks again for spotting.

Suzuki


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 00/22] arm64: Consolidate CPU feature handling
  2015-09-23 16:37     ` Suzuki K. Poulose
@ 2015-09-23 17:08       ` Dave P Martin
  0 siblings, 0 replies; 35+ messages in thread
From: Dave P Martin @ 2015-09-23 17:08 UTC (permalink / raw
  To: Suzuki K. Poulose
  Cc: linux-arm-kernel@lists.infradead.org, Mark Rutland,
	ard.biesheuvel@linaro.org, aph@redhat.com, Catalin Marinas,
	Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier

On Wed, Sep 23, 2015 at 05:37:19PM +0100, Suzuki K. Poulose wrote:
> On 23/09/15 16:58, Suzuki K. Poulose wrote:
> > On 23/09/15 16:56, Dave P Martin wrote:
> >> On Wed, Sep 16, 2015 at 03:20:58PM +0100, Suzuki K. Poulose wrote:
> >>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> >>>
> >>> This is an updated reincarnation of my "arm64: Expose CPU feature registers"
> >>> series [1], which does much more.
> >>
> >> [...]
> >>
> >> Do you get whitespace errors for some patches in this series?
> >>
> >> git am complains for several of the patches, but it's possibly my mail
> >> chewed them up.
> >>
> >> If the errors are real, please fix ;)
> >>
> >
> > Thanks for reporting, let me take a look.
> 
> The problems were with my patches. I have fixed them locally and will send
> them along in the next version. Thanks again for spotting.

Ah, OK.

Thanks
---Dave


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/22] arm64: Keep track of CPU feature registers
  2015-09-16 14:21 ` [PATCH 07/22] arm64: Keep track of CPU feature registers Suzuki K. Poulose
@ 2015-09-25 11:38   ` Dave Martin
  2015-09-25 13:05     ` Suzuki K. Poulose
  0 siblings, 1 reply; 35+ messages in thread
From: Dave Martin @ 2015-09-25 11:38 UTC (permalink / raw
  To: Suzuki K. Poulose
  Cc: linux-arm-kernel, Mark.Rutland, ard.biesheuvel, aph,
	Catalin.Marinas, Will.Deacon, linux-kernel, edward.nevill,
	andre.przywara, marc.zyngier

On Wed, Sep 16, 2015 at 03:21:05PM +0100, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> This patch adds an infrastructure to keep track of the CPU feature
> registers on the system. For each register, the infrastructure keeps
> track of the system wide safe value of the feature bits. Also, tracks
> the which fields of a register should be matched strictly across all
> the CPUs on the system for the SANITY check infrastructure.
> 
> The feature bits are classified as one of SCALAR_MIN, SCALAR_MAX and DISCRETE
> depending on the implication of the possible values. This information
> is used to decide the safe value for a feature.
> 
> SCALAR_MIN - The smaller value is safer
> SCALAR_MAX - The bigger value is safer
> DISCRETE - We can't decide between the two, so a predefined safe_value is used.

Can documentation of the meanings of these be added somewhere in the
relevant header or in Documentation?

Cheers
---Dave

[...]


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/22] arm64: Keep track of CPU feature registers
  2015-09-25 11:38   ` Dave Martin
@ 2015-09-25 13:05     ` Suzuki K. Poulose
  0 siblings, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-25 13:05 UTC (permalink / raw
  To: Dave P Martin
  Cc: linux-arm-kernel@lists.infradead.org, Mark Rutland,
	ard.biesheuvel@linaro.org, aph@redhat.com, Catalin Marinas,
	Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier

On 25/09/15 12:38, Dave P Martin wrote:
> On Wed, Sep 16, 2015 at 03:21:05PM +0100, Suzuki K. Poulose wrote:
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> This patch adds an infrastructure to keep track of the CPU feature
>> registers on the system. For each register, the infrastructure keeps
>> track of the system wide safe value of the feature bits. Also, tracks
>> the which fields of a register should be matched strictly across all
>> the CPUs on the system for the SANITY check infrastructure.
>>
>> The feature bits are classified as one of SCALAR_MIN, SCALAR_MAX and DISCRETE
>> depending on the implication of the possible values. This information
>> is used to decide the safe value for a feature.
>>
>> SCALAR_MIN - The smaller value is safer
>> SCALAR_MAX - The bigger value is safer
>> DISCRETE - Differing values can't be ranked, so a predefined safe_value is used.
>
> Can documentation of the meanings of these be added somewhere in the
> relevant header or in Documentation?

Sure. They were part of the initial draft but got lost over the reworks.
I will add them back, since the information is now used more widely across
the system than where I started (i.e., userspace visibility).
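
To give a feel for what that documentation needs to convey, the safe
value selection boils down to something like the sketch below (the names
here are invented purely for illustration; they are not the code in this
series):

/*
 * Illustrative sketch: how the classification of a feature field could
 * drive the choice of the system wide safe value when two CPUs disagree.
 */
enum ftr_type {
	FTR_DISCRETE,		/* values can't be ordered; use safe_val */
	FTR_SCALAR_MIN,		/* the smaller value is safer */
	FTR_SCALAR_MAX,		/* the bigger value is safer */
};

struct ftr_field {
	enum ftr_type	type;
	int		safe_val;	/* fallback for FTR_DISCRETE */
};

static int pick_safe_value(const struct ftr_field *f, int new, int cur)
{
	switch (f->type) {
	case FTR_SCALAR_MIN:
		return new < cur ? new : cur;
	case FTR_SCALAR_MAX:
		return new > cur ? new : cur;
	case FTR_DISCRETE:
	default:
		/* Differing discrete values can't be ranked: fall back. */
		return new == cur ? new : f->safe_val;
	}
}

The DISCRETE case is the reason a predefined safe_val has to be supplied
per field up front.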

Cheers
Suzuki



* Re: [PATCH 16/22] arm64/debug: Make use of the system wide safe value
  2015-09-16 14:21 ` [PATCH 16/22] arm64/debug: Make use of the system wide safe value Suzuki K. Poulose
@ 2015-09-29 12:17   ` Vladimir Murzin
  2015-09-29 12:46     ` Suzuki K. Poulose
  2015-09-30 16:13     ` Suzuki K. Poulose
  0 siblings, 2 replies; 35+ messages in thread
From: Vladimir Murzin @ 2015-09-29 12:17 UTC (permalink / raw
  To: Suzuki K. Poulose, linux-arm-kernel@lists.infradead.org
  Cc: Mark Rutland, ard.biesheuvel@linaro.org, aph@redhat.com,
	Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier,
	Dave P Martin

On 16/09/15 15:21, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> Use the system wide value of ID_AA64DFR0 to make safer decisions
> 
> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/include/asm/hw_breakpoint.h |   14 ++------------
>  arch/arm64/kernel/debug-monitors.c     |    6 ++++--
>  arch/arm64/kernel/hw_breakpoint.c      |   19 ++++++++++++++++++-
>  3 files changed, 24 insertions(+), 15 deletions(-)
> 

Looks like this patch introduces a build-time regression if
CONFIG_PERF_EVENTS is not set:

> arch/arm64/kvm/built-in.o: In function `kvm_arm_setup_debug':
> /work/tools/linux/arch/arm64/kvm/debug.c:173: undefined reference to `get_num_brps'
> /work/tools/linux/arch/arm64/kvm/debug.c:177: undefined reference to `get_num_wrps'
> arch/arm64/kvm/built-in.o: In function `kvm_arm_clear_debug':
> /work/tools/linux/arch/arm64/kvm/debug.c:208: undefined reference to `get_num_brps'
> /work/tools/linux/arch/arm64/kvm/debug.c:212: undefined reference to `get_num_wrps'
> arch/arm64/kvm/built-in.o: In function `kvm_arch_dev_ioctl_check_extension':
> /work/tools/linux/arch/arm64/kvm/reset.c:78: undefined reference to `get_num_wrps'
> /work/tools/linux/arch/arm64/kvm/reset.c:75: undefined reference to `get_num_brps'
> make: *** [vmlinux] Error 1

Vladimir



* Re: [PATCH 16/22] arm64/debug: Make use of the system wide safe value
  2015-09-29 12:17   ` Vladimir Murzin
@ 2015-09-29 12:46     ` Suzuki K. Poulose
  2015-09-30 16:13     ` Suzuki K. Poulose
  1 sibling, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-29 12:46 UTC (permalink / raw
  To: Vladimir Murzin, linux-arm-kernel@lists.infradead.org
  Cc: Mark Rutland, ard.biesheuvel@linaro.org, aph@redhat.com,
	Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier,
	Dave P Martin

On 29/09/15 13:17, Vladimir Murzin wrote:
> On 16/09/15 15:21, Suzuki K. Poulose wrote:
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> Use the system wide value of ID_AA64DFR0 to make safer decisions
>>
>> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
>> ---
>>   arch/arm64/include/asm/hw_breakpoint.h |   14 ++------------
>>   arch/arm64/kernel/debug-monitors.c     |    6 ++++--
>>   arch/arm64/kernel/hw_breakpoint.c      |   19 ++++++++++++++++++-
>>   3 files changed, 24 insertions(+), 15 deletions(-)
>>
>
> Looks like this patch introduces a build-time regression if
> CONFIG_PERF_EVENTS is not set:
>
>> arch/arm64/kvm/built-in.o: In function `kvm_arm_setup_debug':
>> /work/tools/linux/arch/arm64/kvm/debug.c:173: undefined reference to `get_num_brps'
>> /work/tools/linux/arch/arm64/kvm/debug.c:177: undefined reference to `get_num_wrps'
>> arch/arm64/kvm/built-in.o: In function `kvm_arm_clear_debug':
>> /work/tools/linux/arch/arm64/kvm/debug.c:208: undefined reference to `get_num_brps'
>> /work/tools/linux/arch/arm64/kvm/debug.c:212: undefined reference to `get_num_wrps'
>> arch/arm64/kvm/built-in.o: In function `kvm_arch_dev_ioctl_check_extension':
>> /work/tools/linux/arch/arm64/kvm/reset.c:78: undefined reference to `get_num_wrps'
>> /work/tools/linux/arch/arm64/kvm/reset.c:75: undefined reference to `get_num_brps'
>> make: *** [vmlinux] Error 1
>
> Vladimir
>

Thanks for pointing this out. I will fix it by moving the get_num_xrps() definition
back to hw_breakpoint.h.
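
For reference, the fix would look roughly like the sketch below (the
accessor and macro names are the ones this series assumes elsewhere, so
treat this as a sketch rather than the final patch):

/*
 * In hw_breakpoint.h; relies on read_system_reg() and
 * cpuid_feature_extract_field() from the cpufeature infrastructure.
 */
static inline int get_num_brps(void)
{
	return 1 +
		cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
					    ID_AA64DFR0_BRPS_SHIFT);
}

static inline int get_num_wrps(void)
{
	return 1 +
		cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
					    ID_AA64DFR0_WRPS_SHIFT);
}

Keeping them as static inlines in the header means KVM can still use them
when CONFIG_PERF_EVENTS is not set and hw_breakpoint.c is not built.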

Suzuki



* Re: [PATCH 16/22] arm64/debug: Make use of the system wide safe value
  2015-09-29 12:17   ` Vladimir Murzin
  2015-09-29 12:46     ` Suzuki K. Poulose
@ 2015-09-30 16:13     ` Suzuki K. Poulose
  1 sibling, 0 replies; 35+ messages in thread
From: Suzuki K. Poulose @ 2015-09-30 16:13 UTC (permalink / raw
  To: Vladimir Murzin, linux-arm-kernel@lists.infradead.org
  Cc: Mark Rutland, ard.biesheuvel@linaro.org, aph@redhat.com,
	Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org,
	edward.nevill@linaro.org, Andre Przywara, Marc Zyngier,
	Dave P Martin

On 29/09/15 13:17, Vladimir Murzin wrote:
> On 16/09/15 15:21, Suzuki K. Poulose wrote:
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> Use the system wide value of ID_AA64DFR0 to make safer decisions
>>
>> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
>> ---
>>   arch/arm64/include/asm/hw_breakpoint.h |   14 ++------------
>>   arch/arm64/kernel/debug-monitors.c     |    6 ++++--
>>   arch/arm64/kernel/hw_breakpoint.c      |   19 ++++++++++++++++++-
>>   3 files changed, 24 insertions(+), 15 deletions(-)
>>
>
> Looks like this patch introduces a build-time regression if
> CONFIG_PERF_EVENTS is not set:
>
>> arch/arm64/kvm/built-in.o: In function `kvm_arm_setup_debug':
>> /work/tools/linux/arch/arm64/kvm/debug.c:173: undefined reference to `get_num_brps'
>> /work/tools/linux/arch/arm64/kvm/debug.c:177: undefined reference to `get_num_wrps'
>> arch/arm64/kvm/built-in.o: In function `kvm_arm_clear_debug':
>> /work/tools/linux/arch/arm64/kvm/debug.c:208: undefined reference to `get_num_brps'
>> /work/tools/linux/arch/arm64/kvm/debug.c:212: undefined reference to `get_num_wrps'
>> arch/arm64/kvm/built-in.o: In function `kvm_arch_dev_ioctl_check_extension':
>> /work/tools/linux/arch/arm64/kvm/reset.c:78: undefined reference to `get_num_wrps'
>> /work/tools/linux/arch/arm64/kvm/reset.c:75: undefined reference to `get_num_brps'
>> make: *** [vmlinux] Error 1

The fixed series is available here:

git://linux-arm.org/linux-skp.git cpu-ftr/v1.1-4.3-rc3


I will wait for some more comments/review before I send the updated
version to the list.

Thanks
Suzuki



end of thread, other threads:[~2015-09-30 16:14 UTC | newest]

Thread overview: 35+ messages:
2015-09-16 14:20 [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
2015-09-16 14:20 ` [PATCH 01/22] arm64: Make the CPU information more clear Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 02/22] arm64: Delay ELF HWCAP initialisation until all CPUs are up Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 03/22] arm64: Move cpu feature detection code Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 04/22] arm64: Move mixed endian support detection Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 05/22] arm64: Move /proc/cpuinfo handling code Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 06/22] arm64: sys_reg: Define System register encoding Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 07/22] arm64: Keep track of CPU feature registers Suzuki K. Poulose
2015-09-25 11:38   ` Dave Martin
2015-09-25 13:05     ` Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 08/22] arm64: Consolidate CPU Sanity check to CPU Feature infrastructure Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 09/22] arm64: Read system wide CPUID value Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 10/22] arm64: Cleanup mixed endian support detection Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 11/22] arm64: Populate cpuinfo after notify_cpu_starting Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 12/22] arm64: Delay cpu feature checks Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 13/22] arm64: Make use of system wide capability checks Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 14/22] arm64: Cleanup HWCAP handling Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 15/22] arm64: Move FP/ASIMD hwcap handling to common code Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 16/22] arm64/debug: Make use of the system wide safe value Suzuki K. Poulose
2015-09-29 12:17   ` Vladimir Murzin
2015-09-29 12:46     ` Suzuki K. Poulose
2015-09-30 16:13     ` Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 17/22] arm64/kvm: Make use of the system wide safe values Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 18/22] arm64: Add helper to decode register from instruction Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 19/22] arm64: cpufeature: Track the user visible fields Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 20/22] arm64: Expose feature registers by emulating MRS Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 21/22] arm64: cpuinfo: Expose MIDR_EL1 and REVIDR_EL1 to sysfs Suzuki K. Poulose
2015-09-16 14:21 ` [PATCH 22/22] arm64: feature registers: Documentation Suzuki K. Poulose
2015-09-18  9:23 ` [PATCH 00/22] arm64: Consolidate CPU feature handling Suzuki K. Poulose
2015-09-22 15:19   ` James Morse
2015-09-22 15:21     ` Suzuki K. Poulose
2015-09-23 15:56 ` Dave Martin
2015-09-23 15:58   ` Suzuki K. Poulose
2015-09-23 16:37     ` Suzuki K. Poulose
2015-09-23 17:08       ` Dave P Martin
