arm64: Restore forced disabling of KPTI on ThunderX
author    dann frazier <dann.frazier@canonical.com>
          Thu, 23 Sep 2021 14:50:02 +0000 (08:50 -0600)
committer Catalin Marinas <catalin.marinas@arm.com>
          Thu, 23 Sep 2021 14:59:15 +0000 (15:59 +0100)
A noted side-effect of commit 0c6c2d3615ef ("arm64: Generate cpucaps.h")
is that cpucaps are now sorted, changing the enumeration order. This
assumed no dependencies between cpucaps, which turned out not to be true
in one case. UNMAP_KERNEL_AT_EL0 currently needs to be processed after
WORKAROUND_CAVIUM_27456. ThunderX systems are incompatible with KPTI, so
unmap_kernel_at_el0() bails if WORKAROUND_CAVIUM_27456 is set. But because
of the sorting, WORKAROUND_CAVIUM_27456 will not yet have been considered
when unmap_kernel_at_el0() checks for it, so the kernel tries to
run with KPTI, and quickly falls over.
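
For illustration, the generated cpucaps.h assigns capability indices in
sorted (alphabetical) order, so the two caps involved end up roughly as
below (a sketch only; the index values here are made up, and the real
numbers depend on the full capability list):

	/*
	 * Sketch of the generated cpucaps.h ordering: because the list is
	 * sorted, ARM64_UNMAP_KERNEL_AT_EL0 sorts before
	 * ARM64_WORKAROUND_CAVIUM_27456, so the Cavium workaround has not
	 * yet been detected when unmap_kernel_at_el0() runs.
	 */
	#define ARM64_UNMAP_KERNEL_AT_EL0	60	/* illustrative value */
	#define ARM64_WORKAROUND_CAVIUM_27456	70	/* illustrative value */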

Because all ThunderX implementations have homogeneous CPUs, we can remove
this dependency by just checking the current CPU for the erratum.
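
For reference, this_cpu_has_cap() evaluates the capability's matches()
callback with local-CPU scope, so it works regardless of whether the cap
has been detected system-wide yet. A rough sketch of the helper, based on
arch/arm64/kernel/cpufeature.c around this kernel version (minor details
may differ):

	bool this_cpu_has_cap(unsigned int n)
	{
		if (!WARN_ON(preemptible()) && n < ARM64_NCAPS) {
			const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n];

			/* Match against the current CPU only (e.g. its MIDR). */
			if (cap)
				return cap->matches(cap, SCOPE_LOCAL_CPU);
		}

		return false;
	}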

Fixes: 0c6c2d3615ef ("arm64: Generate cpucaps.h")
Cc: <stable@vger.kernel.org> # 5.13.x
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210923145002.3394558-1-dann.frazier@canonical.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/kernel/cpufeature.c

index f8a3067..6ec7036 100644
@@ -1526,9 +1526,13 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
        /*
         * For reasons that aren't entirely clear, enabling KPTI on Cavium
         * ThunderX leads to apparent I-cache corruption of kernel text, which
-        * ends as well as you might imagine. Don't even try.
+        * ends as well as you might imagine. Don't even try. We cannot rely
+        * on the cpus_have_*cap() helpers here to detect the CPU erratum
+        * because cpucap detection order may change. However, since we know
+        * affected CPUs are always in a homogeneous configuration, it is
+        * safe to rely on this_cpu_has_cap() here.
         */
-       if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+       if (this_cpu_has_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
                str = "ARM64_WORKAROUND_CAVIUM_27456";
                __kpti_forced = -1;
        }