5 years agoarm64: kdump: fix small typo
Yangtao Li [Fri, 2 Nov 2018 12:36:19 +0000 (08:36 -0400)]
arm64: kdump: fix small typo

This brings the kernel doc in line with the function signature.

Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: makefile fix build of .i file in external module case
Victor Kamensky [Tue, 30 Oct 2018 23:37:10 +0000 (16:37 -0700)]
arm64: makefile fix build of .i file in external module case

After 'a66649dab350 arm64: fix vdso-offsets.h dependency', trying to
build a .i file for an external kernel module fails, complaining that
the prepare0 target is missing. This issue came up with SystemTap,
which builds a variety of .i files for its own generated kernel
modules to figure out the given kernel's features/capabilities.

The issue is that prepare0 is defined in the top-level Makefile only
if KBUILD_EXTMOD is not defined. The .i file rule depends on prepare,
and when KBUILD_EXTMOD is defined the top-level Makefile contains an
empty rule for prepare. But after the mentioned commit,
arch/arm64/Makefile introduces a dependency on prepare0 through its
own prepare target.

Fix it by putting a proper ifdef KBUILD_EXTMOD around the code
introduced by the mentioned commit, matching what the top-level
Makefile does.

Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Victor Kamensky <kamensky@cisco.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: KVM: Guests can skip __install_bp_hardening_cb()s HYP work
James Morse [Fri, 21 Sep 2018 20:49:19 +0000 (21:49 +0100)]
arm64: KVM: Guests can skip __install_bp_hardening_cb()s HYP work

enable_smccc_arch_workaround_1() passes NULL as the hyp_vecs start and
end if the HVC conduit is in use, and ARM_SMCCC_ARCH_WORKAROUND_1 is
detected.

If the guest kernel happened to be built with KVM_INDIRECT_VECTORS,
we go on to allocate a slot, memcpy() the empty workaround in and
do the appropriate cache maintenance.

This works as we always tell memcpy() the range is 0, so it never
accesses the NULL src pointer, but we still do the cache maintenance.

If hyp_vecs_start is NULL we know we're a guest, so just update the fn
like the !KVM_INDIRECT_VECTORS version.
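
A minimal sketch of that early-return (not the exact kernel diff),
using the per-cpu bp_hardening_data that the arm64 cpu_errata code
already keeps; the slot allocation and cache maintenance are elided:

  static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
                                        const char *hyp_vecs_start,
                                        const char *hyp_vecs_end)
  {
          if (!hyp_vecs_start) {
                  /* Guest via the HVC conduit: nothing to copy into HYP */
                  __this_cpu_write(bp_hardening_data.fn, fn);
                  return;
          }

          /* ... allocate a slot, memcpy() the vectors, cache maintenance ... */
  }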

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpufeature: Trap CTR_EL0 access only where it is necessary
Suzuki K Poulose [Tue, 9 Oct 2018 13:47:07 +0000 (14:47 +0100)]
arm64: cpufeature: Trap CTR_EL0 access only where it is necessary

When there is a mismatch in the CTR_EL0 field, we trap
access to CTR from EL0 on all CPUs to expose the safe
value. However, we could skip trapping on a CPU which
matches the safe value.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpufeature: Fix handling of CTR_EL0.IDC field
Suzuki K Poulose [Tue, 9 Oct 2018 13:47:06 +0000 (14:47 +0100)]
arm64: cpufeature: Fix handling of CTR_EL0.IDC field

CTR_EL0.IDC reports the data cache clean requirements for instruction
to data coherence. However, if the field is 0, we need to check the
CLIDR_EL1 fields to detect the status of the feature. Currently we
don't do this, and we generate a warning and taint the kernel when
there is a mismatch in the field among the CPUs. Also, userspace
doesn't have a reliable way to check the CLIDR_EL1 register to
determine the status.

This patch fixes the problem by checking the CLIDR_EL1 fields, when
(CTR_EL0.IDC == 0) and updates the kernel's copy of the CTR_EL0 for
the CPU with the actual status of the feature. This would allow the
sanity check infrastructure to do the proper checking of the fields
and also allow the CTR_EL0 emulation code to supply the real status
of the feature.

Now, if a CPU has raw CTR_EL0.IDC == 0 and effective IDC == 1 (with
overall system wide IDC == 1), we need to expose the real value to
the user. So, we trap CTR_EL0 access on the CPU which reports incorrect
CTR_EL0.IDC.

Fixes: commit 6ae4b6e057888 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
Cc: Shanker Donthineni <shankerd@codeaurora.org>
Cc: Philip Elcan <pelcan@codeaurora.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpufeature: ctr: Fix cpu capability check for late CPUs
Suzuki K Poulose [Tue, 9 Oct 2018 13:47:05 +0000 (14:47 +0100)]
arm64: cpufeature: ctr: Fix cpu capability check for late CPUs

The matches() routine for a capability must honor the "scope"
passed to it and return the proper results.
i.e, when passed with SCOPE_LOCAL_CPU, it should check the
status of the capability on the current CPU. This is used by
verify_local_cpu_capabilities() on a late secondary CPU to make
sure that it's compliant with the established system features.
However, ARM64_HAS_CACHE_{IDC/DIC} always checks the system wide
registers and this could mean that a late secondary CPU could return
"true" (since the CPU hasn't updated the system wide registers yet)
and thus leave the system in an inconsistent state, where
the system assumes it has the IDC/DIC feature, while the new CPU
doesn't.

Fixes: commit 6ae4b6e0578886eb36 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
Cc: Philip Elcan <pelcan@codeaurora.org>
Cc: Shanker Donthineni <shankerd@codeaurora.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoDocumentation/arm64: HugeTLB page implementation
Punit Agrawal [Mon, 8 Oct 2018 10:03:55 +0000 (11:03 +0100)]
Documentation/arm64: HugeTLB page implementation

The Armv8 architecture supports multiple page sizes - 4k, 16k and
64k. Based on the active page size, the Linux port supports
corresponding hugepage sizes at the PMD and PUD (4k only) levels.

In addition, the architecture also supports caching larger sized
ranges (composed of multiple entries) at the PTE and PMD level in the
TLBs using the contiguous bit. The Linux port makes use of this
architectural support to enable additional hugepage sizes.

Describe the two different types of hugepages supported by the arm64
kernel and the hugepage sizes enabled by each.

Acked-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: mm: Use __pa_symbol() for set_swapper_pgd()
James Morse [Wed, 10 Oct 2018 14:43:22 +0000 (15:43 +0100)]
arm64: mm: Use __pa_symbol() for set_swapper_pgd()

commit 2330b7ca78350efcb ("arm64/mm: use fixmap to modify
swapper_pg_dir") modifies the swapper_pg_dir via the fixmap
as the kernel page tables have been moved to a read-only
part of the kernel mapping.

Using __pa() to set up the fixmap causes CONFIG_DEBUG_VIRTUAL
to fire, as this function is used on the kernel-image swapper
address. The in_swapper_pgdir() test before each call of this
function means set_swapper_pgd() will only ever be called when
pgdp points somewhere in the kernel-image mapping of
swapper_pg_dir. Use __pa_symbol().
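
A hedged illustration of the distinction (not the kernel's exact diff):
__pa_symbol() is the variant CONFIG_DEBUG_VIRTUAL permits for
kernel-image symbols such as swapper_pg_dir, whereas __pa() is meant
for linear-map addresses:

  /* OK: swapper_pg_dir is a kernel-image symbol */
  phys_addr_t pgd_phys = __pa_symbol(swapper_pg_dir);

  /* Would trigger a warning under CONFIG_DEBUG_VIRTUAL */
  phys_addr_t bad_phys = __pa(swapper_pg_dir);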

Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Add silicon-errata.txt entry for ARM erratum 1188873
Marc Zyngier [Wed, 10 Oct 2018 16:25:51 +0000 (17:25 +0100)]
arm64: Add silicon-errata.txt entry for ARM erratum 1188873

Document that we actually work around ARM erratum 1188873

Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoRevert "arm64: uaccess: implement unsafe accessors"
James Morse [Wed, 10 Oct 2018 15:55:44 +0000 (16:55 +0100)]
Revert "arm64: uaccess: implement unsafe accessors"

This reverts commit a1f33941f7e103bcf471eaf8461b212223c642d6.

The unsafe accessors allow the PAN enable/disable calls to be made
once for a group of accesses. Adding these means we can now have
sequences that look like this:

| user_access_begin();
| unsafe_put_user(static-value, x, err);
| unsafe_put_user(helper-that-sleeps(), x, err);
| user_access_end();

Calling schedule() without taking an exception doesn't switch the
PSTATE or TTBRs. We can switch out of a uaccess-enabled region, and
run other code with uaccess enabled for a different thread.

We can also switch from uaccess-disabled code back into this region,
meaning the unsafe_put_user()s will fault.

For software-PAN, threads that do this will get stuck as
handle_mm_fault() will determine the page has already been mapped in,
but we fault again as the page tables aren't loaded.

To solve this we need code in __switch_to() that saves/restores the
PAN state.

Acked-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: mm: Drop the unused cpu parameter
Shaokun Zhang [Sat, 6 Oct 2018 08:49:04 +0000 (16:49 +0800)]
arm64: mm: Drop the unused cpu parameter

The cpu parameter is never used in flush_context(); remove it.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoMAINTAINERS: fix bad sdei paths
James Morse [Fri, 5 Oct 2018 14:21:20 +0000 (15:21 +0100)]
MAINTAINERS: fix bad sdei paths

The SDEI header files had an 'arm_' namespace added, but the patterns
in the MAINTAINERS files were missed. Oops.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: mm: Use #ifdef for the __PAGETABLE_P?D_FOLDED defines
James Morse [Fri, 5 Oct 2018 13:49:16 +0000 (14:49 +0100)]
arm64: mm: Use #ifdef for the __PAGETABLE_P?D_FOLDED defines

__is_defined(__PAGETABLE_P?D_FOLDED) doesn't quite work as intended
as these symbols are internal to asm-generic and aren't defined in the
way kconfig expects. This makes them always evaluate to false.
Switch to #ifdef.
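
A short illustration of why (assuming the usual IS_ENABLED()-style
implementation of __is_defined(), which only detects macros that
expand to 1): __PAGETABLE_PMD_FOLDED is defined with no value, so
__is_defined() evaluates to 0 even when the PMD level is folded,
whereas #ifdef tests for the definition itself:

  #ifdef __PAGETABLE_PMD_FOLDED
          /* the PMD level is folded: no separate PMD pages to handle */
  #else
          /* a real PMD level exists */
  #endif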

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Fix typo in a comment in arch/arm64/mm/kasan_init.c
Kyrylo Tkachov [Thu, 4 Oct 2018 16:06:46 +0000 (17:06 +0100)]
arm64: Fix typo in a comment in arch/arm64/mm/kasan_init.c

"bellow" -> "below"

The recommendation from kegel.com/kerspell is to only fix the howlers.
"Bellow" is a synonym of "howl" so this should be appropriate.

Signed-off-by: Kyrylo Tkachov <kyrylo.tkachov@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: xen: Use existing helper to check interrupt status
Julien Thierry [Tue, 28 Aug 2018 15:51:17 +0000 (16:51 +0100)]
arm64: xen: Use existing helper to check interrupt status

The status of interrupts might depend on more than just pstate. Use
interrupts_disabled() instead of raw_irqs_disabled_flags() to take the full
context into account.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Use daifflag_restore after bp_hardening
Julien Thierry [Tue, 28 Aug 2018 15:51:15 +0000 (16:51 +0100)]
arm64: Use daifflag_restore after bp_hardening

For EL0 entries requiring bp_hardening, daif status is kept at
DAIF_PROCCTX_NOIRQ until after hardening has been done. Then interrupts
are enabled through local_irq_enable().

Before using local_irq_* functions, daifflags should be properly restored
to a state where IRQs are enabled.

Enable IRQs by restoring DAIF_PROCCTX state after bp hardening.
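
A minimal sketch of the resulting call on the EL0 entry path (assuming
the existing local_daif_restore() helper and DAIF_PROCCTX definition):

  /* after bp hardening, restore the full DAIF context ... */
  local_daif_restore(DAIF_PROCCTX);   /* ... rather than local_irq_enable() */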

Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: daifflags: Use irqflags functions for daifflags
Julien Thierry [Tue, 28 Aug 2018 15:51:14 +0000 (16:51 +0100)]
arm64: daifflags: Use irqflags functions for daifflags

Some of the work done in daifflags save/restore is already provided
by irqflags functions. Daifflags should always be a superset of irqflags
(it handles irq status + status of other flags). Modifying behaviour of
irqflags should alter the behaviour of daifflags.

Use irqflags_save/restore functions for the corresponding daifflags
operation.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: arch_timer: avoid unused function warning
Arnd Bergmann [Tue, 2 Oct 2018 21:11:44 +0000 (23:11 +0200)]
arm64: arch_timer: avoid unused function warning

arm64_1188873_read_cntvct_el0() is protected by the correct
CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is
also inside of a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section,
which causes a warning if that option is disabled:

drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]

Since the erratum requires that we always apply the workaround
in the timer driver, select that symbol as we do for SoC
specific errata.

Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Trap WFI executed in userspace
Marc Zyngier [Mon, 1 Oct 2018 11:19:43 +0000 (12:19 +0100)]
arm64: Trap WFI executed in userspace

It recently came to light that userspace can execute WFI, and that
the arm64 kernel doesn't trap this event. This sounds rather benign,
but the kernel should decide when it wants to wait for an interrupt,
and not userspace.

Let's trap WFI and immediately return after having skipped the
instruction. This effectively makes WFI a rather expensive NOP.
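
A conceptual sketch of such a handler (the handler name is assumed for
illustration; arm64_skip_faulting_instruction() is the existing helper
that advances the PC past the trapped instruction):

  static void user_wfi_handler(unsigned int esr, struct pt_regs *regs)
  {
          /* Treat the trapped WFI as an expensive NOP: just step over it */
          arm64_skip_faulting_instruction(regs,
                          (esr & ESR_ELx_IL) ? 4 : 2);
  }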

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: docs: Document SSBS HWCAP
Will Deacon [Mon, 1 Oct 2018 14:24:48 +0000 (15:24 +0100)]
arm64: docs: Document SSBS HWCAP

We advertise the MRS/MSR instructions for toggling SSBS at EL0 using an
HWCAP, so document it along with the others.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: docs: Fix typos in ELF hwcaps
Giacomo Travaglini [Mon, 1 Oct 2018 14:24:47 +0000 (15:24 +0100)]
arm64: docs: Fix typos in ELF hwcaps

Fix some typos in our hwcap documentation, where we refer to the wrong
ID register for some of the capabilities.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
[will: fix amusing binary constants]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/kprobes: remove an extra semicolon in arch_prepare_kprobe
zhong jiang [Thu, 9 Aug 2018 14:20:40 +0000 (22:20 +0800)]
arm64/kprobes: remove an extra semicolon in arch_prepare_kprobe

There is an extra semicolon in arch_prepare_kprobe, remove it.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/numa: Unify common error path in numa_init()
Anshuman Khandual [Sat, 22 Sep 2018 15:39:56 +0000 (21:09 +0530)]
arm64/numa: Unify common error path in numa_init()

At present numa_free_distance() is being called before numa_distance is
even initialized with numa_alloc_distance(), which is really pointless.
Instead, let's call numa_free_distance() on the common error path inside
numa_init() after numa_alloc_distance() has succeeded.

Fixes: 1a2db30034 ("arm64, numa: Add NUMA support for arm64 platforms")
Acked-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/numa: Report correct memblock range for the dummy node
Anshuman Khandual [Sat, 22 Sep 2018 15:39:55 +0000 (21:09 +0530)]
arm64/numa: Report correct memblock range for the dummy node

The dummy node ID is marked into all memory ranges on the system. So the
dummy node really extends the entire memblock.memory. Hence report correct
extent information for the dummy node using memblock range helper functions
instead of the range [0LLU, PFN_PHYS(max_pfn) - 1].

Fixes: 1a2db30034 ("arm64, numa: Add NUMA support for arm64 platforms")
Acked-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: Define esr_to_debug_fault_info()
Anshuman Khandual [Sat, 22 Sep 2018 15:39:54 +0000 (21:09 +0530)]
arm64/mm: Define esr_to_debug_fault_info()

fault_info[] and debug_fault_info[] are static arrays defining memory abort
exception handling functions looking into ESR fault status code encodings.
As esr_to_fault_info() is already available providing fault_info[] array
lookup, it really makes sense to have a corresponding debug_fault_info[]
array lookup function as well. This just adds an equivalent helper function
esr_to_debug_fault_info().
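
A hedged sketch of what such a helper looks like, mirroring
esr_to_fault_info() (DBG_ESR_EVT() is the existing macro that extracts
the debug event type from the ESR):

  static inline const struct fault_info *esr_to_debug_fault_info(unsigned int esr)
  {
          return debug_fault_info + DBG_ESR_EVT(esr);
  }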

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: Reorganize arguments for is_el1_permission_fault()
Anshuman Khandual [Sat, 22 Sep 2018 15:39:53 +0000 (21:09 +0530)]
arm64/mm: Reorganize arguments for is_el1_permission_fault()

Most memory abort exception handling related functions have the arguments
in the order (addr, esr, regs) except is_el1_permission_fault(). This
changes the argument order in this function to (addr, esr, regs), like
the others.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: Use ESR_ELx_FSC macro while decoding fault exception
Anshuman Khandual [Sat, 22 Sep 2018 15:39:52 +0000 (21:09 +0530)]
arm64/mm: Use ESR_ELx_FSC macro while decoding fault exception

Just replace the hard-coded value of 63 (0b111111) with the existing macro
ESR_ELx_FSC when parsing the status code during a fault exception.
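
For illustration, ESR_ELx_FSC masks the fault status code in bits [5:0]
of ESR_ELx, so the lookup reads as follows (a sketch of the resulting
code, assuming the existing fault_info[] table):

  #define ESR_ELx_FSC     (0x3F)   /* == 63, ESR bits [5:0] */

  static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
  {
          return fault_info + (esr & ESR_ELx_FSC);
  }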

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: arch_timer: Add workaround for ARM erratum 1188873
Marc Zyngier [Thu, 27 Sep 2018 16:15:34 +0000 (17:15 +0100)]
arm64: arch_timer: Add workaround for ARM erratum 1188873

When running on Cortex-A76, a timer access from an AArch32 EL0
task may end up with a corrupted value or register. The workaround for
this is to trap these accesses at EL1/EL2 and execute them there.

This only affects versions r0p0, r1p0 and r2p0 of the CPU.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Add CNTFRQ trap handler
Marc Zyngier [Thu, 27 Sep 2018 16:15:33 +0000 (17:15 +0100)]
arm64: compat: Add CNTFRQ trap handler

Just like CNTVCT, we need to handle userspace trapping into the
kernel if we've decided that the timer wasn't fit for purpose...
64bit userspace is already dealt with, but we're missing the
equivalent compat handling.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Add CNTVCT trap handler
Marc Zyngier [Thu, 27 Sep 2018 16:15:32 +0000 (17:15 +0100)]
arm64: compat: Add CNTVCT trap handler

Since people seem to make a point in breaking the userspace visible
counter, we have no choice but to trap the access. We already do this
for 64bit userspace, but this is lacking for compat. Let's provide
the required handler.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Add cp15_32 and cp15_64 handler arrays
Marc Zyngier [Thu, 27 Sep 2018 16:15:31 +0000 (17:15 +0100)]
arm64: compat: Add cp15_32 and cp15_64 handler arrays

We're now ready to start handling CP15 access. Let's add (empty)
arrays for both 32 and 64bit accessors, and the code that deals
with them.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Add condition code checks and IT advance
Marc Zyngier [Thu, 27 Sep 2018 16:15:30 +0000 (17:15 +0100)]
arm64: compat: Add condition code checks and IT advance

Here's a /really nice/ part of the architecture: a CP15 access is
allowed to trap even if it fails its condition check, and SW must
handle it. This includes decoding the IT state if this happens in
am IT block. As a consequence, SW must also deal with advancing
the IT state machine.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Add separate CP15 trapping hook
Marc Zyngier [Thu, 27 Sep 2018 16:15:29 +0000 (17:15 +0100)]
arm64: compat: Add separate CP15 trapping hook

Instead of directly generating an UNDEF when trapping a CP15 access,
let's add a new entry point to that effect (which only generates an
UNDEF for now).

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Add decoding macros for CP15_32 and CP15_64 traps
Marc Zyngier [Thu, 27 Sep 2018 16:15:28 +0000 (17:15 +0100)]
arm64: Add decoding macros for CP15_32 and CP15_64 traps

So far, we don't have anything to help decoding ESR_ELx when dealing
with ESR_ELx_EC_CP15_{32,64}. As we're about to handle some of those,
let's add some useful macros.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: remove unused asm/compiler.h header file
Ard Biesheuvel [Thu, 27 Sep 2018 13:07:37 +0000 (15:07 +0200)]
arm64: remove unused asm/compiler.h header file

arm64 does not define CONFIG_HAVE_ARCH_COMPILER_H, nor does it keep
anything useful in its copy of asm/compiler.h, so let's remove it
before anybody starts using it.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: compat: Provide definition for COMPAT_SIGMINSTKSZ
Will Deacon [Wed, 5 Sep 2018 14:34:43 +0000 (15:34 +0100)]
arm64: compat: Provide definition for COMPAT_SIGMINSTKSZ

arch/arm/ defines a SIGMINSTKSZ of 2k, so we should use the same value
for compat tasks.
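
A minimal sketch of the arm64 side (mirroring arch/arm's 2k value; the
exact header placement is elided):

  #define COMPAT_SIGMINSTKSZ      2048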

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reported-by: Steve McIntyre <steve.mcintyre@arm.com>
Tested-by: Steve McIntyre <93sam@debian.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agosignal: Introduce COMPAT_SIGMINSTKSZ for use in compat_sys_sigaltstack
Will Deacon [Wed, 5 Sep 2018 14:34:42 +0000 (15:34 +0100)]
signal: Introduce COMPAT_SIGMINSTKSZ for use in compat_sys_sigaltstack

The sigaltstack(2) system call fails with -ENOMEM if the new alternative
signal stack is found to be smaller than SIGMINSTKSZ. On architectures
such as arm64, where the native value for SIGMINSTKSZ is larger than
the compat value, this can result in an unexpected error being reported
to a compat task. See, for example:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=904385

This patch fixes the problem by extending do_sigaltstack to take the
minimum signal stack size as an additional parameter, allowing the
native and compat system call entry code to pass in their respective
values. COMPAT_SIGMINSTKSZ is just defined as SIGMINSTKSZ if it has not
been defined by the architecture.
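
A sketch of the generic fallback and size check described above (the
min_ss_size parameter name follows the description; treat the exact
placement as illustrative):

  /* fall back to the native value if the arch does not provide one */
  #ifndef COMPAT_SIGMINSTKSZ
  #define COMPAT_SIGMINSTKSZ      SIGMINSTKSZ
  #endif

  /* inside do_sigaltstack(), with the minimum now passed in: */
  if (ss_size < min_ss_size)
          return -ENOMEM;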

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Reported-by: Steve McIntyre <steve.mcintyre@arm.com>
Tested-by: Steve McIntyre <93sam@debian.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoperf: Convert to using %pOFn instead of device_node.name
Rob Herring [Tue, 28 Aug 2018 01:52:39 +0000 (20:52 -0500)]
perf: Convert to using %pOFn instead of device_node.name

In preparation to remove the node name pointer from struct device_node,
convert printf users to use the %pOFn format specifier.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: move runtime pgds to rodata
Jun Yao [Mon, 24 Sep 2018 16:56:18 +0000 (17:56 +0100)]
arm64/mm: move runtime pgds to rodata

Now that deliberate writes to swapper_pg_dir are made via the fixmap, we
can defend against errant writes by moving it into the rodata section.
Since tramp_pg_dir and reserved_ttbr0 must be at a fixed offset from
swapper_pg_dir, and are not modified at runtime, these are also moved
into the rodata section. Likewise, idmap_pg_dir is not modified at
runtime, and is moved into rodata.

Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
Reviewed-by: James Morse <james.morse@arm.com>
[Mark: simplify linker script, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: use fixmap to modify swapper_pg_dir
Jun Yao [Mon, 24 Sep 2018 16:15:02 +0000 (17:15 +0100)]
arm64/mm: use fixmap to modify swapper_pg_dir

Once swapper_pg_dir is in the rodata section, it will not be possible to
modify it directly, but we will need to modify it in some cases.

To enable this, we can use the fixmap when deliberately modifying
swapper_pg_dir. As the pgd is only transiently mapped, this provides
some resilience against illicit modification of the pgd, e.g. for
Kernel Space Mirror Attack (KSMA).

Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
Reviewed-by: James Morse <james.morse@arm.com>
[Mark: simplify ifdeffery, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: Separate boot-time page tables from swapper_pg_dir
Jun Yao [Mon, 24 Sep 2018 14:47:49 +0000 (15:47 +0100)]
arm64/mm: Separate boot-time page tables from swapper_pg_dir

Since the address of swapper_pg_dir is fixed for a given kernel image,
it is an attractive target for manipulation via an arbitrary write. To
mitigate this we'd like to make it read-only by moving it into the
rodata section.

We require that swapper_pg_dir is at a fixed offset from tramp_pg_dir
and reserved_ttbr0, so these will also need to move into rodata.
However, swapper_pg_dir is allocated along with some transient page
tables used for boot which we do not want to move into rodata.

As a step towards this, this patch separates the boot-time page tables
into a new init_pg_dir, and reduces swapper_pg_dir to the single page it
needs to be. This allows us to retain the relationship between
swapper_pg_dir, tramp_pg_dir, and reserved_ttbr0, while cleanly
separating these from the boot-time page tables.

The init_pg_dir holds all of the pgd/pud/pmd/pte levels needed during
boot, and all of these levels will be freed when we switch to the
swapper_pg_dir, which is initialized by the existing code in
paging_init(). Since we start off on the init_pg_dir, we no longer need
to allocate a transient page table in paging_init() in order to ensure
that swapper_pg_dir isn't live while we initialize it.

There should be no functional change as a result of this patch.

Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
Reviewed-by: James Morse <james.morse@arm.com>
[Mark: place init_pg_dir after BSS, fold mm changes, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/mm: Pass ttbr1 as a parameter to __enable_mmu()
Jun Yao [Mon, 24 Sep 2018 13:51:13 +0000 (14:51 +0100)]
arm64/mm: Pass ttbr1 as a parameter to __enable_mmu()

In subsequent patches we'll use a transient pgd during the primary cpu's
boot process. To make this work while allowing secondary cpus to use the
swapper_pg_dir, we need to pass the relevant TTBR1 pgd as a parameter
to __enable_mmu().

This patch updates __enable_mmu() to take this as a parameter, updating
callsites to pass swapper_pg_dir for now.

There should be no functional change as a result of this patch.

Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
Reviewed-by: James Morse <james.morse@arm.com>
[Mark: simplify assembly, clarify commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: lse: remove -fcall-used-x0 flag
Tri Vo [Wed, 19 Sep 2018 19:27:50 +0000 (12:27 -0700)]
arm64: lse: remove -fcall-used-x0 flag

x0 is not callee-saved in the PCS. So there is no need to specify
-fcall-used-x0.

Clang doesn't currently support -fcall-used flags. This patch will help
building the kernel with clang.

Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Tri Vo <trong@android.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Remove unused VGA console support
Andrew Murray [Thu, 13 Sep 2018 11:56:40 +0000 (12:56 +0100)]
arm64: Remove unused VGA console support

Support for VGA_CONSOLE is not allowable due to commit ee23794b8668
("video: vgacon: Don't build on arm64"), thus remove the associated
unused code.

Whilst PCI on arm64 would support VGA, a valid screen_info structure
is missing.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Kconfig: Remove ARCH_HAS_HOLES_MEMORYMODEL
James Morse [Fri, 31 Aug 2018 15:19:43 +0000 (16:19 +0100)]
arm64: Kconfig: Remove ARCH_HAS_HOLES_MEMORYMODEL

include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as
relevant when parts of the memmap have been free()d. This would
happen on systems where memory is smaller than a sparsemem-section,
and the extra struct pages are expensive. pfn_valid() on these
systems returns true for the whole sparsemem-section, so an extra
memmap_valid_within() check is needed.

On arm64 we have nomap memory, so always provide pfn_valid() to test
for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra checks
are already rolled up into pfn_valid().

Remove it.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/cpufeatures: Emulate MRS instructions by parsing ESR_ELx.ISS
Anshuman Khandual [Thu, 20 Sep 2018 04:06:21 +0000 (09:36 +0530)]
arm64/cpufeatures: Emulate MRS instructions by parsing ESR_ELx.ISS

Armv8.4-A extension enables MRS instruction encodings inside ESR_ELx.ISS
during exception class ESR_ELx_EC_SYS64 (0x18). This encoding can be used
to emulate MRS instructions which can avoid fetch/decode from user space
thus improving performance. This adds a new sys64_hook structure element
with applicable ESR mask/value pair for MRS instructions on various system
registers, constrained to the sysreg encodings which are currently allowed
to be emulated.
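
A hedged sketch of the shape of such a hook (the mask/value macro names
here are placeholders rather than the patch's exact identifiers, and
mrs_handler stands in for the emulation routine):

  static struct sys64_hook sys64_hooks[] = {
          {
                  /* Trapped MRS accesses to the emulated ID registers */
                  .esr_mask = MRS_OP_ESR_MASK,    /* placeholder name */
                  .esr_val  = MRS_OP_ESR_VALUE,   /* placeholder name */
                  .handler  = mrs_handler,
          },
          {},
  };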

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/cpufeatures: Factorize emulate_mrs()
Anshuman Khandual [Thu, 20 Sep 2018 04:06:20 +0000 (09:36 +0530)]
arm64/cpufeatures: Factorize emulate_mrs()

MRS emulation gets triggered with exception class (0x00 or 0x18) eventually
calling the function emulate_mrs() which fetches the user space instruction
and analyses its encodings (OP0, OP1, OP2, CRN, CRM, RT). The kernel tries
to emulate the given instruction by looking into the encoding details. Going
forward, these encodings can also be parsed from the ESR_ELx.ISS fields without
needing to fetch/decode the faulting userspace instruction, which can improve
performance. This factorizes the emulate_mrs() function in a way that it can be
called directly with the MRS encodings (OP0, OP1, OP2, CRN, CRM) for any given
target register, which can then be used directly from the 0x18 exception class.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/cpufeatures: Introduce ESR_ELx_SYS64_ISS_RT()
Anshuman Khandual [Thu, 20 Sep 2018 04:06:19 +0000 (09:36 +0530)]
arm64/cpufeatures: Introduce ESR_ELx_SYS64_ISS_RT()

Extracting the target register from the ESR.ISS encoding has already been
required in multiple places. Just make it a macro definition and replace all
the existing use cases.
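
A sketch of such a macro (the Rt field sits in ISS bits [9:5] for these
trap classes; the shift/mask names follow the existing ESR_ELx_*
convention and are shown here for illustration):

  #define ESR_ELx_SYS64_ISS_RT_SHIFT      5
  #define ESR_ELx_SYS64_ISS_RT_MASK       (0x1fUL << ESR_ELx_SYS64_ISS_RT_SHIFT)
  #define ESR_ELx_SYS64_ISS_RT(esr) \
          (((esr) & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT)

  /* e.g.: int rt = ESR_ELx_SYS64_ISS_RT(esr); */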

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE
Will Deacon [Wed, 19 Sep 2018 10:41:21 +0000 (11:41 +0100)]
arm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE

There's no need to treat mismatched cache-line sizes reported by CTR_EL0
differently to any other mismatched fields that we treat as "STRICT" in
the cpufeature code. In both cases we need to trap and emulate EL0
accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and
rely on ARM64_MISMATCHED_CACHE_TYPE instead.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: move ARM64_HAS_CNP in the empty cpucaps.h slot]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: KVM: Enable Common Not Private translations
Vladimir Murzin [Tue, 31 Jul 2018 13:08:57 +0000 (14:08 +0100)]
arm64: KVM: Enable Common Not Private translations

We rely on cpufeature framework to detect and enable CNP so for KVM we
need to patch hyp to set CNP bit just before TTBR0_EL2 gets written.

For the guest we encode CNP bit while building vttbr, so we don't need
to bother with that in a world switch.

Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: mm: Support Common Not Private translations
Vladimir Murzin [Tue, 31 Jul 2018 13:08:56 +0000 (14:08 +0100)]
arm64: mm: Support Common Not Private translations

Common Not Private (CNP) is a feature of ARMv8.2 extension which
allows translation table entries to be shared between different PEs in
the same inner shareable domain, so the hardware can use this fact to
optimise the caching of such entries in the TLB.

CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to
the hardware that the translation table entries pointed to by this
TTBR are the same as every PE in the same inner shareable domain for
which the equivalent TTBR also has CNP bit set. In case CNP bit is set
but TTBR does not point at the same translation table entries for a
given ASID and VMID, then the system is mis-configured, so the results
of translations are UNPREDICTABLE.

For kernel we postpone setting CNP till all cpus are up and rely on
cpufeature framework to 1) patch the code which is sensitive to CNP
and 2) update TTBR1_EL1 with CNP bit set. TTBR1_EL1 can be
reprogrammed as result of hibernation or cpuidle (via __enable_mmu).
For these two cases we restore CnP bit via __cpu_suspend_exit().

There are a few cases where we need to take care of changes to TTBR0_EL1:
  - a switch to idmap
  - software emulated PAN

We rule out the latter via Kconfig options, and for the former we make
sure that CNP is set for non-zero ASIDs only.
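
A minimal sketch of the TTBR0 side of that rule (CnP is bit 0 of
TTBRx_ELy; ttbr0_with_cnp() is an assumed helper name used here only
for illustration):

  #define TTBR_CNP_BIT            (1UL << 0)

  static inline unsigned long ttbr0_with_cnp(unsigned long ttbr0,
                                             unsigned int asid)
  {
          if (system_supports_cnp() && asid)
                  ttbr0 |= TTBR_CNP_BIT;
          return ttbr0;
  }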

Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: sysreg: Clean up instructions for modifying PSTATE fields
Suzuki K Poulose [Sun, 16 Sep 2018 22:17:23 +0000 (23:17 +0100)]
arm64: sysreg: Clean up instructions for modifying PSTATE fields

Instructions for modifying the PSTATE fields which were not supported
in the older toolchains (e.g, PAN, UAO) are generated using macros.
We have so far used the normal sys_reg() helper for defining the PSTATE
fields. While this works fine, it is really difficult to correlate the
code with the Arm ARM definition.

As per Arm ARM, the PSTATE fields are defined only using Op1, Op2 fields,
with fixed values for Op0, CRn. Also the CRm field has been reserved
for the Immediate value for the instruction. So using the sys_reg()
looks quite confusing.

This patch cleans up the instruction helpers by bringing them
in line with the Arm ARM definitions to make it easier to correlate
code with the document. No functional changes.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: fix for bad_mode() handler to always result in panic
Hari Vyas [Tue, 7 Aug 2018 11:03:48 +0000 (16:33 +0530)]
arm64: fix for bad_mode() handler to always result in panic

The bad_mode() handler is called if we encounter an unknown exception,
with the expectation that the subsequent call to panic() will halt the
system. Unfortunately, if the exception calling bad_mode() is taken from
EL0, then the call to die() can end up killing the current user task and
calling schedule() instead of falling through to panic().

Remove the die() call altogether, since we really want to bring down the
machine in this "impossible" case.

Signed-off-by: Hari Vyas <hari.vyas@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: force_signal_inject: WARN if called from kernel context
Will Deacon [Tue, 14 Aug 2018 15:24:54 +0000 (16:24 +0100)]
arm64: force_signal_inject: WARN if called from kernel context

force_signal_inject() is designed to send a fatal signal to userspace,
so WARN if the current pt_regs indicates a kernel context. This can
currently happen for the undefined instruction trap, so patch that up so
we always BUG() if we didn't have a handler.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpu: Move errata and feature enable callbacks closer to callers
Will Deacon [Tue, 7 Aug 2018 12:53:41 +0000 (13:53 +0100)]
arm64: cpu: Move errata and feature enable callbacks closer to callers

The cpu errata and feature enable callbacks are only called via their
respective arm64_cpu_capabilities structure and therefore shouldn't
exist in the global namespace.

Move the PAN, RAS and cache maintenance emulation enable callbacks into
the same files as their corresponding arm64_cpu_capabilities structures,
making them static in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoKVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe
Will Deacon [Wed, 8 Aug 2018 15:10:54 +0000 (16:10 +0100)]
KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe

When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD
has been forcefully disabled on the kernel command-line.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3
Will Deacon [Tue, 7 Aug 2018 12:47:06 +0000 (13:47 +0100)]
arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3

On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD
state without needing to call into firmware.

This patch hooks into the existing SSBD infrastructure so that SSBS is
used on CPUs that support it, but it's all made horribly complicated by
the very real possibility of big/little systems that don't uniformly
provide the new capability.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: entry: Allow handling of undefined instructions from EL1
Will Deacon [Tue, 7 Aug 2018 12:43:06 +0000 (13:43 +0100)]
arm64: entry: Allow handling of undefined instructions from EL1

Rather than panic() when taking an undefined instruction exception from
EL1, allow a hook to be registered in case we want to emulate the
instruction, like we will for the SSBS PSTATE manipulation instructions.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: ssbd: Drop #ifdefs for PR_SPEC_STORE_BYPASS
Will Deacon [Fri, 15 Jun 2018 10:50:42 +0000 (11:50 +0100)]
arm64: ssbd: Drop #ifdefs for PR_SPEC_STORE_BYPASS

Now that we're all merged nicely into mainline, there's no need to check
to see if PR_SPEC_STORE_BYPASS is defined.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpufeature: Detect SSBS and advertise to userspace
Will Deacon [Fri, 15 Jun 2018 10:37:34 +0000 (11:37 +0100)]
arm64: cpufeature: Detect SSBS and advertise to userspace

Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass
Safe (SSBS) which can be used as a mitigation against Spectre variant 4.

Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS
directly, so that userspace can toggle the SSBS control without trapping
to the kernel.

This patch probes for the existence of SSBS and advertises the new instructions
to userspace if they exist.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: Fix silly typo in comment
Will Deacon [Fri, 15 Jun 2018 10:36:43 +0000 (11:36 +0100)]
arm64: Fix silly typo in comment

I was passing through and figured I'd fix this up:

featuer -> feature

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Rewrite stale comment in asm/tlbflush.h
Will Deacon [Tue, 28 Aug 2018 13:52:17 +0000 (14:52 +0100)]
arm64: tlb: Rewrite stale comment in asm/tlbflush.h

Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but
actually that whole block comment needs to be rewritten.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Avoid synchronous TLBIs when freeing page tables
Will Deacon [Thu, 23 Aug 2018 20:16:50 +0000 (21:16 +0100)]
arm64: tlb: Avoid synchronous TLBIs when freeing page tables

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being
called if we fail to batch table pages for freeing. This in turn allows
us to postpone walk-cache invalidation until tlb_finish_mmu(), which
avoids lots of unnecessary DSBs and means we can shoot down the ASID if
the range is large enough.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Adjust stride and type of TLBI according to mmu_gather
Will Deacon [Thu, 23 Aug 2018 20:08:31 +0000 (21:08 +0100)]
arm64: tlb: Adjust stride and type of TLBI according to mmu_gather

Now that the core mmu_gather code keeps track of both the levels of page
table cleared and also whether or not these entries correspond to
intermediate entries, we can use this in our tlb_flush() callback to
reduce the number of invalidations we issue as well as their scope.
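
A sketch of the resulting tlb_flush() under stated assumptions (the
tlb_get_unmap_size() helper and the freed_tables field come from the
mmu_gather changes merged alongside this series; the fullmm case is
elided):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          struct vm_area_struct vma = { .vm_mm = tlb->mm };
          bool last_level = !tlb->freed_tables;
          unsigned long stride = tlb_get_unmap_size(tlb);

          __flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
  }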

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code
Will Deacon [Thu, 23 Aug 2018 18:48:44 +0000 (19:48 +0100)]
arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code

If there's one thing the RCU-based table freeing doesn't need, it's more
ifdeffery.

Remove the redundant !CONFIG_HAVE_RCU_TABLE_FREE code, since this option
is unconditionally selected in our Kconfig.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlbflush: Allow stride to be specified for __flush_tlb_range()
Will Deacon [Thu, 23 Aug 2018 18:26:21 +0000 (19:26 +0100)]
arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()

When we are unmapping intermediate page-table entries or huge pages, we
don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the
VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and
adjust our "just nuke the ASID" heuristic to take this into account.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Justify non-leaf invalidation in flush_tlb_range()
Will Deacon [Thu, 23 Aug 2018 18:08:15 +0000 (19:08 +0100)]
arm64: tlb: Justify non-leaf invalidation in flush_tlb_range()

Add a comment to explain why we can't get away with last-level
invalidation in flush_tlb_range()

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d()
Will Deacon [Wed, 22 Aug 2018 20:36:31 +0000 (21:36 +0100)]
arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d()

Now that our walk-cache invalidation routines imply a DSB before the
invalidation, we no longer need one when we are clearing an entry during
unmap.
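
A minimal sketch of the idea for the PMD case (assuming pmd_valid() is
built on the existing pte_valid()/pmd_pte() helpers): the DSB is only
needed when a valid entry is being installed, since the walk-cache
invalidation now provides the barrier for the clearing case:

  #define pmd_valid(pmd)          pte_valid(pmd_pte(pmd))

  static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
  {
          WRITE_ONCE(*pmdp, pmd);

          if (pmd_valid(pmd))
                  dsb(ishst);
  }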

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()
Will Deacon [Wed, 22 Aug 2018 20:40:30 +0000 (21:40 +0100)]
arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()

__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
writing the new table entry and therefore avoid the barrier prior to the
TLBI instruction.

In preparation for delaying our walk-cache invalidation on the unmap()
path, move the DSB into the TLB invalidation routines.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: tlb: Use last-level invalidation in flush_tlb_kernel_range()
Will Deacon [Wed, 22 Aug 2018 20:23:05 +0000 (21:23 +0100)]
arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range()

flush_tlb_kernel_range() is only ever used to invalidate last-level
entries, so we can restrict the scope of the TLB invalidation
instruction.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: uaccess: implement unsafe accessors
Julien Thierry [Thu, 6 Sep 2018 11:09:56 +0000 (12:09 +0100)]
arm64: uaccess: implement unsafe accessors

The current implementation of get/put_user_unsafe defaults to get/put_user,
which toggles PAN before each access, despite having been told by the caller
that multiple accesses to user memory were about to happen.

Provide implementations for user_access_begin/end to turn PAN off/on and
implement unsafe accessors that assume PAN was already turned off.

Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: dump: Use consistent capitalisation for page-table dumps
Will Deacon [Wed, 5 Sep 2018 14:12:27 +0000 (15:12 +0100)]
arm64: dump: Use consistent capitalisation for page-table dumps

Being consistent in our capitalisation for page-table dumps helps when
grepping for things like "end".

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64/lib: add accelerated crc32 routines
Ard Biesheuvel [Mon, 27 Aug 2018 11:02:44 +0000 (13:02 +0200)]
arm64/lib: add accelerated crc32 routines

Unlike crc32c(), which is wired up to the crypto API internally so the
optimal driver is selected based on the platform's capabilities,
crc32_le() is implemented as a library function using a slice-by-8 table
based C implementation. Even though few of the call sites may be
bottlenecks, calling a time variant implementation with a non-negligible
D-cache footprint is a bit of a waste, given that ARMv8.1 and up mandates
support for the CRC32 instructions that were optional in ARMv8.0, but are
already widely available, even on the Cortex-A53 based Raspberry Pi.

So implement routines that use these instructions if available, and fall
back to the existing generic routines otherwise. The selection is based
on alternatives patching.

Note that this unconditionally selects CONFIG_CRC32 as a builtin. Since
CRC32 is relied upon by core functionality such as CONFIG_OF_FLATTREE,
this just codifies the status quo.
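
A minimal illustration of the instruction-based path (byte-at-a-time for
clarity, and with an assumed function name; the real routines also use
the word/doubleword forms and alternatives patching for the fallback):

  static u32 crc32_le_armv8(u32 crc, const unsigned char *p, size_t len)
  {
          while (len--)
                  asm("crc32b %w0, %w0, %w1" : "+r" (crc) : "r" (*p++));
          return crc;
  }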

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoarm64: cpufeature: add feature for CRC32 instructions
Ard Biesheuvel [Mon, 27 Aug 2018 11:02:43 +0000 (13:02 +0200)]
arm64: cpufeature: add feature for CRC32 instructions

Add a CRC32 feature bit and wire it up to the CPU id register so we
will be able to use alternatives patching for CRC32 operations.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agolib/crc32: make core crc32() routines weak so they can be overridden
Ard Biesheuvel [Mon, 27 Aug 2018 11:02:42 +0000 (13:02 +0200)]
lib/crc32: make core crc32() routines weak so they can be overridden

Allow architectures to drop in accelerated CRC32 routines by making
the crc32_le/__crc32c_le entry points weak, and exposing non-weak
aliases for them that may be used by the accelerated versions as
fallbacks in case the instructions they rely upon are not available.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5 years agoMerge branch 'tlb/asm-generic' into aarch64/for-next/core
Will Deacon [Fri, 7 Sep 2018 17:44:41 +0000 (18:44 +0100)]
Merge branch 'tlb/asm-generic' into aarch64/for-next/core

As agreed on the list, merge in the core mmu_gather changes which allow
us to track the levels of page-table being cleared. We'll build on this
in our low-level flushing routines, and Nick and Peter also have plans
for other architectures.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoMAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION
Will Deacon [Mon, 3 Sep 2018 14:19:37 +0000 (15:19 +0100)]
MAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION

We recently had to debug a TLB invalidation problem on the munmap()
path, which was made more difficult than necessary because:

  (a) The MMU gather code had changed without people realising
  (b) Many people subtly misunderstood the operation of the MMU gather
      code and its interactions with RCU and arch-specific TLB invalidation
  (c) Untangling the intended behaviour involved educated guesswork and
      plenty of discussion

Hopefully, we can avoid getting into this mess again by designating a
cross-arch group of people to look after this code. It is not intended
that they will have a separate tree, but they at least provide a point
of contact for anybody working in this area and can co-ordinate any
proposed future changes to the internal API.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agomm/memory: Move mmu_gather and TLB invalidation code into its own file
Peter Zijlstra [Mon, 3 Sep 2018 14:07:36 +0000 (15:07 +0100)]
mm/memory: Move mmu_gather and TLB invalidation code into its own file

In preparation for maintaining the mmu_gather code as its own entity,
move the implementation out of memory.c and into its own file.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoasm-generic/tlb: Track which levels of the page tables have been cleared
Will Deacon [Thu, 23 Aug 2018 20:01:46 +0000 (21:01 +0100)]
asm-generic/tlb: Track which levels of the page tables have been cleared

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather than
iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for invalidation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoasm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Peter Zijlstra [Thu, 23 Aug 2018 19:27:25 +0000 (20:27 +0100)]
asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather

Some architectures require different TLB invalidation instructions
depending on whether only the last level of the page table is being
changed, or whether intermediate (directory) entries higher up the tree
are also changing.

Add a new bit to the flags bitfield in struct mmu_gather so that
architecture code can act accordingly when the intermediate levels are
being invalidated.
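
A minimal sketch of how such a flag might be consumed, written as
standalone C for illustration (the flag name, the hook and the two stub
invalidation routines are stand-ins, not the actual arch interfaces):

  #include <stdio.h>

  struct gather_flags {
      unsigned int fullmm       : 1;  /* whole address space is going away          */
      unsigned int freed_tables : 1;  /* intermediate (directory) levels were freed */
  };

  /* Stand-ins for arch-specific invalidation primitives. */
  static void flush_leaf_only(void)           { puts("leaf-only TLB invalidate");     }
  static void flush_leaf_and_walk_cache(void) { puts("leaf + walk-cache invalidate"); }

  /* Only pay for the heavier invalidation when directory entries were freed. */
  static void arch_tlb_flush(const struct gather_flags *f)
  {
      if (f->freed_tables || f->fullmm)
          flush_leaf_and_walk_cache();
      else
          flush_leaf_only();
  }

  int main(void)
  {
      struct gather_flags leaf_unmap = { 0 };
      struct gather_flags table_free = { .freed_tables = 1 };

      arch_tlb_flush(&leaf_unmap);  /* leaf-only         */
      arch_tlb_flush(&table_free);  /* leaf + walk cache */
      return 0;
  }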

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoasm-generic/tlb: Guard with #ifdef CONFIG_MMU
Will Deacon [Fri, 24 Aug 2018 12:28:28 +0000 (13:28 +0100)]
asm-generic/tlb: Guard with #ifdef CONFIG_MMU

The inner workings of the mmu_gather-based TLB invalidation mechanism
are not relevant to nommu configurations, so guard them with an #ifdef.
This allows us to implement future functions using static inlines
without breaking the build.

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoLinux 4.19-rc2 v4.19-rc2
Linus Torvalds [Sun, 2 Sep 2018 21:37:30 +0000 (14:37 -0700)]
Linux 4.19-rc2

5 years agoMerge tag 'devicetree-fixes-for-4.19' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 2 Sep 2018 17:56:01 +0000 (10:56 -0700)]
Merge tag 'devicetree-fixes-for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull devicetree updates from Rob Herring:
 "A couple of new helper functions in preparation for some tree wide
  clean-ups.

  I'm sending these new helpers now for rc2 in order to simplify the
  dependencies on subsequent cleanups across the tree in 4.20"

* tag 'devicetree-fixes-for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  of: Add device_type access helper functions
  of: add node name compare helper functions
  of: add helper to lookup compatible child node

5 years agoMerge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Linus Torvalds [Sun, 2 Sep 2018 17:44:28 +0000 (10:44 -0700)]
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "First batch of fixes post-merge window:

   - A handful of devicetree changes for i.MX2{3,8} to change over to
     new panel bindings. The platforms were moved from legacy
     framebuffers to DRM and some development board panels hadn't yet
     been converted.

   - OMAP fixes related to ti-sysc driver conversion fallout, fixing
     some register offsets, no_console_suspend fixes, etc.

   - Droid4 changes to fix flaky eMMC probing and vibrator DTS mismerge.

   - Fixed 0755->0644 permissions on a newly added file.

   - Defconfig changes to make ARM Versatile more useful with QEMU
     (helps testing).

   - Enable defconfig options for new TI SoC platform that was merged
     this window (AM6)"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  arm64: defconfig: Enable TI's AM6 SoC platform
  ARM: defconfig: Update the ARM Versatile defconfig
  ARM: dts: omap4-droid4: Fix emmc errors seen on some devices
  ARM: dts: Fix file permission for am335x-osd3358-sm-red.dts
  ARM: imx_v6_v7_defconfig: Select CONFIG_DRM_PANEL_SEIKO_43WVF1G
  ARM: mxs_defconfig: Select CONFIG_DRM_PANEL_SEIKO_43WVF1G
  ARM: dts: imx23-evk: Convert to the new display bindings
  ARM: dts: imx23-evk: Move regulators outside simple-bus
  ARM: dts: imx28-evk: Convert to the new display bindings
  ARM: dts: imx28-evk: Move regulators outside simple-bus
  Revert "ARM: dts: imx7d: Invert legacy PCI irq mapping"
  arm: dts: am4372: setup rtc as system-power-controller
  ARM: dts: omap4-droid4: fix vibrations on Droid 4
  bus: ti-sysc: Fix no_console_suspend handling
  bus: ti-sysc: Fix module register ioremap for larger offsets
  ARM: OMAP2+: Fix module address for modules using mpu_rt_idx
  ARM: OMAP2+: Fix null hwmod for ti-sysc debug

5 years agoMerge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 2 Sep 2018 17:11:30 +0000 (10:11 -0700)]
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "Speculation:

   - Make the microcode check more robust

   - Make the L1TF memory limit depend on the internal cache physical
     address space and not on the CPUID advertised physical address
     space, which might be significantly smaller. This avoids disabling
     L1TF on machines which utilize the full physical address space.

   - Fix the GDT mapping for EFI calls on 32bit PTI

   - Fix the MCE nospec implementation to prevent #GP

  Fixes and robustness:

   - Use the proper operand order for LSL in the VDSO

   - Prevent NMI uaccess race against CR3 switching

   - Add a lockdep check to verify that text_mutex is held in
     text_poke() functions

   - Repair the fallout of giving native_restore_fl() a prototype

   - Prevent kernel memory dumps based on usermode RIP

   - Wipe KASAN shadow stack before rewinding the stack to prevent false
     positives

   - Move the ASM GOTO enforcement to the actual build stage to allow
     user API header extraction without a compiler

   - Fix a section mismatch introduced by the on demand VDSO mapping
     change

  Miscellaneous:

   - Trivial typo, GCC quirk removal and CC_SET/OUT() cleanups"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/pti: Fix section mismatch warning/error
  x86/vdso: Fix lsl operand order
  x86/mce: Fix set_mce_nospec() to avoid #GP fault
  x86/efi: Load fixmap GDT in efi_call_phys_epilog()
  x86/nmi: Fix NMI uaccess race against CR3 switching
  x86: Allow generating user-space headers without a compiler
  x86/dumpstack: Don't dump kernel memory based on usermode RIP
  x86/asm: Use CC_SET()/CC_OUT() in __gen_sigismember()
  x86/alternatives: Lockdep-enforce text_mutex in text_poke*()
  x86/entry/64: Wipe KASAN stack shadow before rewind_stack_do_exit()
  x86/irqflags: Mark native_restore_fl extern inline
  x86/build: Remove jump label quirk for GCC older than 4.5.2
  x86/Kconfig: Fix trivial typo
  x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+
  x86/spectre: Add missing family 6 check to microcode check

5 years agoMerge branch 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 2 Sep 2018 17:09:35 +0000 (10:09 -0700)]
Merge branch 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull CPU hotplug fix from Thomas Gleixner:
 "Remove the stale skip_onerr member from the hotplug states"

* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  cpu/hotplug: Remove skip_onerr field from cpuhp_step structure

5 years agoMerge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 2 Sep 2018 16:41:45 +0000 (09:41 -0700)]
Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core fixes from Thomas Gleixner:
 "A small set of updates for core code:

   - Prevent tracing in functions which are called from trace patching
     via stop_machine() to prevent executing half patched function trace
     entries.

   - Remove old GCC workarounds

   - Remove pointless includes of notifier.h"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Remove workaround for unreachable warnings from old GCC
  notifier: Remove notifier header file wherever not used
  watchdog: Mark watchdog touch functions as notrace

5 years agox86/pti: Fix section mismatch warning/error
Randy Dunlap [Sun, 2 Sep 2018 04:01:28 +0000 (21:01 -0700)]
x86/pti: Fix section mismatch warning/error

Fix the section mismatch warning in arch/x86/mm/pti.c:

WARNING: vmlinux.o(.text+0x6972a): Section mismatch in reference from the function pti_clone_pgtable() to the function .init.text:pti_user_pagetable_walk_pte()
The function pti_clone_pgtable() references
the function __init pti_user_pagetable_walk_pte().
This is often because pti_clone_pgtable lacks a __init
annotation or the annotation of pti_user_pagetable_walk_pte is wrong.
FATAL: modpost: Section mismatches detected.
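
For readers unfamiliar with this class of warning, here is a generic,
self-contained sketch of the pattern modpost is complaining about (not the
actual patch; the function names and the section marker are made up): a
function that stays resident references one placed in the discarded init
section, so the annotations on the two sides have to be brought into
agreement.

  #include <stdio.h>

  /* Stand-in for the kernel's __init marker: boot-only code is placed in a
   * separate section that the kernel frees once boot has finished. */
  #define boot_only __attribute__((section(".boot.text")))

  static void boot_only setup_thing(void)
  {
      puts("boot-time setup");
  }

  /* In a kernel build, a resident .text function referencing an .init.text
   * function is exactly what modpost reports as a section mismatch.  The
   * fix is to drop the init annotation from the callee, or to mark the
   * caller init-only if it really never runs after boot. */
  static void runtime_path(void)
  {
      setup_thing();
  }

  int main(void)
  {
      runtime_path();
      return 0;
  }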

Fixes: 85900ea51577 ("x86/pti: Map the vsyscall page if needed")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/43a6d6a3-d69d-5eda-da09-0b1c88215a2a@infradead.org
5 years agoMerge tag 'omap-for-v4.19/fixes-v2-signed' of git://git.kernel.org/pub/scm/linux...
Olof Johansson [Sun, 2 Sep 2018 01:22:19 +0000 (18:22 -0700)]
Merge tag 'omap-for-v4.19/fixes-v2-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

Fixes for omap variants against v4.19-rc1

These are mostly fixes related to using ti-sysc interconnect target module
driver for accessing right register offsets for sgx and cpsw and for
no_console_suspend regression.

There is also a droid4 emmc fix where emmc may not get detected for some
models, and vibrator dts mismerge fix.

And we have a file permission fix for am335x-osd3358-sm-red.dts that
just got added. And we must tag RTC as system-power-controller for
am437x for PMIC to shut down during poweroff.

* tag 'omap-for-v4.19/fixes-v2-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: dts: omap4-droid4: Fix emmc errors seen on some devices
  ARM: dts: Fix file permission for am335x-osd3358-sm-red.dts
  arm: dts: am4372: setup rtc as system-power-controller
  ARM: dts: omap4-droid4: fix vibrations on Droid 4
  bus: ti-sysc: Fix no_console_suspend handling
  bus: ti-sysc: Fix module register ioremap for larger offsets
  ARM: OMAP2+: Fix module address for modules using mpu_rt_idx
  ARM: OMAP2+: Fix null hwmod for ti-sysc debug

Signed-off-by: Olof Johansson <olof@lixom.net>
5 years agox86/vdso: Fix lsl operand order
Samuel Neves [Sat, 1 Sep 2018 20:14:52 +0000 (21:14 +0100)]
x86/vdso: Fix lsl operand order

In the __getcpu function, the lsl instruction has its source and
destination operands in the wrong order. Luckily, the compiler tends to
choose %eax for both variables, so it has been working so far.
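
As a reminder of the AT&T operand order involved, here is a standalone
sketch (x86-64 only; this is not the vDSO code, and the 0x33 selector is
merely an illustrative value for the usual user code segment): LSL takes
the segment selector as its source and writes the segment limit into the
destination.

  #include <stdio.h>

  static unsigned int segment_limit(unsigned int selector)
  {
      unsigned int limit = 0;

      /* Correct order: the selector is the source, the limit the destination.
       * Swapping the two only appears to work if the compiler happens to pick
       * the same register for both operands. */
      asm volatile("lsl %[sel], %[lim]"
                   : [lim] "=r" (limit)
                   : [sel] "r" (selector)
                   : "cc");
      return limit;
  }

  int main(void)
  {
      printf("limit = %#x\n", segment_limit(0x33));
      return 0;
  }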

Fixes: a582c540ac1b ("x86/vdso: Use RDPID in preference to LSL when available")
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180901201452.27828-1-sneves@dei.uc.pt
5 years agoMerge tag 'linux-watchdog-4.19-rc2' of git://www.linux-watchdog.org/linux-watchdog
Linus Torvalds [Sat, 1 Sep 2018 20:17:15 +0000 (13:17 -0700)]
Merge tag 'linux-watchdog-4.19-rc2' of git://www.linux-watchdog.org/linux-watchdog

Pull watchdog fixlet from Wim Van Sebroeck:
 "Document support for r8a774a1"

* tag 'linux-watchdog-4.19-rc2' of git://www.linux-watchdog.org/linux-watchdog:
  dt-bindings: watchdog: renesas-wdt: Document r8a774a1 support

5 years agoMerge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 1 Sep 2018 20:03:32 +0000 (13:03 -0700)]
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux

Pull clk fixes from Stephen Boyd:
 "Two small fixes, one for the x86 Stoney SoC to get a more accurate clk
  frequency and the other to fix a bad allocation in the Nuvoton NPCM7XX
  driver"

* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: x86: Set default parent to 48Mhz
  clk: npcm7xx: fix memory allocation

5 years agox86/mce: Fix set_mce_nospec() to avoid #GP fault
LuckTony [Fri, 31 Aug 2018 16:55:06 +0000 (09:55 -0700)]
x86/mce: Fix set_mce_nospec() to avoid #GP fault

The trick of flipping bit 63 to avoid loading the address of the 1:1
mapping of the poisoned page while the 1:1 map is updated works when
unmapping the page, but it falls down horribly when attempting to directly
set the page as uncacheable.

The problem is that when the cache mode is changed to uncacheable, the page
needs to be flushed from the cache first. But the decoy address is
non-canonical because bit 63 is flipped, and the CLFLUSH instruction throws
a #GP fault.

Add code to change_page_attr_set_clr() to fix the address before calling
flush.
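
For illustration, sign-extending from the top implemented virtual-address
bit is one way to turn such a decoy back into a canonical address before it
is handed to CLFLUSH. The snippet below is a standalone sketch under the
assumption of a 4-level (48-bit VA) configuration, not the kernel change
itself:

  #include <stdio.h>
  #include <stdint.h>

  /* Sign-extend from bit 47 so that bits 63..48 again mirror bit 47.
   * Relies on arithmetic right shift of signed values, as implemented by
   * GCC and Clang. */
  static uint64_t make_canonical(uint64_t addr)
  {
      return (uint64_t)((int64_t)(addr << 16) >> 16);
  }

  int main(void)
  {
      /* A direct-map style address with bit 63 cleared (the "decoy"). */
      uint64_t decoy = 0x7fff880012345000ULL;

      printf("decoy     = %#018llx\n", (unsigned long long)decoy);
      printf("canonical = %#018llx\n", (unsigned long long)make_canonical(decoy));
      return 0;
  }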

Fixes: 284ce4011ba6 ("x86/memory_failure: Introduce {set, clear}_mce_nospec()")
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Anvin <hpa@zytor.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Link: https://lkml.kernel.org/r/20180831165506.GA9605@agluck-desk
5 years agoMerge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Linus Torvalds [Fri, 31 Aug 2018 16:20:30 +0000 (09:20 -0700)]
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Will Deacon:
 "A few arm64 fixes came in this week, specifically fixing some nasty
  truncation of return values from firmware calls and resolving a
  VM_BUG_ON due to accessing uninitialised struct pages corresponding to
  NOMAP pages.

  Summary:

   - Fix typos in SVE documentation

   - Fix type-checking and implicit truncation for SMCCC calls

   - Force CONFIG_HOLES_IN_ZONE=y so that SLAB doesn't fall over NOMAP
     regions"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: mm: always enable CONFIG_HOLES_IN_ZONE
  arm/arm64: smccc-1.1: Handle function result as parameters
  arm/arm64: smccc-1.1: Make return values unsigned long
  Documentation/arm64/sve: Couple of improvements and typos

5 years agox86/efi: Load fixmap GDT in efi_call_phys_epilog()
Joerg Roedel [Fri, 31 Aug 2018 08:05:38 +0000 (10:05 +0200)]
x86/efi: Load fixmap GDT in efi_call_phys_epilog()

When PTI is enabled on x86-32 the kernel uses the GDT mapped in the fixmap
for the simple reason that this address is also mapped for user-space.

The efi_call_phys_prolog()/efi_call_phys_epilog() wrappers change the GDT
to call EFI runtime services and switch back to the kernel GDT when they
return. But the switch-back uses the writable GDT, not the fixmap GDT.

When that happens and the CPU returns to user-space, it switches to the
user %cr3 and tries to restore the user segment registers. This fails
because the writable GDT is not mapped in the user page-table, and without
a GDT the fault handlers can't be launched either. The result is a triple
fault and a reboot of the machine.

Fix that by restoring the GDT back to the fixmap GDT which is also mapped
in the user page-table.

Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: hpa@zytor.com
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/1535702738-10971-1-git-send-email-joro@8bytes.org
5 years agoMerge tag 'for-linus-4.19b-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Fri, 31 Aug 2018 15:45:16 +0000 (08:45 -0700)]
Merge tag 'for-linus-4.19b-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip

Pull xen fixes from Juergen Gross:

 - minor cleanup avoiding a warning when building with new gcc

 - a patch to add a new sysfs node for Xen frontend/backend drivers to
   make it easier to obtain the state of a pv device

 - two fixes for 32-bit pv-guests to avoid intermediate L1TF vulnerable
   PTEs

* tag 'for-linus-4.19b-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  x86/xen: remove redundant variable save_pud
  xen: export device state to sysfs
  x86/pae: use 64 bit atomic xchg function in native_ptep_get_and_clear
  x86/xen: don't write ptes directly in 32-bit PV guests

5 years agoMerge tag 'm68k-for-v4.19-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 31 Aug 2018 15:42:46 +0000 (08:42 -0700)]
Merge tag 'm68k-for-v4.19-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k

Pull m68k fix from Geert Uytterhoeven:
 "Just a single fix for a bug introduced during the merge window: fix
  wrong date and time on PMU-based Macs"

* tag 'm68k-for-v4.19-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
  m68k/mac: Use correct PMU response format

5 years agoMerge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa...
Linus Torvalds [Fri, 31 Aug 2018 15:38:53 +0000 (08:38 -0700)]
Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux

Pull i2c fixes from Wolfram Sang:

 - regression fixes for i801 and designware

 - better API and leak fix for releasing DMA safe buffers

 - better greppable strings for the bitbang algorithm

* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: sh_mobile: fix leak when using DMA bounce buffer
  i2c: sh_mobile: define start_ch() void as it only returns 0 anyhow
  i2c: refactor function to release a DMA safe buffer
  i2c: algos: bit: make the error messages grepable
  i2c: designware: Re-init controllers with pm_disabled set on resume
  i2c: i801: Allow ACPI AML access I/O ports not reserved for SMBus

5 years agox86/nmi: Fix NMI uaccess race against CR3 switching
Andy Lutomirski [Wed, 29 Aug 2018 15:47:18 +0000 (08:47 -0700)]
x86/nmi: Fix NMI uaccess race against CR3 switching

An NMI can hit in the middle of context switching or in the middle of
switch_mm_irqs_off().  In either case, CR3 might not match current->mm,
which could cause copy_from_user_nmi() and friends to read the wrong
memory.

Fix it by adding a new nmi_uaccess_okay() helper and checking it in
copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
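
A self-contained model of the check in plain C (the per-CPU state, the
helper name and the copy routine below are stand-ins modelled on the
description above, not the kernel implementation):

  #include <stdio.h>
  #include <string.h>
  #include <stdbool.h>

  /* Model of the state an NMI can observe mid-switch: which address space
   * the CPU has loaded versus which one the interrupted task owns. */
  struct mm { const char *name; };

  static struct mm *cpu_loaded_mm;    /* what CR3 currently points at   */
  static struct mm *current_task_mm;  /* what current->mm says it wants */

  static bool nmi_uaccess_okay(void)
  {
      /* If the two disagree we are in the middle of a switch and any
       * user access would go through the wrong page tables. */
      return cpu_loaded_mm == current_task_mm;
  }

  static long copy_from_user_nmi(void *dst, const void *src, size_t n)
  {
      if (!nmi_uaccess_okay())
          return (long)n;        /* bail out: report that nothing was copied */
      memcpy(dst, src, n);       /* stand-in for the real user copy */
      return 0;
  }

  int main(void)
  {
      struct mm a = { "task A" }, b = { "task B" };
      char src[8] = "hello", dst[8] = "";

      cpu_loaded_mm = &a;
      current_task_mm = &a;
      printf("matched:    left=%ld\n", copy_from_user_nmi(dst, src, sizeof(src)));

      current_task_mm = &b;      /* NMI landed mid context switch */
      printf("mismatched: left=%ld\n", copy_from_user_nmi(dst, src, sizeof(src)));
      return 0;
  }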

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Rik van Riel <riel@surriel.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jann Horn <jannh@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org
5 years agox86: Allow generating user-space headers without a compiler
Ben Hutchings [Wed, 29 Aug 2018 19:43:17 +0000 (20:43 +0100)]
x86: Allow generating user-space headers without a compiler

When bootstrapping an architecture, it's usual to generate the kernel's
user-space headers (make headers_install) before building a compiler.  Move
the compiler check (for asm goto support) to the archprepare target so that
it is only done when building code for the target.

Fixes: e501ce957a78 ("x86: Force asm-goto")
Reported-by: Helmut Grohne <helmutg@debian.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180829194317.GA4765@decadent.org.uk