OSDN Git Service

tomoyo/tomoyo-test1.git
4 years ago  arm64: ftrace: fix ifdeffery
Mark Rutland [Fri, 6 Dec 2019 13:01:29 +0000 (13:01 +0000)]
arm64: ftrace: fix ifdeffery

When I tweaked the ftrace entry assembly in commit:

  3b23e4991fb66f6d ("arm64: implement ftrace with regs")

... my ifdeffery tweaks left ftrace_graph_caller undefined for
CONFIG_DYNAMIC_FTRACE && CONFIG_FUNCTION_GRAPH_TRACER when ftrace is
based on mcount.

The kbuild test robot reported that this issue is detected at link time:

| arch/arm64/kernel/entry-ftrace.o: In function `skip_ftrace_call':
| arch/arm64/kernel/entry-ftrace.S:238: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:238:(.text+0x3c): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
| `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243:(.text+0x54): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
| `ftrace_graph_caller'

This patch fixes the ifdeffery so that the mcount version of
ftrace_graph_caller doesn't depend on CONFIG_DYNAMIC_FTRACE. At the same
time, a redundant #else is removed from the ifdeffery for the
patchable-function-entry version of ftrace_graph_caller.
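
As an illustrative sketch (not the literal diff), the intended shape of
the mcount-side ifdeffery is roughly:

| #ifndef CONFIG_DYNAMIC_FTRACE_WITH_REGS
| /* mcount-based ftrace entry points, e.g. _mcount, ftrace_caller */
| #ifdef CONFIG_FUNCTION_GRAPH_TRACER
| /*
|  * ftrace_graph_caller must be defined whenever the graph tracer is
|  * enabled, independent of CONFIG_DYNAMIC_FTRACE, since
|  * skip_ftrace_call branches to it in both configurations.
|  */
| #endif
| #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */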

Fixes: 3b23e4991fb66f6d ("arm64: implement ftrace with regs")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Torsten Duwe <duwe@lst.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: KVM: Invoke compute_layout() before alternatives are applied
Sebastian Andrzej Siewior [Thu, 28 Nov 2019 19:58:05 +0000 (20:58 +0100)]
arm64: KVM: Invoke compute_layout() before alternatives are applied

compute_layout() is invoked as part of an alternative fixup under
stop_machine(). This function invokes get_random_long(), which on -RT
acquires a sleeping lock that cannot be taken in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before
stop_machine() applies the alternatives. Add a __init prefix to
kvm_compute_layout() because the caller has it, too (and so the code can be
discarded after boot).
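
A minimal sketch of the intended ordering at the call site (illustrative;
the surrounding function is elided):

| /* sketch: hoist the randomisation out of the stop_machine() path */
| kvm_compute_layout();     /* may call get_random_long(); can sleep on -RT */
| apply_alternatives_all(); /* applies the fixups under stop_machine() */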

Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: Validate tagged addresses in access_ok() called from kernel threads
Catalin Marinas [Thu, 5 Dec 2019 13:57:36 +0000 (13:57 +0000)]
arm64: Validate tagged addresses in access_ok() called from kernel threads

__range_ok(), invoked from access_ok(), clears the tag of the user
address only if CONFIG_ARM64_TAGGED_ADDR_ABI is enabled and the thread
opted in to the relaxed ABI. The latter sets the TIF_TAGGED_ADDR thread
flag. In the case of asynchronous I/O (e.g. io_submit()), access_ok()
may be called from a kernel thread. Since kernel threads
don't have TIF_TAGGED_ADDR set, access_ok() will fail for valid tagged
user addresses. Example from the ffs_user_copy_worker() thread:

use_mm(io_data->mm);
ret = ffs_copy_to_iter(io_data->buf, ret, &io_data->data);
unuse_mm(io_data->mm);

Relax the __range_ok() check to always untag the user address if called
in the context of a kernel thread. The user pointers would have already
been checked via aio_setup_rw() -> import_{single_range,iovec}() at the
time of the asynchronous I/O request.
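
A minimal sketch of the relaxed untagging condition (the exact form it
takes inside __range_ok() is illustrative here):

| /* untag for opted-in tasks, and for kernel threads acting on behalf
|  * of user I/O, which never have TIF_TAGGED_ADDR set */
| if (IS_ENABLED(CONFIG_ARM64_TAGGED_ADDR_ABI) &&
|     ((current->flags & PF_KTHREAD) ||
|      test_thread_flag(TIF_TAGGED_ADDR)))
|         addr = untagged_addr(addr);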

Fixes: 63f0c6037965 ("arm64: Introduce prctl() options to control the tagged user addresses ABI")
Cc: <stable@vger.kernel.org> # 5.4.x-
Cc: Will Deacon <will@kernel.org>
Reported-by: Evgenii Stepanov <eugenis@google.com>
Tested-by: Evgenii Stepanov <eugenis@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: mm: Fix column alignment for UXN in kernel_page_tables
Mark Brown [Thu, 21 Nov 2019 13:51:32 +0000 (13:51 +0000)]
arm64: mm: Fix column alignment for UXN in kernel_page_tables

UXN is the only individual PTE bit, other than the PTE_ATTRINDX_MASK ones,
which doesn't have both a set and a clear value provided, meaning that the
columns in the table won't all be aligned. The PTE_ATTRINDX_MASK values
are all mutually exclusive and longer, so they are listed last to make a
single final column for those values. Ensure everything is aligned by
providing a clear value for UXN.
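
For illustration, the fix amounts to giving the UXN entry in ptdump's
prot_bits table a clear value (a sketch of the shape; the padding string
is the assumption here):

| {
|         .mask   = PTE_UXN,
|         .val    = PTE_UXN,
|         .set    = "UXN",
|         .clr    = "   ",        /* pad the column when UXN is clear */
| },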

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: insn: consistently handle exit text
Mark Rutland [Mon, 2 Dec 2019 16:11:07 +0000 (16:11 +0000)]
arm64: insn: consistently handle exit text

A kernel built with KASAN && FTRACE_WITH_REGS && !MODULES, produces a
boot-time splat in the bowels of ftrace:

| [    0.000000] ftrace: allocating 32281 entries in 127 pages
| [    0.000000] ------------[ cut here ]------------
| [    0.000000] WARNING: CPU: 0 PID: 0 at kernel/trace/ftrace.c:2019 ftrace_bug+0x27c/0x328
| [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-rc3-00008-g7f08ae53a7e3 #13
| [    0.000000] Hardware name: linux,dummy-virt (DT)
| [    0.000000] pstate: 60000085 (nZCv daIf -PAN -UAO)
| [    0.000000] pc : ftrace_bug+0x27c/0x328
| [    0.000000] lr : ftrace_init+0x640/0x6cc
| [    0.000000] sp : ffffa000120e7e00
| [    0.000000] x29: ffffa000120e7e00 x28: ffff00006ac01b10
| [    0.000000] x27: ffff00006ac898c0 x26: dfffa00000000000
| [    0.000000] x25: ffffa000120ef290 x24: ffffa0001216df40
| [    0.000000] x23: 000000000000018d x22: ffffa0001244c700
| [    0.000000] x21: ffffa00011bf393c x20: ffff00006ac898c0
| [    0.000000] x19: 00000000ffffffff x18: 0000000000001584
| [    0.000000] x17: 0000000000001540 x16: 0000000000000007
| [    0.000000] x15: 0000000000000000 x14: ffffa00010432770
| [    0.000000] x13: ffff940002483519 x12: 1ffff40002483518
| [    0.000000] x11: 1ffff40002483518 x10: ffff940002483518
| [    0.000000] x9 : dfffa00000000000 x8 : 0000000000000001
| [    0.000000] x7 : ffff940002483519 x6 : ffffa0001241a8c0
| [    0.000000] x5 : ffff940002483519 x4 : ffff940002483519
| [    0.000000] x3 : ffffa00011780870 x2 : 0000000000000001
| [    0.000000] x1 : 1fffe0000d591318 x0 : 0000000000000000
| [    0.000000] Call trace:
| [    0.000000]  ftrace_bug+0x27c/0x328
| [    0.000000]  ftrace_init+0x640/0x6cc
| [    0.000000]  start_kernel+0x27c/0x654
| [    0.000000] random: get_random_bytes called from print_oops_end_marker+0x30/0x60 with crng_init=0
| [    0.000000] ---[ end trace 0000000000000000 ]---
| [    0.000000] ftrace faulted on writing
| [    0.000000] [<ffffa00011bf393c>] _GLOBAL__sub_D_65535_0___tracepoint_initcall_level+0x4/0x28
| [    0.000000] Initializing ftrace call sites
| [    0.000000] ftrace record flags: 0
| [    0.000000]  (0)
| [    0.000000]  expected tramp: ffffa000100b3344

This is due to an unfortunate combination of several factors.

Building with KASAN results in the compiler generating anonymous
functions to register/unregister global variables against the shadow
memory. These functions are placed in .text.startup/.text.exit, and
given mangled names like _GLOBAL__sub_{I,D}_65535_0_$OTHER_SYMBOL. The
kernel linker script places these in .init.text and .exit.text
respectively, which are both discarded at runtime as part of initmem.

Building with FTRACE_WITH_REGS uses -fpatchable-function-entry=2, which
also instruments KASAN's anonymous functions. When these are discarded
with the rest of initmem, ftrace removes dangling references to these
call sites.

Building without MODULES implicitly disables STRICT_MODULE_RWX, and
causes arm64's patch_map() function to treat any !core_kernel_text()
symbol as something that can be modified in-place. As core_kernel_text()
is only true for .text and .init.text, with the latter depending on
system_state < SYSTEM_RUNNING, we'll treat .exit.text as something that
can be patched in-place. However, .exit.text is mapped read-only.

Hence in this configuration the ftrace init code blows up while trying
to patch one of the functions generated by KASAN.

We could try to filter out the call sites in .exit.text rather than
initializing them, but this would be inconsistent with how we handle
.init.text, and requires hooking into core bits of ftrace. The behaviour
of patch_map() is also inconsistent today, so instead let's clean that
up and have it consistently handle .exit.text.

This patch teaches patch_map() to handle .exit.text at init time,
preventing the boot-time splat above. The flow of patch_map() is
reworked to make the logic clearer and minimize redundant
conditionality.
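
A sketch of the reworked flow (simplified; the fixmap mapping itself is
elided):

| static bool is_exit_text(unsigned long addr)
| {
|         /* discarded with init text/data */
|         return system_state < SYSTEM_RUNNING &&
|                 addr >= (unsigned long)__exittext_begin &&
|                 addr <  (unsigned long)__exittext_end;
| }
|
| static void *patch_map(void *addr, int fixmap)
| {
|         unsigned long uintaddr = (uintptr_t)addr;
|         bool image = core_kernel_text(uintaddr) || is_exit_text(uintaddr);
|         struct page *page;
|
|         if (image)
|                 page = phys_to_page(__pa_symbol(addr));
|         else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
|                 page = vmalloc_to_page(addr);
|         else
|                 return addr;
|
|         /* ... map 'page' via 'fixmap' and return the writable alias ... */
| }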

Fixes: 3b23e4991fb66f6d ("arm64: implement ftrace with regs")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: mm: Fix initialisation of DMA zones on non-NUMA systems
Will Deacon [Tue, 3 Dec 2019 12:10:13 +0000 (12:10 +0000)]
arm64: mm: Fix initialisation of DMA zones on non-NUMA systems

John reports that the recently merged commit 1a8e1cef7603 ("arm64: use
both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:

  | Booting Linux on physical CPU 0x0000000000 [0x517f803c]
  | Linux version 5.4.0-mainline-10675-g957a03b9e38f
  | Machine model: Thundercomm Dragonboard 845c
  | [...]
  | Built 1 zonelists, mobility grouping on.  Total pages: -188245
  | Kernel command line: earlycon
  | firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c
  | init=/init androidboot.boot_devices=soc/1d84000.ufshc
  | printk.devkmsg=on buildvariant=userdebug root=/dev/sda2
  | androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c
  | androidboot.baseband=sda
  | msm_drm.dsi_display0=dsi_lt9611_1080_video_display:
  | androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
  |
  | <hangs indefinitely here>

This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle
memblocks that fall entirely within the ZONE_DMA region and erroneously ends up
trying to add a negatively-sized region into the following ZONE_DMA32, which is
later interpreted as a large unsigned region by the core MM code.

Rework the non-NUMA implementation of zone_sizes_init() so that the start
address of the memblock being processed is adjusted according to the end of the
previous zone, which is then range-checked before updating the hole information
of subsequent zones.
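
The shape of the fix, as a sketch (zone_start_pfn/zone_end_pfn stand in
for the per-zone bounds; the real code walks ZONE_DMA, ZONE_DMA32 and
ZONE_NORMAL in turn):

| /* sketch: clamp each memblock to the zone being accounted */
| unsigned long start = max(zone_start_pfn,
|                           memblock_region_memory_base_pfn(reg));
| unsigned long end   = min(zone_end_pfn,
|                           memblock_region_memory_end_pfn(reg));
|
| if (start < end)                        /* range-check before updating */
|         zhole_size[zone] -= end - start;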

Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com
Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: Kconfig: add a choice for endianness
Anders Roxell [Wed, 13 Nov 2019 09:26:52 +0000 (10:26 +0100)]
arm64: Kconfig: add a choice for endianness

When building allmodconfig with
KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig,
CONFIG_CPU_BIG_ENDIAN gets enabled, which tends not to be what most
people want. Another concern that has come up is that ACPI isn't built
for an allmodconfig kernel today, since ACPI also depends on
!CPU_BIG_ENDIAN.

Rework so that we introduce a 'choice' and default the choice to
CPU_LITTLE_ENDIAN. That means that when we build an allmodconfig kernel,
it will default to CPU_LITTLE_ENDIAN, which is what most people tend to
want.

Reviewed-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fix spelling mistake "contiguos" -> "contiguous"
Colin Ian King [Mon, 11 Nov 2019 09:12:36 +0000 (09:12 +0000)]
kselftest: arm64: fix spelling mistake "contiguos" -> "contiguous"

There is a spelling mistake in an error message literal string. Fix it.

Fixes: f96bf4340316 ("kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE
Anders Roxell [Mon, 11 Nov 2019 08:59:56 +0000 (09:59 +0100)]
arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE

When building allmodconfig with
KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig,
CONFIG_CMDLINE_FORCE gets enabled, which forces the user to pass the
full cmdline via CONFIG_CMDLINE="...".

Rework so that CONFIG_CMDLINE_FORCE can only be set if CONFIG_CMDLINE is
set to something other than an empty string.

Suggested-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  MAINTAINERS: Add arm64 selftests to the ARM64 PORT entry
Catalin Marinas [Fri, 8 Nov 2019 14:46:54 +0000 (14:46 +0000)]
MAINTAINERS: Add arm64 selftests to the ARM64 PORT entry

Since these are tests specific to the arm64 architecture, it makes sense
for the arm64 maintainers to gatekeep the corresponding changes.

Cc: Shuah Khan <shuah@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup', 'for...
Catalin Marinas [Fri, 8 Nov 2019 17:46:11 +0000 (17:46 +0000)]
Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup', 'for-next/zone-dma', 'for-next/relax-icc_pmr_el1-sync', 'for-next/double-page-fault', 'for-next/misc', 'for-next/kselftest-arm64-signal' and 'for-next/kaslr-diagnostics' into for-next/core

* for-next/elf-hwcap-docs:
  : Update the arm64 ELF HWCAP documentation
  docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s]
  docs/arm64: cpu-feature-registers: Documents missing visible fields
  docs/arm64: elf_hwcaps: Document HWCAP_SB
  docs/arm64: elf_hwcaps: sort the HWCAP{, 2} documentation by ascending value

* for-next/smccc-conduit-cleanup:
  : SMC calling convention conduit clean-up
  firmware: arm_sdei: use common SMCCC_CONDUIT_*
  firmware/psci: use common SMCCC_CONDUIT_*
  arm: spectre-v2: use arm_smccc_1_1_get_conduit()
  arm64: errata: use arm_smccc_1_1_get_conduit()
  arm/arm64: smccc/psci: add arm_smccc_1_1_get_conduit()

* for-next/zone-dma:
  : Reintroduction of ZONE_DMA for Raspberry Pi 4 support
  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
  dma/direct: turn ARCH_ZONE_DMA_BITS into a variable
  arm64: Make arm64_dma32_phys_limit static
  arm64: mm: Fix unused variable warning in zone_sizes_init
  mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'
  arm64: use both ZONE_DMA and ZONE_DMA32
  arm64: rename variables used to calculate ZONE_DMA32's size
  arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()

* for-next/relax-icc_pmr_el1-sync:
  : Relax ICC_PMR_EL1 (GICv3) accesses when ICC_CTLR_EL1.PMHE is clear
  arm64: Document ICC_CTLR_EL3.PMHE setting requirements
  arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear

* for-next/double-page-fault:
  : Avoid a double page fault in __copy_from_user_inatomic() if hw does not support auto Access Flag
  mm: fix double page fault on arm64 if PTE_AF is cleared
  x86/mm: implement arch_faults_on_old_pte() stub on x86
  arm64: mm: implement arch_faults_on_old_pte() on arm64
  arm64: cpufeature: introduce helper cpu_has_hw_af()

* for-next/misc:
  : Various fixes and clean-ups
  arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist
  arm64: mm: Remove MAX_USER_VA_BITS definition
  arm64: mm: simplify the page end calculation in __create_pgd_mapping()
  arm64: print additional fault message when executing non-exec memory
  arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()
  arm64: pgtable: Correct typo in comment
  arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1
  arm64: cpufeature: Fix typos in comment
  arm64/mm: Poison initmem while freeing with free_reserved_area()
  arm64: use generic free_initrd_mem()
  arm64: simplify syscall wrapper ifdeffery

* for-next/kselftest-arm64-signal:
  : arm64-specific kselftest support with signal-related test-cases
  kselftest: arm64: fake_sigreturn_misaligned_sp
  kselftest: arm64: fake_sigreturn_bad_size
  kselftest: arm64: fake_sigreturn_duplicated_fpsimd
  kselftest: arm64: fake_sigreturn_missing_fpsimd
  kselftest: arm64: fake_sigreturn_bad_size_for_magic0
  kselftest: arm64: fake_sigreturn_bad_magic
  kselftest: arm64: add helper get_current_context
  kselftest: arm64: extend test_init functionalities
  kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
  kselftest: arm64: mangle_pstate_invalid_daif_bits
  kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
  kselftest: arm64: extend toplevel skeleton Makefile

* for-next/kaslr-diagnostics:
  : Provide diagnostics on boot for KASLR
  arm64: kaslr: Check command line before looking for a seed
  arm64: kaslr: Announce KASLR status on boot

4 years ago  arm64: kaslr: Check command line before looking for a seed
Mark Brown [Fri, 8 Nov 2019 17:12:44 +0000 (17:12 +0000)]
arm64: kaslr: Check command line before looking for a seed

Now that we print diagnostics at boot, the reason why we do not initialise
KASLR matters. Currently we check for a seed before we check if the user
has explicitly disabled KASLR on the command line, which will result in
misleading diagnostics, so reverse the order of those checks. We still
parse the seed from the DT early so that if the user has both provided a
seed and disabled KASLR on the command line, we still mask the seed on
the command line.
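
A sketch of the reordered checks in the early KASLR init path (the
status values are assumptions modelled on the diagnostics patch below):

| /* check the command line first so the recorded reason is accurate */
| if (strstr(cmdline, "nokaslr")) {       /* cmdline parsed from the FDT */
|         kaslr_status = KASLR_DISABLED_CMDLINE;  /* assumed status enum */
|         return 0;
| }
|
| if (!seed) {
|         kaslr_status = KASLR_DISABLED_NO_SEED;
|         return 0;
| }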

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: kaslr: Announce KASLR status on boot
Mark Brown [Fri, 8 Nov 2019 17:12:43 +0000 (17:12 +0000)]
arm64: kaslr: Announce KASLR status on boot

Currently the KASLR code is silent at boot unless it forces on KPTI, in
which case a message will be printed for that. This can lead to users
incorrectly believing their system has the feature enabled when it in
fact does not, and if they notice the problem, the lack of any
diagnostics makes it harder to understand. Add an initcall
which prints a message showing the status of KASLR during boot to make
the status clear.

This is particularly useful in cases where we don't have a seed. It
seems to be a relatively common error for system integrators and
administrators to enable KASLR in their configuration but not provide
the seed at runtime, often due to seed provisioning breaking at some
later point after it is initially enabled and verified.
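
A sketch of such an initcall (message text and status names are
assumptions):

| static int __init kaslr_report_status(void)
| {
|         switch (kaslr_status) {
|         case KASLR_ENABLED:
|                 pr_info("KASLR enabled\n");
|                 break;
|         case KASLR_DISABLED_NO_SEED:
|                 pr_warn("KASLR disabled due to lack of seed\n");
|                 break;
|         default:
|                 pr_info("KASLR disabled\n");
|                 break;
|         }
|         return 0;
| }
| core_initcall(kaslr_report_status);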

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_misaligned_sp
Cristian Marussi [Fri, 25 Oct 2019 17:57:17 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_misaligned_sp

Add a simple fake_sigreturn testcase which places a valid sigframe on a
non-16-byte-aligned SP. Expects a SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_bad_size
Cristian Marussi [Fri, 25 Oct 2019 17:57:16 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_bad_size

Add a simple fake_sigreturn testcase which builds a ucontext_t with a
badly sized header that causes an overrun in the __reserved area and
places it onto the stack. Expects a SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_duplicated_fpsimd
Cristian Marussi [Fri, 25 Oct 2019 17:57:15 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_duplicated_fpsimd

Add a simple fake_sigreturn testcase which builds a ucontext_t with
an anomalous additional fpsimd_context and places it onto the stack.
Expects a SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_missing_fpsimd
Cristian Marussi [Fri, 25 Oct 2019 17:57:14 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_missing_fpsimd

Add a simple fake_sigreturn testcase which builds a ucontext_t without
the required fpsimd_context and places it onto the stack.
Expects a SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_bad_size_for_magic0
Cristian Marussi [Fri, 25 Oct 2019 17:57:13 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_bad_size_for_magic0

Add a simple fake_sigreturn testcase which builds a ucontext_t with a
badly sized terminator record and places it onto the stack.
Expects a SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: fake_sigreturn_bad_magic
Cristian Marussi [Fri, 25 Oct 2019 17:57:12 +0000 (18:57 +0100)]
kselftest: arm64: fake_sigreturn_bad_magic

Add a simple fake_sigreturn testcase which builds a ucontext_t with a bad
magic header and places it onto the stack. Expects a SIGSEGV on test PASS.

Introduce a common utility assembly trampoline function to invoke a
sigreturn while placing the provided sigframe at the wanted alignment,
and also a helper to make space when needed inside the sigframe reserved
area.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: add helper get_current_context
Cristian Marussi [Fri, 25 Oct 2019 17:57:11 +0000 (18:57 +0100)]
kselftest: arm64: add helper get_current_context

Introduce a new common utility function, get_current_context(), which can
be used to grab a ucontext without the help of libc, and also to detect
whether such a ucontext has been successfully used by placing it on the
stack as a fake sigframe.
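
A sketch of the intended usage from a testcase (fake_sigreturn() here is
the trampoline helper from the fake_sigreturn_bad_magic patch; its exact
signature is an assumption):

| ucontext_t uc;
|
| /* capture a live context without libc... */
| if (!get_current_context(td, &uc))
|         return 0;               /* capture failed */
| /* ...then replay it as a fake sigframe */
| fake_sigreturn(&uc, sizeof(uc), 0);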

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: extend test_init functionalities
Cristian Marussi [Fri, 25 Oct 2019 17:57:10 +0000 (18:57 +0100)]
kselftest: arm64: extend test_init functionalities

Extend the signal testing framework to allow the definition of a custom
per-test initialization function, run at the end of the common test_init
after the test setup phase has completed and before the test-run routine.

This custom per-test initialization function also enables the test writer
to decide on their own when to forcibly skip the test using the standard
KSFT mechanism.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
Cristian Marussi [Fri, 25 Oct 2019 17:57:09 +0000 (18:57 +0100)]
kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]

Add 6 simple mangle testcases that mess with the ucontext_t from within
the signal handler, trying to toggle PSTATE mode bits to trick the system
into switching to EL1/EL2/EL3 using both SP_EL0(t) and SP_ELx(h).
Expects SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: mangle_pstate_invalid_daif_bits
Cristian Marussi [Fri, 25 Oct 2019 17:57:08 +0000 (18:57 +0100)]
kselftest: arm64: mangle_pstate_invalid_daif_bits

Add a simple mangle testcase which messes with the ucontext_t from within
the signal handler, trying to set PSTATE DAIF bits to an invalid value
(masking everything). Expects SIGSEGV on test PASS.
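
A sketch of the mangle step (struct tdescr is the test descriptor from
the common utils patch; the PSR_*_BIT masks are from the arm64 uapi
headers):

| static int mangle_invalid_daif_bits_run(struct tdescr *td,
|                                         siginfo_t *si, ucontext_t *uc)
| {
|         /* mask D, A, I and F: an invalid PSTATE for userspace */
|         uc->uc_mcontext.pstate |= PSR_D_BIT | PSR_A_BIT |
|                                   PSR_I_BIT | PSR_F_BIT;
|         return 1;       /* sigreturn with the mangled context */
| }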

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
Cristian Marussi [Fri, 25 Oct 2019 17:57:07 +0000 (18:57 +0100)]
kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils

Add some arm64/signal specific boilerplate and utility code to help
further testcases' development.

Also introduce one simple testcase, mangle_pstate_invalid_compat_toggle,
and some related helpers: it is a simple mangle testcase which messes
with the ucontext_t from within the signal handler, trying to toggle
PSTATE state bits to switch the system between 32-bit/64-bit execution
state. Expects SIGSEGV on test PASS.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  kselftest: arm64: extend toplevel skeleton Makefile
Cristian Marussi [Fri, 25 Oct 2019 17:57:06 +0000 (18:57 +0100)]
kselftest: arm64: extend toplevel skeleton Makefile

Modify the KSFT arm64 toplevel Makefile to keep arm64 kselftests organized
by subsystem, in distinct subdirectories under the arm64 custom
KSFT directory: tools/testing/selftests/arm64/

Add to this toplevel Makefile a mechanism to guess the effective location
of the kernel headers as installed by the KSFT framework.

Fit the existing arm64 tags kselftest into this new scheme, moving it into
its own subdirectory (arm64/tags).

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  Merge branch 'for-next/perf' into for-next/core
Catalin Marinas [Fri, 8 Nov 2019 10:57:14 +0000 (10:57 +0000)]
Merge branch 'for-next/perf' into for-next/core

- Support for additional PMU topologies on HiSilicon platforms
- Support for CCN-512 interconnect PMU
- Support for AXI ID filtering in the IMX8 DDR PMU
- Support for the CCPI2 uncore PMU in ThunderX2
- Driver cleanup to use devm_platform_ioremap_resource()

* for-next/perf:
  drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
  perf/imx_ddr: Dump AXI ID filter info to userspace
  docs/perf: Add AXI ID filter capabilities information
  perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus
  perf/imx_ddr: Add enhanced AXI ID filter support
  bindings: perf: imx-ddr: Add new compatible string
  docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk
  arm64: perf: Simplify the ARMv8 PMUv3 event attributes
  drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver.
  Documentation: perf: Update documentation for ThunderX2 PMU uncore driver
  Documentation: Add documentation for CCN-512 DTS binding
  perf: arm-ccn: Enable stats for CCN-512 interconnect
  perf/smmuv3: use devm_platform_ioremap_resource() to simplify code
  perf/arm-cci: use devm_platform_ioremap_resource() to simplify code
  perf/arm-ccn: use devm_platform_ioremap_resource() to simplify code
  perf: xgene: use devm_platform_ioremap_resource() to simplify code
  perf: hisi: use devm_platform_ioremap_resource() to simplify code

4 years ago  drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
Shaokun Zhang [Thu, 7 Nov 2019 07:56:04 +0000 (15:56 +0800)]
drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform

On some HiSilicon platforms, the originally designed SCCL_ID and CCL_ID
cannot describe the richer topology when MT is set, so we
extend the SCCL_ID to MPIDR[aff3] and the CCL_ID to MPIDR[aff2]. Let's
update the HiSilicon uncore PMU driver accordingly.
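
A sketch of the decoding (MPIDR_MT_BITMASK and MPIDR_AFFINITY_LEVEL are
the generic arm64 helpers; variable names are illustrative):

| u64 mpidr = read_cpuid_mpidr();
|
| if (mpidr & MPIDR_MT_BITMASK) {
|         /* MT set: the affinity levels shift up by one */
|         sccl_id = MPIDR_AFFINITY_LEVEL(mpidr, 3);
|         ccl_id  = MPIDR_AFFINITY_LEVEL(mpidr, 2);
| } else {
|         sccl_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
|         ccl_id  = MPIDR_AFFINITY_LEVEL(mpidr, 1);
| }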

Cc: John Garry <john.garry@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  Merge branch 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel...
Catalin Marinas [Thu, 7 Nov 2019 11:26:54 +0000 (11:26 +0000)]
Merge branch 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core

FTRACE_WITH_REGS support for arm64.

* 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: ftrace: minimize ifdeffery
  arm64: implement ftrace with regs
  arm64: asm-offsets: add S_FP
  arm64: insn: add encoder for MOV (register)
  arm64: module/ftrace: initialize PLT at load time
  arm64: module: rework special section handling
  module/ftrace: handle patchable-function-entry
  ftrace: add ftrace_init_nop()

4 years ago  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
Nicolas Saenz Julienne [Thu, 7 Nov 2019 09:56:11 +0000 (10:56 +0100)]
arm64: mm: reserve CMA and crashkernel in ZONE_DMA32

With the introduction of ZONE_DMA in arm64 we moved the default CMA and
crashkernel reservation into that area. This caused a regression on big
machines that need large CMA and crashkernel reservations. Note that
ZONE_DMA is only 1GB in size.

Restore the previous behavior as the wide majority of devices are OK
with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA
will configure it explicitly.
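
Sketch of the resulting placement (both limits are the arm64 mm
variables; the crashkernel line is illustrative):

| /* size CMA against the 32-bit limit rather than the 1GB ZONE_DMA one */
| dma_contiguous_reserve(arm64_dma32_phys_limit);
|
| /* likewise for the crashkernel search window */
| crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
|                                     crash_size, SZ_2M);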

Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: ftrace: minimize ifdeffery
Mark Rutland [Mon, 21 Oct 2019 14:05:52 +0000 (15:05 +0100)]
arm64: ftrace: minimize ifdeffery

Now that we no longer refer to mod->arch.ftrace_trampolines in the body
of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery,
and make the code easier to follow. Likewise in ftrace_make_nop().

Let's do so.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
4 years ago  arm64: implement ftrace with regs
Torsten Duwe [Fri, 8 Feb 2019 15:10:19 +0000 (16:10 +0100)]
arm64: implement ftrace with regs

This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
function's arguments (and some other registers) to be captured into a
struct pt_regs, allowing these to be inspected and/or modified. This is
a building block for live-patching, where a function's arguments may be
forwarded to another function. This is also necessary to enable ftrace
and in-kernel pointer authentication at the same time, as it allows the
LR value to be captured and adjusted prior to signing.

Using GCC's -fpatchable-function-entry=N option, we can have the
compiler insert a configurable number of NOPs between the function entry
point and the usual prologue. This also ensures functions are AAPCS
compliant (e.g. disabling inter-procedural register allocation).

For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
following:

| unsigned long bar(void);
|
| unsigned long foo(void)
| {
|         return bar() + 1;
| }

... to:

| <foo>:
|         nop
|         nop
|         stp     x29, x30, [sp, #-16]!
|         mov     x29, sp
|         bl      0 <bar>
|         add     x0, x0, #0x1
|         ldp     x29, x30, [sp], #16
|         ret

This patch builds the kernel with -fpatchable-function-entry=2,
prefixing each function with two NOPs. To trace a function, we replace
these NOPs with a sequence that saves the LR into a GPR, then calls an
ftrace entry assembly function which saves this and other relevant
registers:

| mov x9, x30
| bl <ftrace-entry>

Since patchable functions are AAPCS compliant (and the kernel does not
use x18 as a platform register), x9-x18 can be safely clobbered in the
patched sequence and the ftrace entry code.

There are now two ftrace entry functions, ftrace_regs_entry (which saves
all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
allocated for each within modules.

Signed-off-by: Torsten Duwe <duwe@suse.de>
[Mark: rework asm, comments, PLTs, initialization, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Julien Thierry <jthierry@redhat.com>
Cc: Will Deacon <will@kernel.org>
4 years ago  arm64: asm-offsets: add S_FP
Mark Rutland [Fri, 18 Oct 2019 15:37:47 +0000 (16:37 +0100)]
arm64: asm-offsets: add S_FP

So that assembly code can more easily manipulate the FP (x29) within a
pt_regs, add an S_FP asm-offsets definition.
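
The addition itself is a one-liner in asm-offsets.c (x29 is the AAPCS
frame pointer):

| DEFINE(S_FP, offsetof(struct pt_regs, regs[29]));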

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
4 years ago  arm64: insn: add encoder for MOV (register)
Mark Rutland [Fri, 18 Oct 2019 10:25:26 +0000 (11:25 +0100)]
arm64: insn: add encoder for MOV (register)

For FTRACE_WITH_REGS, we're going to want to generate a MOV (register)
instruction as part of the callsite initialization. As MOV (register) is
an alias for ORR (shifted register), we can generate this with
aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and
difficult to read in-context.

Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
write callers in a more straightforward way.
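
A sketch of the wrapper's likely shape:

| u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
|                               enum aarch64_insn_register src,
|                               enum aarch64_insn_variant variant)
| {
|         /* MOV <dst>, <src> is an alias of ORR <dst>, XZR, <src> */
|         return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
|                                                     src, 0, variant,
|                                                     AARCH64_INSN_LOGIC_ORR);
| }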

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
4 years ago  arm64: module/ftrace: initialize PLT at load time
Mark Rutland [Thu, 17 Oct 2019 14:26:38 +0000 (15:26 +0100)]
arm64: module/ftrace: initialize PLT at load time

Currently we lazily initialize a module's ftrace PLT at runtime when we
install the first ftrace call. To do so we have to apply a number of
sanity checks, transiently mark the module text as RW, and perform an
IPI as part of handling Neoverse-N1 erratum #1542419.

We only expect the ftrace trampoline to point at ftrace_caller() (AKA
FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
module load time, before the module loader marks the module RO and
performs the initial I-cache maintenance for the module.

Thus we can rely on the module having been correctly initialized, and can
simplify the runtime work necessary to install an ftrace call in a
module. This will also allow for the removal of module_disable_ro().

Tested by forcing ftrace_make_call() to use the module PLT, and then
loading up a module after setting up ftrace with:

| echo ":mod:<module-name>" > set_ftrace_filter;
| echo function > current_tracer;
| modprobe <module-name>

Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
selected, we wrap its use along with most of module_init_ftrace_plt()
with ifdeffery rather than using IS_ENABLED().
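
A sketch of module_init_ftrace_plt() under those assumptions
(find_section() is the helper factored out in the preceding patch):

| static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
|                                   const Elf_Shdr *sechdrs,
|                                   struct module *mod)
| {
| #if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
|         const Elf_Shdr *s;
|         struct plt_entry *plt;
|
|         s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
|         if (!s)
|                 return -ENOEXEC;
|
|         plt = (void *)s->sh_addr;
|         *plt = get_plt_entry(FTRACE_ADDR, plt);
|         mod->arch.ftrace_trampoline = plt;
| #endif
|         return 0;
| }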

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
4 years ago  arm64: module: rework special section handling
Mark Rutland [Thu, 17 Oct 2019 13:03:26 +0000 (14:03 +0100)]
arm64: module: rework special section handling

When we load a module, we have to perform some special work for a couple
of named sections. To do this, we iterate over all of the module's
sections, and perform work for each section we recognize.

To make it easier to handle the unexpected absence of a section, and to
make the section-specific logic easier to read, let's factor the section
search into a helper. Similar is already done in the core module loader,
and other architectures (and ideally we'd unify these in future).

If we expect a module to have an ftrace trampoline section, but it
doesn't have one, we'll now reject loading the module. When
ARM64_MODULE_PLTS is selected, any correctly built module should have
one (and this is assumed by arm64's ftrace PLT code) and the absence of
such a section implies something has gone wrong at build time.

Subsequent patches will make use of the new helper.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
4 years ago  module/ftrace: handle patchable-function-entry
Mark Rutland [Wed, 16 Oct 2019 17:17:11 +0000 (18:17 +0100)]
module/ftrace: handle patchable-function-entry

When using patchable-function-entry, the compiler will record the
callsites into a section named "__patchable_function_entries" rather
than "__mcount_loc". Let's abstract this difference behind a new
FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this
explicitly (e.g. with custom module linker scripts).

As parisc currently handles this explicitly, it is fixed up accordingly,
with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is
only defined when DYNAMIC_FTRACE is selected, the parisc module loading
code is updated to only use the definition in that case. When
DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
this removes some redundant work in that case.
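
The abstraction itself is small; a sketch of the linux/ftrace.h
definition:

| #ifdef CONFIG_DYNAMIC_FTRACE
| #ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
| #define FTRACE_CALLSITE_SECTION "__mcount_loc"
| #else
| #define FTRACE_CALLSITE_SECTION "__patchable_function_entries"
| #endif
| #endif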

To make sure that this is kept up-to-date for modules and the main
kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery
simplified for legibility.

I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
and verified that the section made it into the .ko files for modules.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: linux-parisc@vger.kernel.org
4 years ago  ftrace: add ftrace_init_nop()
Mark Rutland [Wed, 16 Oct 2019 16:51:10 +0000 (17:51 +0100)]
ftrace: add ftrace_init_nop()

Architectures may need to perform special initialization of ftrace
callsites, and today they do so by special-casing ftrace_make_nop() when
the expected branch address is MCOUNT_ADDR. In some cases (e.g. for
patchable-function-entry), we don't have an mcount-like symbol and don't
want a synthetic MCOUNT_ADDR, but we may need to perform some
initialization of callsites.

To make it possible to separate initialization from runtime
modification, and to handle cases without an mcount-like symbol, this
patch adds an optional ftrace_init_nop() function that architectures can
implement, which does not pass a branch address.

Where an architecture does not provide ftrace_init_nop(), we will fall
back to the existing behaviour of calling ftrace_make_nop() with
MCOUNT_ADDR.
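
A sketch of the fallback (an arch opts in by providing its own
ftrace_init_nop definition):

| #ifndef ftrace_init_nop
| static inline int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
| {
|         return ftrace_make_nop(mod, rec, MCOUNT_ADDR);
| }
| #endif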

At the same time, ftrace_code_disable() is renamed to
ftrace_nop_initialize() to make it clearer that it is intended to
initialize a callsite into a disabled state, and is not for disabling a
callsite that has been runtime enabled. The kerneldoc description of rec
arguments is updated to cover non-mcount callsites.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
4 years ago  arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist
Rich Wiley [Tue, 5 Nov 2019 18:45:10 +0000 (10:45 -0800)]
arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist

NVIDIA Carmel CPUs don't implement ID_AA64PFR0_EL1.CSV3 but
aren't susceptible to Meltdown, so add Carmel to kpti_safe_list[].

Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: mm: Remove MAX_USER_VA_BITS definition
Bhupesh Sharma [Mon, 4 Nov 2019 21:56:46 +0000 (03:26 +0530)]
arm64: mm: Remove MAX_USER_VA_BITS definition

commit 9b31cf493ffa ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
introduced the MAX_USER_VA_BITS definition, which was used to support
the arm64 mm use-cases where user-space could use 52-bit virtual
addresses whereas kernel-space was still limited to a maximum of 48-bit
virtual addressing.

But, now with commit b6d00d47e81a ("arm64: mm: Introduce 52-bit Kernel
VAs"), we removed the 52-bit user/48-bit kernel kconfig option and hence
there is no longer any scenario where user VA != kernel VA size
(even with CONFIG_ARM64_FORCE_52BIT enabled, the same is true).

Hence we can do away with the MAX_USER_VA_BITS macro, as it is equal to
VA_BITS (the maximum VA space size) in all possible use-cases. Note that
even though the 'vabits_actual' value would be 48 for arm64 hardware
which doesn't support the LVA-8.2 extension (even when
CONFIG_ARM64_VA_BITS_52 is enabled), VA_BITS would still be set to 52.
Hence this change is safe in all possible VA address space combinations.

Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: linux-kernel@vger.kernel.org
Cc: kexec@lists.infradead.org
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: mm: simplify the page end calculation in __create_pgd_mapping()
Masahiro Yamada [Sun, 3 Nov 2019 12:35:58 +0000 (21:35 +0900)]
arm64: mm: simplify the page end calculation in __create_pgd_mapping()

Calculate the page-aligned end address more simply.

The local variable "length" is unneeded.
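
Roughly, the simplification is (a sketch; variable names abbreviated):

| /* before: compute an aligned length, then derive the end address */
| length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
| end = addr + length;
|
| /* after: align the end address directly */
| end = PAGE_ALIGN(virt + size);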

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  perf/imx_ddr: Dump AXI ID filter info to userspace
Joakim Zhang [Mon, 4 Nov 2019 07:09:24 +0000 (07:09 +0000)]
perf/imx_ddr: Dump AXI ID filter info to userspace

caps/filter indicates whether the HW supports the AXI ID filter or not.
caps/enhanced_filter indicates whether the HW supports the enhanced AXI ID
filter or not.

Users can check the filter features from userspace with these attributes.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
[will: reworked cap switch to be less error-prone]
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  docs/perf: Add AXI ID filter capabilities information
Joakim Zhang [Mon, 4 Nov 2019 07:09:20 +0000 (07:09 +0000)]
docs/perf: Add AXI ID filter capabilities information

Add capabilities information for AXI ID filter.

Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus
Joakim Zhang [Fri, 1 Nov 2019 08:36:20 +0000 (08:36 +0000)]
perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus

Add driver for DDR PMU in i.MX8MPlus.

Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  perf/imx_ddr: Add enhanced AXI ID filter support
Joakim Zhang [Fri, 1 Nov 2019 08:36:16 +0000 (08:36 +0000)]
perf/imx_ddr: Add enhanced AXI ID filter support

The DDR_CAP_AXI_ID_FILTER quirk indicates that the HW supports an AXI ID
filter which can only count bursts from DDR transactions, i.e. DDR
read/write requests.

This patch adds the DDR_CAP_AXI_ID_ENHANCED_FILTER quirk, indicating that
the HW supports an AXI ID filter which can count both bursts and bytes
from DDR transactions at the same time. We want the PMU to always return
bytes in the driver, since that is more meaningful for users.

Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  bindings: perf: imx-ddr: Add new compatible string
Joakim Zhang [Fri, 1 Nov 2019 08:36:13 +0000 (08:36 +0000)]
bindings: perf: imx-ddr: Add new compatible string

Add new compatible string for i.MX8MPlus DDR PMU core.

Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk
Joakim Zhang [Fri, 1 Nov 2019 08:36:10 +0000 (08:36 +0000)]
docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk

Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk.

Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
[will: Simplified wording]
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s]
Julien Grall [Fri, 1 Nov 2019 15:20:22 +0000 (15:20 +0000)]
docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s]

Commit "docs/arm64: cpu-feature-registers: Documents missing visible
fields" added bitfields following the convention [s, e]. However, the
documentation is following [s, e] and so does the Arm ARM.

Rewrite the bitfields to match the format [s, e].

Fixes: a8613e7070e7 ("docs/arm64: cpu-feature-registers: Documents missing visible fields")
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: perf: Simplify the ARMv8 PMUv3 event attributes
Shaokun Zhang [Wed, 30 Oct 2019 03:46:17 +0000 (11:46 +0800)]
arm64: perf: Simplify the ARMv8 PMUv3 event attributes

For each PMU event, there is an ARMV8_EVENT_ATTR(xx, XX) and a matching
&armv8_event_attr_xx.attr.attr entry. Let's redefine the ARMV8_EVENT_ATTR
to simplify the armv8_pmuv3_event_attrs.

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
[will: Dropped unnecessary array syntax]
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  dma/direct: turn ARCH_ZONE_DMA_BITS into a variable
Nicolas Saenz Julienne [Mon, 14 Oct 2019 18:31:03 +0000 (20:31 +0200)]
dma/direct: turn ARCH_ZONE_DMA_BITS into a variable

Some architectures, notably ARM, are interested in tweaking this
depending on their runtime DMA addressing limitations.

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  arm64: print additional fault message when executing non-exec memory
Xiang Zheng [Tue, 29 Oct 2019 12:41:31 +0000 (20:41 +0800)]
arm64: print additional fault message when executing non-exec memory

When attempting to execute non-executable memory, the fault message
shows:

  Unable to handle kernel read from unreadable memory at virtual address
  ffff802dac469000

This may confuse someone, so add a new fault message for instruction
aborts.
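
A sketch of the message selection in the kernel fault path (the
surrounding branches are elided):

| /* an instruction abort means we tried to execute the memory */
| if (is_el1_instruction_abort(esr))
|         msg = "execute from non-executable memory";
| else
|         msg = "read from unreadable memory";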

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver.
Ganapatrao Prabhakerrao Kulkarni [Wed, 16 Oct 2019 09:37:00 +0000 (09:37 +0000)]
drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver.

CCPI2 is a low-latency high-bandwidth serial interface for inter socket
connectivity of ThunderX2 processors.

CCPI2 PMU supports up to 8 counters per socket. Counters are
independently programmable to different events and can be started and
stopped individually. The CCPI2 counters are 64-bit and do not overflow
in normal operation.

Signed-off-by: Ganapatrao Prabhakerrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  Documentation: perf: Update documentation for ThunderX2 PMU uncore driver
Ganapatrao Prabhakerrao Kulkarni [Wed, 16 Oct 2019 09:36:59 +0000 (09:36 +0000)]
Documentation: perf: Update documentation for ThunderX2 PMU uncore driver

Add documentation for Cavium Coherent Processor Interconnect (CCPI2) PMU.

Signed-off-by: Ganapatrao Prabhakerrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  Merge branch 'for-next/entry-s-to-c' into for-next/core
Catalin Marinas [Mon, 28 Oct 2019 17:02:56 +0000 (17:02 +0000)]
Merge branch 'for-next/entry-s-to-c' into for-next/core

Move the synchronous exception paths from entry.S into a C file to
improve the code readability.

* for-next/entry-s-to-c:
  arm64: entry-common: don't touch daif before bp-hardening
  arm64: Remove asmlinkage from updated functions
  arm64: entry: convert el0_sync to C
  arm64: entry: convert el1_sync to C
  arm64: add local_daif_inherit()
  arm64: Add prototypes for functions called by entry.S
  arm64: remove __exception annotations

4 years ago  arm64: Make arm64_dma32_phys_limit static
Catalin Marinas [Mon, 28 Oct 2019 16:45:07 +0000 (16:45 +0000)]
arm64: Make arm64_dma32_phys_limit static

This variable is only used in the arch/arm64/mm/init.c file for
ZONE_DMA32 initialisation, no need to expose it.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years ago  Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel...
Catalin Marinas [Mon, 28 Oct 2019 16:22:49 +0000 (16:22 +0000)]
Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core

Similarly to erratum 1165522, which affects Cortex-A76, the A57 and A72
respectively suffer from errata 1319537 and 1319367, potentially
resulting in TLB corruption if the CPU speculates an AT instruction
while switching guests.

The fix is slightly more involved since we don't have VHE to help us
here, but the idea is the same: when switching a guest in, we must
prevent any speculated AT from being able to parse the page tables
until S2 is up and running. Only at this stage can we allow AT to take
place.

For this, we always restore the guest sysregs first, except for its
SCTLR and TCR registers, which must be set with SCTLR.M=1 and
TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB
allocation. Once S2 is setup, we restore the guest's SCTLR and
TCR. Similar things must be done on TLB invalidation...

* 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: Enable and document ARM errata 1319367 and 1319537
  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
  arm64: KVM: Reorder system register restoration and stage-2 activation
  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions

4 years ago  Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core
Catalin Marinas [Mon, 28 Oct 2019 16:12:40 +0000 (16:12 +0000)]
Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core

Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale
instructions when software depends on prefetch-speculation-protection
instead of explicit synchronization. [0]

The workaround is to trap I-Cache maintenance and issue an
inner-shareable TLBI. The affected cores have a Coherent I-Cache, so the
I-Cache maintenance isn't necessary. The core tells user-space it can
skip it with CTR_EL0.DIC. We also have to trap this register to hide the
bit, forcing DIC-aware user-space to perform the maintenance.

To avoid trapping all cache-maintenance, this workaround depends on
a firmware component that only traps I-cache maintenance from EL0 and
performs the workaround.

For user-space, the kernel's work is to trap CTR_EL0 to hide DIC, and
produce a fake IminLine. EL3 traps the now-necessary I-Cache maintenance
and performs the inner-shareable-TLBI that makes everything better.

[0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice

* for-next/neoverse-n1-stale-instr:
  arm64: Silence clang warning on mismatched value/register sizes
  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419

4 years ago  Documentation: Add documentation for CCN-512 DTS binding
Marek Bykowski [Mon, 7 Oct 2019 13:21:15 +0000 (15:21 +0200)]
Documentation: Add documentation for CCN-512 DTS binding

Indicate that the arm-ccn perf back-end now supports the CCN-512.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  perf: arm-ccn: Enable stats for CCN-512 interconnect
Marek Bykowski [Wed, 16 Oct 2019 09:57:39 +0000 (11:57 +0200)]
perf: arm-ccn: Enable stats for CCN-512 interconnect

Add compatible string for the ARM CCN-512 interconnect

Acked-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
Signed-off-by: Boleslaw Malecki <boleslaw.malecki@tieto.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years ago  Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core
Catalin Marinas [Mon, 28 Oct 2019 14:57:16 +0000 (14:57 +0000)]
Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core

This is required to solve the conflicts with subsequent merges of two
more errata workaround branches.

* arm64/for-next/fixes:
  arm64: tags: Preserve tags for addresses translated via TTBR1
  arm64: mm: fix inverted PAR_EL1.F check
  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
  arm64: hibernate: check pgd table allocation
  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
  arm64: Fix kcore macros after 52-bit virtual addressing fallout
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set

4 years ago  arm64: entry-common: don't touch daif before bp-hardening
James Morse [Fri, 25 Oct 2019 16:42:16 +0000 (17:42 +0100)]
arm64: entry-common: don't touch daif before bp-hardening

The previous patches mechanically transformed the assembly version of
entry.S to entry-common.c for synchronous exceptions.

The C version of local_daif_restore() doesn't quite do the same thing
as the assembly versions if pseudo-NMI is in use. In particular,
| local_daif_restore(DAIF_PROCCTX_NOIRQ)
will still allow pNMIs to be delivered. This is not the behaviour
do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not
be possible for the PMU handler to run as an NMI until the bp-hardening
sequence has run.

The bp-hardening calls were placed where they are because this was the
first C code to run after the relevant exceptions. As we've now moved
that point earlier, move the checks and calls earlier too.

This makes it clearer that this code runs before any other exception
handling, and saves modifying PSTATE twice.
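
A minimal sketch of the resulting shape, assuming the entry-common.c
style introduced earlier in the series (the handler below is
illustrative, condensed from the instruction-abort path):

static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
{
	unsigned long far = read_sysreg(far_el1);

	/*
	 * We've taken an instruction abort from userspace and not yet
	 * re-enabled IRQs (or pNMIs). If the address is a kernel
	 * address, apply BP hardening before touching PSTATE.
	 */
	if (!is_ttbr0_addr(far))
		arm64_apply_bp_hardening();

	user_exit_irqoff();
	local_daif_restore(DAIF_PROCCTX);
	do_mem_abort(far, esr, regs);
}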

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Remove asmlinkage from updated functions
James Morse [Fri, 25 Oct 2019 16:42:15 +0000 (17:42 +0100)]
arm64: Remove asmlinkage from updated functions

Now that the callers of these functions have moved into C, they no longer
need the asmlinkage annotation. Remove it.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: entry: convert el0_sync to C
Mark Rutland [Fri, 25 Oct 2019 16:42:14 +0000 (17:42 +0100)]
arm64: entry: convert el0_sync to C

This is largely a 1-1 conversion of asm to C, with a couple of caveats.

The el0_sync{_compat} switches explicitly handle all the EL0 debug
cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
debug ESR values. This also means that an unexpected vector catch from
AArch32 is routed to el0_inv.

We *could* merge the native and compat switches, which would make the
diffstat negative, but I've tried to stay as close to the existing
assembly as possible for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes. removed irq trace
 calls as the C helpers do this. renamed el0_dbg's use of FAR]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: entry: convert el1_sync to C
Mark Rutland [Fri, 25 Oct 2019 16:42:13 +0000 (17:42 +0100)]
arm64: entry: convert el1_sync to C

This patch converts the EL1 sync entry assembly logic to C code.

Doing this will allow us to make changes in a slightly more
readable way. A case in point is supporting kernel-first RAS.
do_sea() should be called on the CPU that took the fault.

The assembly code is largely converted to C in a straightforward
manner.

Since all sync sites share a common asm entry point, the ASM_BUG()
instances are no longer required for effective backtraces back to
assembly, and we don't need similar BUG() entries.

The ESR_ELx.EC codes for all (supported) debug exceptions are now
checked in the el1_sync_handler's switch statement, which renders the
check in el1_dbg redundant. This both simplifies the el1_dbg handler,
and makes the EL1 exception handling more robust to
currently-unallocated ESR_ELx.EC encodings.
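
The handler's shape, abridged (a sketch of el1_sync_handler()):

asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
{
	unsigned long esr = read_sysreg(esr_el1);

	switch (ESR_ELx_EC(esr)) {
	case ESR_ELx_EC_DABT_CUR:	/* EL1 data abort */
	case ESR_ELx_EC_IABT_CUR:	/* EL1 instruction abort */
		el1_abort(regs, esr);
		break;
	case ESR_ELx_EC_BREAKPT_CUR:
	case ESR_ELx_EC_SOFTSTP_CUR:
	case ESR_ELx_EC_WATCHPT_CUR:
	case ESR_ELx_EC_BRK64:
		el1_dbg(regs, esr);
		break;
	default:
		el1_inv(regs, esr);
	}
}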

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes, moved prototypes]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: add local_daif_inherit()
Mark Rutland [Fri, 25 Oct 2019 16:42:12 +0000 (17:42 +0100)]
arm64: add local_daif_inherit()

Some synchronous exceptions can be taken from a number of contexts,
e.g. where IRQs may or may not be masked. In the entry assembly for
these exceptions, we use the inherit_daif assembly macro to ensure
that we only mask those exceptions which were masked when the exception
was taken.

So that we can do the same from C code, this patch adds a new
local_daif_inherit() function, following the existing local_daif_*()
naming scheme.
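
A sketch of the new helper; it simply writes back the DAIF bits that
were recorded at exception entry (the pseudo-NMI interaction is omitted
here):

static inline void local_daif_inherit(struct pt_regs *regs)
{
	unsigned long flags = regs->pstate & DAIF_MASK;

	/*
	 * Restore only the exception masks that were set when the
	 * exception was taken, leaving anything unmasked unmasked.
	 */
	write_sysreg(flags, daif);
}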

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[moved away from local_daif_restore()]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Add prototypes for functions called by entry.S
James Morse [Fri, 25 Oct 2019 16:42:11 +0000 (17:42 +0100)]
arm64: Add prototypes for functions called by entry.S

Functions that are only called by assembly don't always have a
C header file prototype.

Add the prototypes before moving the assembly callers to C.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: remove __exception annotations
James Morse [Fri, 25 Oct 2019 16:42:10 +0000 (17:42 +0100)]
arm64: remove __exception annotations

Since commit 732674980139 ("arm64: unwind: reference pt_regs via embedded
stack frame") arm64 has not used the __exception annotation to dump
the pt_regs during stack tracing. in_exception_text() has no callers.

This annotation is only used to blacklist kprobes; it means the same
as __kprobes.

Section annotations like this require the functions to be grouped
together between the start/end markers, and placed according to
the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
logs the symbol address in a section that kprobes parses and
blacklists at boot.

Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
blacklisted symbols, and saves us from having an arm64 specific
spelling of __kprobes.

do_debug_exception() already has a NOKPROBE_SYMBOL() annotation.
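
For example (do_undefinstr() is one of the functions converted; the
annotation replaces the section-based __exception marker):

asmlinkage void do_undefinstr(struct pt_regs *regs);
NOKPROBE_SYMBOL(do_undefinstr);	/* recorded in the kprobe blacklist */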

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Silence clang warning on mismatched value/register sizes
Catalin Marinas [Mon, 28 Oct 2019 09:08:34 +0000 (09:08 +0000)]
arm64: Silence clang warning on mismatched value/register sizes

Clang reports a warning on the __tlbi(aside1is, 0) macro expansion since
the value size does not match the register size specified in the inline
asm. Construct the ASID value using the __TLBI_VADDR() macro.

Fixes: 222fc0c8503d ("arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Enable and document ARM errata 1319367 and 1319537
Marc Zyngier [Wed, 9 Jan 2019 14:36:34 +0000 (14:36 +0000)]
arm64: Enable and document ARM errata 1319367 and 1319537

Now that everything is in place, let's get the ball rolling
by allowing the corresponding config option to be selected.
Also add the required information to silicon_errata.rst.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
4 years agoarm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
Marc Zyngier [Tue, 30 Jul 2019 10:15:31 +0000 (11:15 +0100)]
arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context

When handling erratum 1319367, we must ensure that the page table
walker cannot parse the S1 page tables while the guest is in an
inconsistent state. This is done as follows:

On guest entry:
- TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- all system registers are restored, except for TCR_EL1 and SCTLR_EL1
- stage-2 is restored
- SCTLR_EL1 and TCR_EL1 are restored

On guest exit:
- SCTLR_EL1.M and TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- stage-2 is disabled
- All host system registers are restored

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
4 years agoarm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
Marc Zyngier [Tue, 30 Jul 2019 09:50:38 +0000 (10:50 +0100)]
arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs

When erratum 1319367 is being worked around, special care must
be taken not to allow the page table walker to populate TLBs
while we have the stage-2 translation enabled (which would otherwise
result in a bizarre mix of the host S1 and the guest S2).

We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2
configuration, and clearing the same bits after S2 has been disabled.
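
On the guest path this looks roughly like the following (a condensed
sketch; the saved TCR_EL1 value is written back once stage-2 has been
disabled again):

if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
	u64 tcr = read_sysreg_el1(SYS_TCR);

	/* Block stage-1 TLB fills while the guest's S2 is live */
	write_sysreg_el1(tcr | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR);
	isb();
}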

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
4 years agoarm64: KVM: Reorder system register restoration and stage-2 activation
Marc Zyngier [Wed, 9 Jan 2019 14:46:23 +0000 (14:46 +0000)]
arm64: KVM: Reorder system register restoration and stage-2 activation

In order to prepare for handling erratum 1319367, we need to make
sure that all system registers (and most importantly the registers
configuring the virtual memory) are set before we enable stage-2
translation.

This results in a minor reorganisation of the load sequence, without
any functional change.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
4 years agoarm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
James Morse [Thu, 17 Oct 2019 17:43:00 +0000 (18:43 +0100)]
arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space

Compat user-space is unable to perform ICIMVAU instructions directly;
instead it uses a compat syscall. Add the workaround for
Neoverse-N1 #1542419 to this code path.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
James Morse [Thu, 17 Oct 2019 17:42:59 +0000 (18:42 +0100)]
arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419

Systems affected by Neoverse-N1 #1542419 support DIC, so they do not
need to perform icache maintenance once new instructions are cleaned to
the PoU.
For the errata workaround, the kernel hides DIC from user-space, so that
the unnecessary cache maintenance can be trapped by firmware.

To reduce the number of traps, produce a fake IminLine value based on
PAGE_SIZE.
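
In the trapped CTR_EL0 read handler this amounts to roughly the
following, where val is the sanitised CTR_EL0 value reported back to
EL0 (IminLine is a log2 count of 4-byte words, hence PAGE_SHIFT - 2):

if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
	/* Hide DIC so that we can trap the unnecessary maintenance... */
	val &= ~BIT(CTR_DIC_SHIFT);

	/* ... and fake IminLine to reduce the number of traps. */
	val &= ~CTR_IMINLINE_MASK;
	val |= (PAGE_SHIFT - 2) & CTR_IMINLINE_MASK;
}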

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
James Morse [Thu, 17 Oct 2019 17:42:58 +0000 (18:42 +0100)]
arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419

Cores affected by Neoverse-N1 #1542419 could execute a stale instruction
when a branch is updated to point to freshly generated instructions.

To work around this issue we need user-space to issue unnecessary
icache maintenance that we can trap. Start by hiding CTR_EL0.DIC.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()
Yunfeng Ye [Mon, 21 Oct 2019 11:31:21 +0000 (19:31 +0800)]
arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()

In cases like suspend-to-disk and suspend-to-ram, a large number of CPU
cores need to be shut down. At present, the CPU hotplug operation is
serialised, and the CPU cores can only be shut down one by one. In this
process, if PSCI affinity_info() does not return LEVEL_OFF quickly,
cpu_psci_cpu_kill() needs to wait for 10ms. If hundreds of CPU cores
need to be shut down, it will take a long time.

Normally, there is no need to wait 10ms in cpu_psci_cpu_kill(). So
change the wait interval from 10 ms to at most 1 ms, and use
usleep_range() instead of msleep() for more accurate timing.

In addition, reducing the polling interval would increase the number of
messages printed, so remove the "Retry ..." message; instead, track the
elapsed time and report it in the success message.
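
The resulting polling loop looks roughly like this (a sketch of the
cpu_psci_cpu_kill() change, keeping an overall 100ms budget):

start = jiffies;
end = start + msecs_to_jiffies(100);
do {
	err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
	if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
		pr_info("CPU%d killed (polled %d ms)\n", cpu,
			jiffies_to_msecs(jiffies - start));
		return 0;
	}

	usleep_range(100, 1000);
} while (time_before(jiffies, end));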

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: pgtable: Correct typo in comment
Mark Brown [Thu, 24 Oct 2019 12:01:43 +0000 (13:01 +0100)]
arm64: pgtable: Correct typo in comment

vmmemmap -> vmemmap

Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1
Dave Martin [Wed, 23 Oct 2019 17:52:22 +0000 (18:52 +0100)]
arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1

Commit d71be2b6c0e1 ("arm64: cpufeature: Detect SSBS and advertise
to userspace") exposes ID_AA64PFR1_EL1 to userspace, but didn't
update the documentation to match.

Add it.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: cpufeature: Fix typos in comment
Shaokun Zhang [Fri, 25 Oct 2019 06:32:06 +0000 (14:32 +0800)]
arm64: cpufeature: Fix typos in comment

Fix up one typo: CTR_E0 -> CTR_EL0

Cc: Will Deacon <will@kernel.org>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
Marc Zyngier [Fri, 23 Nov 2018 17:25:52 +0000 (17:25 +0000)]
arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions

Rework the EL2 vector hardening that is only selected for A57 and A72
so that the table can also be used for ARM64_WORKAROUND_1319367.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
4 years agomm: fix double page fault on arm64 if PTE_AF is cleared
Jia He [Fri, 11 Oct 2019 14:09:39 +0000 (22:09 +0800)]
mm: fix double page fault on arm64 if PTE_AF is cleared

When we tested the pmdk unit test [1] vmmalloc_fork TEST3 on an arm64
guest, there was a double page fault in __copy_from_user_inatomic() of
cow_user_page().

To reproduce the bug, run the following command after deploying
everything:
make -C src/test/vmmalloc_fork/ TEST_TIME=60m check

The call trace below is from arm64 do_page_fault(), for debugging
purposes:
[  110.016195] Call trace:
[  110.016826]  do_page_fault+0x5a4/0x690
[  110.017812]  do_mem_abort+0x50/0xb0
[  110.018726]  el1_da+0x20/0xc4
[  110.019492]  __arch_copy_from_user+0x180/0x280
[  110.020646]  do_wp_page+0xb0/0x860
[  110.021517]  __handle_mm_fault+0x994/0x1338
[  110.022606]  handle_mm_fault+0xe8/0x180
[  110.023584]  do_page_fault+0x240/0x690
[  110.024535]  do_mem_abort+0x50/0xb0
[  110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
               pmd=000000023d4b3003, pte=360000298607bd3

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung(). Also, the parameter is
changed because vmf should be passed to cow_user_page().

Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns error
in case there can be some obscure use-case (by Kirill).

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
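
Condensed sketch of the cow_user_page() change (the full patch retries
under the page table lock; only the core idea is shown):

/* Mark the pte young so __copy_from_user_inatomic() cannot fault */
if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
	pte_t entry = pte_mkyoung(vmf->orig_pte);

	if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
		update_mmu_cache(vma, addr, vmf->pte);
}

/* The copy is no longer expected to fail; warn if it somehow does */
WARN_ON_ONCE(__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE));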

Signed-off-by: Jia He <justin.he@arm.com>
Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agox86/mm: implement arch_faults_on_old_pte() stub on x86
Jia He [Fri, 11 Oct 2019 14:09:38 +0000 (22:09 +0800)]
x86/mm: implement arch_faults_on_old_pte() stub on x86

arch_faults_on_old_pte() is a helper indicating that accessing an old
pte might cause a page fault. On x86, the hardware sets the pte access
flag, so implement an overriding stub which always returns false.
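
The stub in arch/x86/include/asm/pgtable.h is essentially:

#define arch_faults_on_old_pte arch_faults_on_old_pte
static inline bool arch_faults_on_old_pte(void)
{
	/* The hardware sets the pte access flag, so old ptes don't fault */
	return false;
}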

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: mm: implement arch_faults_on_old_pte() on arm64
Jia He [Fri, 11 Oct 2019 14:09:37 +0000 (22:09 +0800)]
arm64: mm: implement arch_faults_on_old_pte() on arm64

On arm64 without the hardware Access Flag, copying from user will fail
because the pte is old and cannot be marked young. So we always end up
with a zeroed page after fork() + CoW for pfn mappings. We don't always
have a hardware-managed Access Flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64 to indicate that
accessing an old pte might cause a page fault.
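
The arm64 implementation reduces to checking for the hardware Access
Flag (sketch):

#define arch_faults_on_old_pte arch_faults_on_old_pte
static inline bool arch_faults_on_old_pte(void)
{
	WARN_ON(preemptible());

	/* A fault is only taken when there is no hardware Access Flag */
	return !cpu_has_hw_af();
}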

Signed-off-by: Jia He <justin.he@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: cpufeature: introduce helper cpu_has_hw_af()
Jia He [Fri, 11 Oct 2019 14:09:36 +0000 (22:09 +0800)]
arm64: cpufeature: introduce helper cpu_has_hw_af()

We unconditionally set the HW_AFDBM capability and only enable it on
CPUs which really have the feature. But sometimes we need to know
whether this CPU has the hardware Access Flag (AF) capability. So
decouple AF from DBM with a new helper, cpu_has_hw_af().

If we later notice a potential performance issue on this path, we can
turn it into a static key, as with other CPU features.
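
The helper reads the local CPU's ID register rather than the
system-wide sanitised capability (a sketch):

static inline bool cpu_has_hw_af(void)
{
	u64 mmfr1;

	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
		return false;

	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
	return cpuid_feature_extract_unsigned_field(mmfr1,
						ID_AA64MMFR1_HADBS_SHIFT);
}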

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoMerge branch 'errata/tx2-219' into for-next/fixes
Will Deacon [Thu, 17 Oct 2019 20:42:42 +0000 (13:42 -0700)]
Merge branch 'errata/tx2-219' into for-next/fixes

Workaround for Cavium/Marvell ThunderX2 erratum #219.

* errata/tx2-219:
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set

4 years agoarm64: tags: Preserve tags for addresses translated via TTBR1
Will Deacon [Wed, 16 Oct 2019 04:04:18 +0000 (21:04 -0700)]
arm64: tags: Preserve tags for addresses translated via TTBR1

Sign-extending TTBR1 addresses when converting to an untagged address
breaks the documented POSIX semantics for mlock() in some obscure error
cases where we end up returning -EINVAL instead of -ENOMEM as a direct
result of rewriting the upper address bits.

Rework the untagged_addr() macro to preserve the upper address bits for
TTBR1 addresses and only clear the tag bits for user addresses. This
matches the behaviour of the 'clear_address_tag' assembly macro, so
rename that and align the implementations at the same time so that they
use the same instruction sequences for the tag manipulation.
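
The reworked macro sign-extends from bit 55 and ANDs the result back
in, so the tag bits are only cleared when bit 55 is zero, i.e. for a
TTBR0 (user) address:

#define __untagged_addr(addr)	\
	((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))

#define untagged_addr(addr)	({					\
	u64 __addr = (__force u64)(addr);				\
	__addr &= __untagged_addr(__addr);				\
	(__force __typeof__(addr))__addr;				\
})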

Link: https://lore.kernel.org/stable/20191014162651.GF19200@arrakis.emea.arm.com/
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: mm: fix inverted PAR_EL1.F check
Mark Rutland [Wed, 16 Oct 2019 11:03:04 +0000 (12:03 +0100)]
arm64: mm: fix inverted PAR_EL1.F check

When detecting a spurious EL1 translation fault, we have the CPU retry
the translation using an AT S1E1R instruction, and inspect PAR_EL1 to
determine if the fault was spurious.

When PAR_EL1.F == 0, the AT instruction successfully translated the
address without a fault, which implies the original fault was spurious.
However, in this case we return false and treat the original fault as if
it was not spurious.

Invert the return value so that we treat such a case as spurious.
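
The corrected check reads roughly:

/*
 * If PAR_EL1.F == 0 the AT retry translated successfully, so the
 * original fault must have been spurious.
 */
if (!(par & SYS_PAR_EL1_F))
	return true;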

Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 42f91093b043 ("arm64: mm: Ignore spurious translation faults taken from the kernel")
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
Yang Yingliang [Wed, 16 Oct 2019 03:42:57 +0000 (11:42 +0800)]
arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F

The 'F' field of the PAR_EL1 register lives in bit 0, not bit 1.
Fix the broken definition in 'sysreg.h'.
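
The corrected definition:

#define SYS_PAR_EL1_F	BIT(0)	/* was erroneously BIT(1) */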

Fixes: e8620cff9994 ("arm64: sysreg: Add some field definitions for PAR_EL1")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
Julien Thierry [Tue, 15 Oct 2019 17:25:44 +0000 (18:25 +0100)]
arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled

Preempting from IRQ-return means that the task has its PSTATE saved
on the stack, which will get restored when the task is resumed and does
the actual IRQ return.

However, enabling some CPU features requires modifying the PSTATE. This
means that, if a task was scheduled out during an IRQ-return before all
CPU features are enabled, the task might restore a PSTATE that does not
include the feature enablement changes once scheduled back in.

* Task 1:

PAN == 0 ---|                          |---------------
            |                          |<- return from IRQ, PSTATE.PAN = 0
            | <- IRQ                   |
            +--------+ <- preempt()  +--
                                     ^
                                     |
                                     reschedule Task 1, PSTATE.PAN == 1
* Init:
        --------------------+------------------------
                            ^
                            |
                            enable_cpu_features
                            set PSTATE.PAN on all CPUs

Worse than this, since PSTATE is untouched when task switching is done,
a task missing the new bits in PSTATE might affect another task, if both
do direct calls to schedule() (outside of IRQ/exception contexts).

Fix this by preventing preemption on IRQ-return until features are
enabled on all CPUs.

This way the only PSTATE values that are saved on the stack are from
synchronous exceptions. These are expected to be fatal this early; the
exception is BRK for WARN_ON(), but as this uses do_debug_exception(),
which keeps IRQs masked, it shouldn't call schedule().
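
The gate is a C helper called from the IRQ-return path in entry.S
(close to the actual change; arm64_const_caps_ready is the existing
cpufeature static key):

asmlinkage void __sched arm64_preempt_schedule_irq(void)
{
	lockdep_assert_irqs_disabled();

	/*
	 * Preempting a task from an IRQ means we leave copies of PSTATE
	 * on the stack. cpufeature's enable calls may modify PSTATE,
	 * but resuming one of these preempted tasks would undo those
	 * changes.
	 *
	 * Only allow a task to be preempted once cpufeatures have been
	 * enabled on all CPUs.
	 */
	if (static_branch_likely(&arm64_const_caps_ready))
		preempt_schedule_irq();
}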

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[james: Replaced a really cool hack, with an even simpler static key in C.
 expanded commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: mm: Fix unused variable warning in zone_sizes_init
Nathan Chancellor [Wed, 16 Oct 2019 14:47:14 +0000 (07:47 -0700)]
arm64: mm: Fix unused variable warning in zone_sizes_init

When building arm64 allnoconfig, CONFIG_ZONE_DMA and CONFIG_ZONE_DMA32
get disabled so there is a warning about max_dma being unused.

../arch/arm64/mm/init.c:215:16: warning: unused variable 'max_dma'
[-Wunused-variable]
        unsigned long max_dma = min;
                      ^
1 warning generated.

Add __maybe_unused to make this clear to the compiler.

Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64/mm: Poison initmem while freeing with free_reserved_area()
Anshuman Khandual [Fri, 4 Oct 2019 04:23:58 +0000 (09:53 +0530)]
arm64/mm: Poison initmem while freeing with free_reserved_area()

The platform implementation of free_initmem() should poison the memory
while freeing it up. Hence pass POISON_FREE_INITMEM when calling into
free_reserved_area(). The same is done in the generic fallback for
free_initmem() and on some other platforms overriding it.
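
The change boils down to passing the poison pattern in the existing
call (sketch of the arm64 free_initmem()):

free_reserved_area(lm_alias(__init_begin), lm_alias(__init_end),
		   POISON_FREE_INITMEM, "unused kernel");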

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: use generic free_initrd_mem()
Mike Rapoport [Sat, 28 Sep 2019 08:02:26 +0000 (11:02 +0300)]
arm64: use generic free_initrd_mem()

arm64 calls memblock_free() for the initrd area in its implementation of
free_initrd_mem(), but this call has no actual effect that late in the boot
process. By the time initrd is freed, all the reserved memory is managed by
the page allocator and the memblock.reserved is unused, so the only purpose
of the memblock_free() call is to keep track of initrd memory for debugging
and accounting.

Without the memblock_free() call the only difference between arm64 and the
generic versions of free_initrd_mem() is the memory poisoning.

Move the memblock_free() call to the generic code, enable it there
for the architectures that define ARCH_KEEP_MEMBLOCK, and use the
generic implementation of free_initrd_mem() on arm64.

Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> #arm64
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Document ICC_CTLR_EL3.PMHE setting requirements
Marc Zyngier [Wed, 2 Oct 2019 09:06:13 +0000 (10:06 +0100)]
arm64: Document ICC_CTLR_EL3.PMHE setting requirements

It goes without saying, but better saying it: the kernel expects
ICC_CTLR_EL3.PMHE to have the same value across all CPUs, and
for that setting not to change during the lifetime of the kernel.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear
Marc Zyngier [Wed, 2 Oct 2019 09:06:12 +0000 (10:06 +0100)]
arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear

The GICv3 architecture specification is incredibly misleading when it
comes to PMR and the requirement for a DSB. It turns out that this DSB
is only required if the CPU interface sends an Upstream Control
message to the redistributor in order to update the RD's view of PMR.

This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't
the case in Linux. It can still be set from EL3, so some special care
is required. But the upshot is that in the (hopefully large) majority
of cases, we can drop the DSB altogether.

This relies on a new static key being set if the boot CPU has PMHE
set. The drawback is that this static key has to be exported to
modules.
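
The DSB now hides behind the static key, roughly as in the pmr_sync()
helper:

#define pmr_sync()						\
	do {							\
		extern struct static_key_false gic_pmr_sync;	\
								\
		if (static_branch_unlikely(&gic_pmr_sync))	\
			dsb(sy);				\
	} while (0)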

Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: hibernate: check pgd table allocation
Pavel Tatashin [Mon, 14 Oct 2019 14:48:24 +0000 (10:48 -0400)]
arm64: hibernate: check pgd table allocation

There is a bug in create_safe_exec_page(): when the page table is
allocated, it is not checked that the allocation succeeded, yet the
result is dereferenced in pgd_none(READ_ONCE(*pgdp)). Check that the
allocation was successful.
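
Sketch of the fix in create_safe_exec_page():

pgd_t *trans_pgd = allocator(mask);

if (!trans_pgd) {
	rc = -ENOMEM;
	goto out;
}

pgdp = pgd_offset_raw(trans_pgd, dst_addr);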

Fixes: 82869ac57b5d ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
Julien Grall [Mon, 14 Oct 2019 10:21:13 +0000 (11:21 +0100)]
arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled

If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when
read by userspace, despite this being required by the architecture.
this is theoretically a change in ABI, userspace will first check for
the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field
before probing the ID_AA64ZFR0_EL1 register. Given that these are
reported correctly for this configuration, we can safely tighten up the
current behaviour.

Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.
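
A sketch of the cpufeature change: the ID_AA64ZFR0_EL1 fields are only
made visible when SVE is configured in, otherwise the register reads as
zero (one field shown; the rest follow the same pattern):

static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
	/* ... remaining ID_AA64ZFR0_EL1 fields ... */
	ARM64_FTR_END,
};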

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
Signed-off-by: Will Deacon <will@kernel.org>
4 years agomm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'
Nicolas Saenz Julienne [Wed, 11 Sep 2019 18:25:46 +0000 (20:25 +0200)]
mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'

The usage of these zones has evolved over time and the comments were
outdated. This joins the ZONE_DMA and ZONE_DMA32 explanations and gives
up-to-date examples of how they are used on different architectures.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: use both ZONE_DMA and ZONE_DMA32
Nicolas Saenz Julienne [Wed, 11 Sep 2019 18:25:45 +0000 (20:25 +0200)]
arm64: use both ZONE_DMA and ZONE_DMA32

So far all arm64 devices have supported 32 bit DMA masks for their
peripherals. This is not true anymore for the Raspberry Pi 4 as most of
it's peripherals can only address the first GB of memory on a total of
up to 4 GB.

This goes against ZONE_DMA32's intent, as it's expected for ZONE_DMA32
to be addressable with a 32 bit mask. So it was decided to re-introduce
ZONE_DMA in arm64.

ZONE_DMA will contain the lower 1G of memory, which is currently the
memory area addressable by any peripheral on an arm64 device.
ZONE_DMA32 will contain the rest of the 32 bit addressable memory.
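
The resulting zone_sizes_init() looks roughly like this (a sketch;
arm64_dma32_phys_limit comes from the rename patch below):

static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};

#ifdef CONFIG_ZONE_DMA
	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
#endif
	max_zone_pfns[ZONE_NORMAL] = max;

	free_area_init_nodes(max_zone_pfns);
}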

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: rename variables used to calculate ZONE_DMA32's size
Nicolas Saenz Julienne [Wed, 11 Sep 2019 18:25:44 +0000 (20:25 +0200)]
arm64: rename variables used to calculate ZONE_DMA32's size

Let the name indicate that they are used to calculate ZONE_DMA32's size
as opposed to ZONE_DMA.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoarm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()
Nicolas Saenz Julienne [Wed, 11 Sep 2019 18:25:43 +0000 (20:25 +0200)]
arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()

By the time we call zone_sizes_init(), arm64_dma_phys_limit already
contains the result of max_zone_dma_phys(). Use the variable instead
of calling the function directly to save some precious CPU time.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agofirmware: arm_sdei: use common SMCCC_CONDUIT_*
Mark Rutland [Fri, 9 Aug 2019 13:22:44 +0000 (14:22 +0100)]
firmware: arm_sdei: use common SMCCC_CONDUIT_*

Now that we have common definitions for SMCCC conduits, move the SDEI
code over to them, and remove the SDEI-specific definitions.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>