
uclinux-h8/linux.git
Merge branch 'for-next/perf' into for-next/core
Will Deacon [Fri, 12 Feb 2021 15:09:34 +0000 (15:09 +0000)]
Merge branch 'for-next/perf' into for-next/core

Perf and PMU updates including support for Cortex-A78 and the v8.3 SPE
extensions.

* for-next/perf:
  drivers/perf: Replace spin_lock_irqsave to spin_lock
  dt-bindings: arm: add Cortex-A78 binding
  arm64: perf: add support for Cortex-A78
  arm64: perf: Constify static attribute_group structs
  drivers/perf: Prevent forced unbinding of ARM_DMC620_PMU drivers
  perf/arm-cmn: Move IRQs when migrating context
  perf/arm-cmn: Fix PMU instance naming
  perf: Constify static struct attribute_group
  perf: hisi: Constify static struct attribute_group
  perf/imx_ddr: Constify static struct attribute_group
  perf: qcom: Constify static struct attribute_group
  drivers/perf: Add support for ARMv8.3-SPE

Merge branch 'for-next/misc' into for-next/core
Will Deacon [Fri, 12 Feb 2021 15:07:34 +0000 (15:07 +0000)]
Merge branch 'for-next/misc' into for-next/core

Miscellaneous arm64 changes for 5.12.

* for-next/misc:
  arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+
  arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset
  arm64: vmlinux.ld.S: add assertion for reserved_pg_dir offset
  arm64/ptdump:display the Linear Mapping start marker
  arm64: ptrace: Fix missing return in hw breakpoint code
  KVM: arm64: Move __hyp_set_vectors out of .hyp.text
  arm64: Include linux/io.h in mm/mmap.c
  arm64: cacheflush: Remove stale comment
  arm64: mm: Remove unused header file
  arm64/sparsemem: reduce SECTION_SIZE_BITS
  arm64/mm: Add warning for outside range requests in vmemmap_populate()
  arm64: Drop workaround for broken 'S' constraint with GCC 4.9

Merge branch 'for-next/kexec' into for-next/core
Will Deacon [Fri, 12 Feb 2021 15:03:53 +0000 (15:03 +0000)]
Merge branch 'for-next/kexec' into for-next/core

Significant steps along the road to leaving the MMU enabled during kexec
relocation.

* for-next/kexec:
  arm64: hibernate: add __force attribute to gfp_t casting
  arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp
  arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations
  arm64: kexec: call kexec_image_info only once
  arm64: kexec: move relocation function setup
  arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines
  arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
  arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
  arm64: trans_pgd: pass allocator trans_pgd_create_copy
  arm64: trans_pgd: make trans_pgd_map_page generic
  arm64: hibernate: move page handling function to new trans_pgd.c
  arm64: hibernate: variable pudp is used instead of pd4dp
  arm64: kexec: make dtb_mem always enabled

Merge branch 'for-next/faultaround' into for-next/core
Will Deacon [Fri, 12 Feb 2021 14:59:10 +0000 (14:59 +0000)]
Merge branch 'for-next/faultaround' into for-next/core

Initialise prefaulted PTEs as 'old' for arm64 when hardware access-flag
updates are supported, which drastically improves vmscan performance.

* for-next/faultaround:
  mm: filemap: Fix microblaze build failure with 'mmu_defconfig'
  mm/nommu: Fix return type of filemap_map_pages()
  mm: Mark anonymous struct field of 'struct vm_fault' as 'const'
  mm: Use static initialisers for immutable fields of 'struct vm_fault'
  mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()
  mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT
  mm: Move immutable fields of 'struct vm_fault' into anonymous struct
  arm64: mm: Implement arch_wants_old_prefaulted_pte()
  mm: Allow architectures to request 'old' entries when prefaulting
  mm: Cleanup faultaround and finish_fault() codepaths

Merge branch 'for-next/errata' into for-next/core
Will Deacon [Fri, 12 Feb 2021 14:57:13 +0000 (14:57 +0000)]
Merge branch 'for-next/errata' into for-next/core

Rework of the workaround for Cortex-A76 erratum 1463225 to fit in better
with the ongoing exception entry cleanups, and changes to the detection
code for Cortex-A55 erratum 1024718, since that erratum applies to all
revisions of the silicon.

* for-next/errata:
  arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround
  arm64: Extend workaround for erratum 1024718 to all versions of Cortex-A55

Merge branch 'for-next/crypto' into for-next/core
Will Deacon [Fri, 12 Feb 2021 14:54:55 +0000 (14:54 +0000)]
Merge branch 'for-next/crypto' into for-next/core

Introduce a new macro to allow yielding the vector unit if preemption
is required. The initial users of this are being merged via the crypto
tree for 5.12.

* for-next/crypto:
  arm64: assembler: add cond_yield macro

Merge branch 'for-next/cpufeature' into for-next/core
Will Deacon [Fri, 12 Feb 2021 14:53:19 +0000 (14:53 +0000)]
Merge branch 'for-next/cpufeature' into for-next/core

Support for overriding CPU ID register fields on the command-line, which
allows us to disable certain features which the kernel would otherwise
use unconditionally when detected.

* for-next/cpufeature: (22 commits)
  arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line
  arm64: Defer enabling pointer authentication on boot core
  arm64: cpufeatures: Allow disabling of BTI from the command-line
  arm64: Move "nokaslr" over to the early cpufeature infrastructure
  KVM: arm64: Document HVC_VHE_RESTART stub hypercall
  arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0
  arm64: Add an aliasing facility for the idreg override
  arm64: Honor VHE being disabled from the command-line
  arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line
  arm64: cpufeature: Add an early command-line cpufeature override facility
  arm64: Extract early FDT mapping from kaslr_early_init()
  arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()
  arm64: cpufeature: Add global feature override facility
  arm64: Move SCTLR_EL1 initialisation to EL-agnostic code
  arm64: Simplify init_el2_state to be non-VHE only
  arm64: Move VHE-specific SPE setup to mutate_to_vhe()
  arm64: Drop early setting of MDSCR_EL2.TPMS
  arm64: Initialise as nVHE before switching to VHE
  arm64: Provide an 'upgrade to VHE' stub hypercall
  arm64: Turn the MMU-on sequence into a macro
  ...

Merge branch 'for-next/cosmetic' into for-next/core
Will Deacon [Fri, 12 Feb 2021 14:46:16 +0000 (14:46 +0000)]
Merge branch 'for-next/cosmetic' into for-next/core

Cosmetic changes to tidy up stale comments and fix inconsistent
whitespace. No functional changes here!

* for-next/cosmetic:
  mm/arm64: Correct obsolete comment in do_page_fault()
  arm64: improve whitespace

drivers/perf: Replace spin_lock_irqsave to spin_lock
Qi Liu [Tue, 9 Feb 2021 09:42:22 +0000 (17:42 +0800)]
drivers/perf: Replace spin_lock_irqsave to spin_lock

There is no need to use spin_lock_irqsave() in the context of a hard IRQ,
so replace it with spin_lock().
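
A rough C sketch of the pattern being changed (the handler and structure
names below are made up for illustration; the message does not name the
drivers touched):

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    struct pmu_dev {
            spinlock_t lock;
            /* ... */
    };

    static irqreturn_t pmu_irq_handler(int irq, void *data)
    {
            struct pmu_dev *dev = data;

            /* Before: spin_lock_irqsave(&dev->lock, flags); */
            spin_lock(&dev->lock);  /* IRQs are already disabled in hardirq context */
            /* ... handle the PMU interrupt ... */
            spin_unlock(&dev->lock);

            return IRQ_HANDLED;
    }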

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1612863742-1551-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
mm: filemap: Fix microblaze build failure with 'mmu_defconfig'
Will Deacon [Wed, 10 Feb 2021 11:15:11 +0000 (11:15 +0000)]
mm: filemap: Fix microblaze build failure with 'mmu_defconfig'

Commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault()
codepaths") added a call to 'update_mmu_cache()' in mm/filemap.c, which
breaks the build for microblaze:

  | mm/filemap.c: In function 'filemap_map_pages':
  | mm/filemap.c:3153:3: error: implicit declaration of function 'update_mmu_cache'; did you mean 'update_mmu_tlb'?

Include asm/tlbflush.h in mm/filemap.c to make sure that the function
(or indeed, macro) is available.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20210209202449.GA104837@roeck-us.net
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+
Nathan Chancellor [Tue, 9 Feb 2021 00:57:20 +0000 (17:57 -0700)]
arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+

Similar to commit 28187dc8ebd9 ("ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN
depends on !LD_IS_LLD"), ld.lld prior to 13.0.0 does not properly
support aarch64 big endian, leading to the following build error when
CONFIG_CPU_BIG_ENDIAN is selected:

ld.lld: error: unknown emulation: aarch64linuxb

This has been resolved in LLVM 13. To avoid errors like this, only allow
CONFIG_CPU_BIG_ENDIAN to be selected if using ld.bfd or ld.lld 13.0.0
and newer.

While we are here, note that the indentation of this symbol has used spaces
since its introduction in commit a872013d6d03 ("arm64: kconfig: allow
CPU_BIG_ENDIAN to be selected"). Change it to tabs to be consistent with
the kernel coding style.

Link: https://github.com/ClangBuiltLinux/linux/issues/380
Link: https://github.com/ClangBuiltLinux/linux/issues/1288
Link: https://github.com/llvm/llvm-project/commit/7605a9a009b5fa3bdac07e3131c8d82f6d08feb7
Link: https://github.com/llvm/llvm-project/commit/eea34aae2e74e9b6fbdd5b95f479bc7f397bf387
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210209005719.803608-1-nathan@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line
Marc Zyngier [Mon, 8 Feb 2021 09:57:31 +0000 (09:57 +0000)]
arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line

In order to be able to disable Pointer Authentication at runtime,
whether it is for testing purposes or to work around HW issues,
let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA}
fields.

This is further mapped onto the arm64.nopauth command-line alias.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-23-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Defer enabling pointer authentication on boot core
Srinivas Ramana [Mon, 8 Feb 2021 09:57:30 +0000 (09:57 +0000)]
arm64: Defer enabling pointer authentication on boot core

Defer enabling pointer authentication on the boot core until after it
is required to be enabled by the cpufeature framework.
This will help control the feature dynamically
with a boot parameter.

Signed-off-by: Ajay Patil <pajay@qti.qualcomm.com>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1610152163-16554-2-git-send-email-sramana@codeaurora.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-22-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpufeatures: Allow disabling of BTI from the command-line
Marc Zyngier [Mon, 8 Feb 2021 09:57:29 +0000 (09:57 +0000)]
arm64: cpufeatures: Allow disabling of BTI from the command-line

In order to be able to disable BTI at runtime, whether it is
for testing purposes or to work around HW issues, let's add
support for overriding the ID_AA64PFR1_EL1.BTI field.

This is further mapped onto the arm64.nobti command-line alias.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-21-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move "nokaslr" over to the early cpufeature infrastructure
Marc Zyngier [Mon, 8 Feb 2021 09:57:28 +0000 (09:57 +0000)]
arm64: Move "nokaslr" over to the early cpufeature infrastructure

Given that the early cpufeature infrastructure has borrowed quite
a lot of code from the kaslr implementation, let's reimplement
the matching of the "nokaslr" option with it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-20-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
KVM: arm64: Document HVC_VHE_RESTART stub hypercall
Marc Zyngier [Mon, 8 Feb 2021 09:57:27 +0000 (09:57 +0000)]
KVM: arm64: Document HVC_VHE_RESTART stub hypercall

For completeness, let's document the HVC_VHE_RESTART stub.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-19-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0
Marc Zyngier [Mon, 8 Feb 2021 09:57:26 +0000 (09:57 +0000)]
arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0

Admittedly, passing id_aa64mmfr1.vh=0 on the command-line isn't
that easy to understand, and it is likely that users would much
prefer to write "kvm-arm.mode=nvhe", or "...=protected".

So here you go. This has the added advantage that we can now
always honor the "kvm-arm.mode=protected" option, even when
booting on a VHE system.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-18-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Add an aliasing facility for the idreg override
Marc Zyngier [Mon, 8 Feb 2021 09:57:25 +0000 (09:57 +0000)]
arm64: Add an aliasing facility for the idreg override

In order to map the override of idregs to options that a user
can easily understand, let's introduce yet another option
array, which maps an option to the corresponding idreg options.
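
A hedged sketch of what such an alias table could look like; the struct and
variable names are illustrative, and only the kvm-arm.mode mapping is stated
explicitly in these messages, the other expansions shown are assumptions:

    struct reg_alias {
            const char *alias;      /* what the user writes on the command line   */
            const char *feature;    /* the idreg override option(s) it expands to */
    };

    static const struct reg_alias aliases[] = {
            { "kvm-arm.mode=nvhe",      "id_aa64mmfr1.vh=0" },
            { "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
            { "arm64.nobti",            "id_aa64pfr1.bt=0"  },  /* illustrative */
            { "arm64.nopauth",          "id_aa64isar1.api=0 id_aa64isar1.apa=0" }, /* illustrative */
    };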

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-17-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Honor VHE being disabled from the command-line
Marc Zyngier [Mon, 8 Feb 2021 09:57:24 +0000 (09:57 +0000)]
arm64: Honor VHE being disabled from the command-line

Finally we can check whether VHE is disabled on the command line,
and not enable it if that's the user's wish.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-16-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line
Marc Zyngier [Mon, 8 Feb 2021 09:57:23 +0000 (09:57 +0000)]
arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line

As we want to be able to disable VHE at runtime, let's match
"id_aa64mmfr1.vh=" from the command line as an override.
This doesn't have much effect yet as our boot code doesn't look
at the cpufeature, but only at the HW registers.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-15-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpufeature: Add an early command-line cpufeature override facility
Marc Zyngier [Mon, 8 Feb 2021 09:57:22 +0000 (09:57 +0000)]
arm64: cpufeature: Add an early command-line cpufeature override facility

In order to be able to override CPU features at boot time,
let's add a command line parser that matches options of the
form "cpureg.feature=value", and store the corresponding
value into the override val/mask pair.

No features are currently defined, so no expected change in
functionality.
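
A minimal sketch of such a matcher, assuming a val/mask pair per register;
the struct and helper names here are illustrative, not the kernel's actual
ones:

    #include <linux/bits.h>
    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct ftr_override {
            u64 val;        /* override value, already shifted into place */
            u64 mask;       /* which bits of the register are overridden  */
    };

    /* Look for "opt" (e.g. "id_aa64mmfr1.vh=") on the command line and record
     * the value that follows it for the field at [shift, shift + width). */
    static void match_option(const char *cmdline, const char *opt,
                             int shift, int width, struct ftr_override *ovr)
    {
            const char *p = strstr(cmdline, opt);
            u64 mask = GENMASK_ULL(shift + width - 1, shift);
            u64 v;

            if (!p)
                    return;

            v = simple_strtoull(p + strlen(opt), NULL, 0);
            ovr->val  = (ovr->val & ~mask) | ((v << shift) & mask);
            ovr->mask |= mask;
    }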

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-14-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Extract early FDT mapping from kaslr_early_init()
Marc Zyngier [Mon, 8 Feb 2021 09:57:21 +0000 (09:57 +0000)]
arm64: Extract early FDT mapping from kaslr_early_init()

As we want to parse more options very early in the kernel lifetime,
let's always map the FDT early. This is achieved by moving that
code out of kaslr_early_init().

No functional change expected.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-13-maz@kernel.org
[will: Ensure KASAN is enabled before running C code]
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()
Marc Zyngier [Mon, 8 Feb 2021 09:57:20 +0000 (09:57 +0000)]
arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()

__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers,
which should take the feature override into account. Let's do that.

For good measure (and because we are likely to need it further
down the line), make this helper available to the rest of the
non-modular kernel.

Code that needs to know the *real* features of a CPU can still
use read_sysreg_s(), and find the bare, ugly truth.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-12-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cpufeature: Add global feature override facility
Marc Zyngier [Mon, 8 Feb 2021 09:57:19 +0000 (09:57 +0000)]
arm64: cpufeature: Add global feature override facility

Add a facility to globally override a feature, no matter what
the HW says. Yes, this sounds dangerous, but we do respect the
"safe" value for a given feature. This doesn't mean the user
doesn't need to know what they are doing.

Nothing uses this yet, so we are pretty safe. For now.
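
A sketch of the val/mask idea, assuming each overridden register carries a
pair like the one below; the naming is illustrative and the safe-value
clamping mentioned above is not shown:

    struct arm64_ftr_override {
            u64 val;        /* override values for the overridden fields */
            u64 mask;       /* which bits of the ID register to override */
    };

    /* Bits selected by the mask come from the override, the rest from HW. */
    static inline u64 apply_override(u64 hw_val, const struct arm64_ftr_override *ovr)
    {
            return (hw_val & ~ovr->mask) | (ovr->val & ovr->mask);
    }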

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-11-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move SCTLR_EL1 initialisation to EL-agnostic code
Marc Zyngier [Mon, 8 Feb 2021 09:57:18 +0000 (09:57 +0000)]
arm64: Move SCTLR_EL1 initialisation to EL-agnostic code

We can now move the initial SCTLR_EL1 setup to be used for both
EL1 and EL2 setup.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Simplify init_el2_state to be non-VHE only
Marc Zyngier [Mon, 8 Feb 2021 09:57:17 +0000 (09:57 +0000)]
arm64: Simplify init_el2_state to be non-VHE only

As init_el2_state is now nVHE only, let's simplify it and drop
the VHE setup.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move VHE-specific SPE setup to mutate_to_vhe()
Marc Zyngier [Mon, 8 Feb 2021 09:57:16 +0000 (09:57 +0000)]
arm64: Move VHE-specific SPE setup to mutate_to_vhe()

There isn't much that a VHE kernel needs on top of whatever has
been done for nVHE, so let's move the little we need to the
VHE stub (the SPE setup), and drop the init_el2_state macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-8-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Drop early setting of MDSCR_EL2.TPMS
Marc Zyngier [Mon, 8 Feb 2021 09:57:15 +0000 (09:57 +0000)]
arm64: Drop early setting of MDSCR_EL2.TPMS

When running VHE, we set MDSCR_EL2.TPMS very early on to force
the trapping of EL1 SPE accesses to EL2.

However:
- we are running with HCR_EL2.{E2H,TGE}={1,1}, meaning that there
  is no EL1 to trap from

- before entering a guest, we call kvm_arm_setup_debug(), which
  sets MDCR_EL2_TPMS in the per-vcpu shadow mdscr_el2, which gets
  applied on entry by __activate_traps_common().

The early setting of MDSCR_EL2.TPMS is therefore useless and can
be dropped.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-7-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Initialise as nVHE before switching to VHE
Marc Zyngier [Mon, 8 Feb 2021 09:57:14 +0000 (09:57 +0000)]
arm64: Initialise as nVHE before switching to VHE

As we are aiming to be able to control whether we enable VHE or
not, let's always drop down to EL1 first, and only then upgrade
to VHE if at all possible.

This means that if the kernel is booted at EL2, we always start
with an nVHE init, drop to EL1 to initialise the kernel, and
only then upgrade the kernel EL to EL2 if possible (the process
is obviously shortened for secondary CPUs).

The resume path is handled similarly to a secondary CPU boot.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org
[will: Avoid calling switch_to_vhe twice on kaslr path]
Signed-off-by: Will Deacon <will@kernel.org>
arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround
Mark Rutland [Tue, 2 Feb 2021 12:03:41 +0000 (12:03 +0000)]
arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround

The workaround for Cortex-A76 erratum 1463225 is split across the
syscall and debug handlers in separate files. This structure currently
forces us to do some redundant work for debug exceptions from EL0, is a
little difficult to follow, and gets in the way of some future rework of
the exception entry code as it requires exceptions to be unmasked late
in the syscall handling path.

To simplify things, and as a preparatory step for future rework of
exception entry, this patch moves all the workaround logic into
entry-common.c. As the debug handler only needs to run for EL1 debug
exceptions, we no longer call it for EL0 debug exceptions, and no longer
need to check user_mode(regs) as this is always false. For clarity
cortex_a76_erratum_1463225_debug_handler() is changed to return bool.

In the SVC path, the workaround is applied earlier, but this should have
no functional impact as exceptions are still masked. In the debug path
we run the fixup before explicitly disabling preemption, but we will not
attempt to preempt before returning from the exception.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210202120341.28858-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Provide an 'upgrade to VHE' stub hypercall
Marc Zyngier [Mon, 8 Feb 2021 09:57:13 +0000 (09:57 +0000)]
arm64: Provide an 'upgrade to VHE' stub hypercall

As we are about to change the way a VHE system boots, let's
provide the core helper, in the form of a stub hypercall that
enables VHE and replicates the full EL1 context at EL2, thanks
to EL1 and VHE-EL2 being extremely similar.

On exception return, the kernel carries on at EL2. Fancy!

Nothing calls this new hypercall yet, so no functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Turn the MMU-on sequence into a macro
Marc Zyngier [Mon, 8 Feb 2021 09:57:12 +0000 (09:57 +0000)]
arm64: Turn the MMU-on sequence into a macro

Turning the MMU on is a popular sport in the arm64 kernel, and
we do it more than once, or even twice. As we are about to add
even more, let's turn it into a macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Fix outdated TCR setup comment
Marc Zyngier [Mon, 8 Feb 2021 09:57:11 +0000 (09:57 +0000)]
arm64: Fix outdated TCR setup comment

The arm64 kernel has long been able to use more than 39-bit VAs.
Since day one, actually. Let's rewrite the offending comment.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Fix labels in el2_setup macros
Marc Zyngier [Mon, 8 Feb 2021 09:57:10 +0000 (09:57 +0000)]
arm64: Fix labels in el2_setup macros

If someone happens to write the following code:

b 1f
init_el2_state vhe
1:
[...]

they will be in for a long debugging session, as the label "1f"
will be resolved *inside* the init_el2_state macro instead of
after it. Not really what one expects.

Instead, rewrite the EL2 setup macros to use unambiguous labels,
thanks to the usual macro counter trick.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Extend workaround for erratum 1024718 to all versions of Cortex-A55
Suzuki K Poulose [Wed, 3 Feb 2021 23:00:57 +0000 (23:00 +0000)]
arm64: Extend workaround for erratum 1024718 to all versions of Cortex-A55

Erratum 1024718 affects Cortex-A55 r0p0 to r2p0. However, we only
apply the workaround for r0p0 - r1p0. Unfortunately this
won't be fixed in future revisions of the CPU. Thus
extend the workaround to all versions of Cortex-A55, to cover
r2p0 and any future revisions.

Cc: stable@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20210203230057.3961239-1-suzuki.poulose@arm.com
[will: Update Kconfig help text]
Signed-off-by: Will Deacon <will@kernel.org>
mm/arm64: Correct obsolete comment in do_page_fault()
Miaohe Lin [Fri, 5 Feb 2021 09:09:19 +0000 (04:09 -0500)]
mm/arm64: Correct obsolete comment in do_page_fault()

commit d8ed45c5dcd4 ("mmap locking API: use coccinelle to convert mmap_sem
rwsem call sites") converted down_read_trylock() to mmap_read_trylock(),
but it forgot to update the relevant comment.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Link: https://lore.kernel.org/r/20210205090919.63382-1-linmiaohe@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: improve whitespace
Zhiyuan Dai [Thu, 4 Feb 2021 01:43:49 +0000 (09:43 +0800)]
arm64: improve whitespace

In a few places we don't have whitespace between macro parameters,
which makes them hard to read. This patch adds whitespace to clearly
separate the parameters.

In a few places we have unnecessary whitespace around unary operators,
which is confusing. This patch removes the unnecessary whitespace.

Signed-off-by: Zhiyuan Dai <daizhiyuan@phytium.com.cn>
Link: https://lore.kernel.org/r/1612403029-5011-1-git-send-email-daizhiyuan@phytium.com.cn
Signed-off-by: Will Deacon <will@kernel.org>
arm64: assembler: add cond_yield macro
Ard Biesheuvel [Wed, 3 Feb 2021 11:36:18 +0000 (12:36 +0100)]
arm64: assembler: add cond_yield macro

Add a macro cond_yield that branches to a specified label when called if
the TIF_NEED_RESCHED flag is set and decreasing the preempt count would
make the task preemptible again, allowing a reschedule to occur. This
can be used by kernel mode SIMD code that keeps a lot of state in SIMD
registers, which would make chunking the input in order to perform the
cond_resched() check from C code disproportionately costly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210203113626.220151-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset
Joey Gouly [Tue, 2 Feb 2021 12:36:58 +0000 (12:36 +0000)]
arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset

Add TRAMP_SWAPPER_OFFSET and use that instead of hardcoding
the offset between swapper_pg_dir and tramp_pg_dir.

Then use TRAMP_SWAPPER_OFFSET to assert that the offset is
correct at link time.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210202123658.22308-3-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: vmlinux.ld.S: add assertion for reserved_pg_dir offset
Joey Gouly [Tue, 2 Feb 2021 12:36:57 +0000 (12:36 +0000)]
arm64: vmlinux.ld.S: add assertion for reserved_pg_dir offset

Add RESERVED_SWAPPER_OFFSET and use that instead of hardcoding
the offset between swapper_pg_dir and reserved_pg_dir.

Then use RESERVED_SWAPPER_OFFSET to assert that the offset is
correct at link time.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210202123658.22308-2-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
dt-bindings: arm: add Cortex-A78 binding
Seiya Wang [Wed, 3 Feb 2021 05:53:48 +0000 (13:53 +0800)]
dt-bindings: arm: add Cortex-A78 binding

Add compatible for Cortex-A78 PMU

Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210203055348.4935-3-seiya.wang@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: perf: add support for Cortex-A78
Seiya Wang [Wed, 3 Feb 2021 05:53:47 +0000 (13:53 +0800)]
arm64: perf: add support for Cortex-A78

Add support for Cortex-A78 using generic PMUv3 for now.

Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210203055348.4935-2-seiya.wang@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64/ptdump:display the Linear Mapping start marker
Hailong Liu [Tue, 2 Feb 2021 15:07:49 +0000 (23:07 +0800)]
arm64/ptdump:display the Linear Mapping start marker

The current /sys/kernel/debug/kernel_page_tables does not display the
*Linear Mapping start* marker on arm64, which I think should be paired
with the *Linear Mapping end* marker.

Since *Linear Mapping start* is the first marker, initialise 'level'
to -1 in order to display it.

Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Link: https://lore.kernel.org/r/20210202150749.10104-1-liuhailongg6@163.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: ptrace: Fix missing return in hw breakpoint code
Keno Fischer [Tue, 2 Feb 2021 00:21:09 +0000 (19:21 -0500)]
arm64: ptrace: Fix missing return in hw breakpoint code

When delivering a hw-breakpoint SIGTRAP to a compat task via ptrace, the
lack of a 'return' statement means we fallthrough to the native case,
which differs in its handling of 'si_errno'.

Although this looks to be harmless because the subsequent signal is
effectively ignored, it's confusing and unintentional, so add the
missing 'return'.

Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Link: https://lore.kernel.org/r/20210202002109.GA624440@juliacomputing.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: perf: Constify static attribute_group structs
Rikard Falkeborn [Sun, 31 Jan 2021 14:36:15 +0000 (15:36 +0100)]
arm64: perf: Constify static attribute_group structs

The only usage of these is to put their addresses in an array of
pointers to const attribute_group structs. Make them const to allow the
compiler to put them in read-only memory.
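
The change pattern, roughly (the names below are illustrative, not taken
from a specific driver):

    static struct attribute *pmu_format_attrs[] = {
            /* ... format attributes ... */
            NULL,
    };

    /* Now const: only ever referenced via the const-pointer array below. */
    static const struct attribute_group pmu_format_group = {
            .name  = "format",
            .attrs = pmu_format_attrs,
    };

    static const struct attribute_group *pmu_attr_groups[] = {
            &pmu_format_group,
            NULL,
    };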

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
drivers/perf: Prevent forced unbinding of ARM_DMC620_PMU drivers
Qi Liu [Tue, 2 Feb 2021 07:58:06 +0000 (15:58 +0800)]
drivers/perf: Prevent forced unbinding of ARM_DMC620_PMU drivers

Set "suppress_bind_attrs" to true, so that bind/unbind can be
disabled via sysfs and prevent unbinding ARM_DMC620_PMU drivers
during perf sampling.
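
A sketch of where the flag lives; the probe/remove callbacks and the exact
driver name here are illustrative assumptions:

    static struct platform_driver dmc620_pmu_driver = {
            .driver = {
                    .name                = "arm-dmc620-pmu",
                    .suppress_bind_attrs = true,    /* no bind/unbind via sysfs */
            },
            .probe  = dmc620_pmu_probe,
            .remove = dmc620_pmu_remove,
    };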

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1612252686-50329-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: hibernate: add __force attribute to gfp_t casting
Pavel Tatashin [Mon, 1 Feb 2021 15:03:06 +0000 (10:03 -0500)]
arm64: hibernate: add __force attribute to gfp_t casting

Two new warnings are reported by sparse:

"sparse warnings: (new ones prefixed by >>)"
>> arch/arm64/kernel/hibernate.c:181:39: sparse: sparse: cast to
   restricted gfp_t
>> arch/arm64/kernel/hibernate.c:202:44: sparse: sparse: cast from
   restricted gfp_t

gfp_t has __bitwise type attribute and requires __force added to casting
in order to avoid these warnings.
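
The pattern sparse objects to looks roughly like this; gfp_t is a __bitwise
type, so converting it to and from a plain integer needs an explicit __force.
This is a sketch (the trans_alloc helper name is illustrative), not the exact
hibernate.c lines:

    /* GFP flags passed through a void * allocator argument and back. */
    static void *trans_alloc(void *arg)
    {
            gfp_t gfp = (__force gfp_t)(unsigned long)arg;  /* cast back to gfp_t */

            return (void *)get_zeroed_page(gfp);
    }

    /* ... caller passes the flags in: */
    void *page = trans_alloc((void *)(__force unsigned long)GFP_ATOMIC);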

Fixes: 50f53fb72181 ("arm64: trans_pgd: make trans_pgd_map_page generic")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210201150306.54099-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
perf/arm-cmn: Move IRQs when migrating context
Robin Murphy [Thu, 28 Jan 2021 13:12:44 +0000 (13:12 +0000)]
perf/arm-cmn: Move IRQs when migrating context

If we migrate the PMU context to another CPU, we need to remember to
retarget the IRQs as well.

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/e080640aea4ed8dfa870b8549dfb31221803eb6b.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
perf/arm-cmn: Fix PMU instance naming
Robin Murphy [Thu, 28 Jan 2021 13:12:43 +0000 (13:12 +0000)]
perf/arm-cmn: Fix PMU instance naming

Although it's neat to avoid the suffix for the typical case of a
single PMU, it means systems with multiple CMN instances end up with
inconsistent naming. I think it also breaks perf tool's "uncore alias"
logic if the common instance prefix is also the full name of one.

Avoid any surprises by not trying to be clever and simply numbering
every instance, even when it might technically prove redundant.

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/649a2281233f193d59240b13ed91b57337c77b32.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
KVM: arm64: Move __hyp_set_vectors out of .hyp.text
Quentin Perret [Thu, 28 Jan 2021 17:38:50 +0000 (17:38 +0000)]
KVM: arm64: Move __hyp_set_vectors out of .hyp.text

The .hyp.text section is supposed to be reserved for the nVHE EL2 code.
However, there is currently one occurrence of EL1 executing code located
in .hyp.text when calling __hyp_{re}set_vectors(), which happen to sit
next to the EL2 stub vectors. While not a problem yet, such patterns
will cause issues when removing the host kernel from the TCB, so a
cleaner split would be preferable.

Fix this by delimiting the end of the .hyp.text section in hyp-stub.S.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210128173850.2478161-1-qperret@google.com
Signed-off-by: Will Deacon <will@kernel.org>
mm/nommu: Fix return type of filemap_map_pages()
Geert Uytterhoeven [Thu, 28 Jan 2021 10:06:26 +0000 (11:06 +0100)]
mm/nommu: Fix return type of filemap_map_pages()

If CONFIG_MMU is not set (e.g. m68k/m5272c3_defconfig):

    mm/nommu.c:1671:6: error: conflicting types for 'filemap_map_pages'
     1671 | void filemap_map_pages(struct vm_fault *vmf,
          |      ^~~~~~~~~~~~~~~~~
    In file included from mm/nommu.c:20:
    ./include/linux/mm.h:2578:19: note: previous declaration of 'filemap_map_pages' was here
     2578 | extern vm_fault_t filemap_map_pages(struct vm_fault *vmf,
          |                   ^~~~~~~~~~~~~~~~~

The signature of filemap_map_pages() was changed, but the nommu
implementation wasn't updated.
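
For reference, the fixed nommu stub simply mirrors the prototype quoted in
the error above; a sketch of what it looks like:

    vm_fault_t filemap_map_pages(struct vm_fault *vmf,
                                 pgoff_t start_pgoff, pgoff_t end_pgoff)
    {
            BUG();
            return 0;
    }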

Reported-by: noreply@ellerman.id.au
Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/r/20210128100626.2257638-1-geert@linux-m68k.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp
Pavel Tatashin [Mon, 25 Jan 2021 19:19:17 +0000 (14:19 -0500)]
arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp

x0 will contain the only argument to arm64_relocate_new_kernel; don't
use it as a temp. Reassign registers to free up x0 so we won't need
to copy the argument, and can use it at the beginning and at the end of
the function.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-13-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations
Pavel Tatashin [Mon, 25 Jan 2021 19:19:16 +0000 (14:19 -0500)]
arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations

In preparation for bigger changes to arm64_relocate_new_kernel that would
enable this function to do an MMU-backed memory copy, do a few clean-ups and
optimizations. These include:

1. Call raw_dcache_line_size() only when relocation is actually going to
   happen, i.e. a kdump-type kexec does not need it.

2. copy_page(dest, src, tmps...) increments dest and src by PAGE_SIZE, so
   there is no need to store dest prior to calling copy_page and increment
   it after. Also, src is not used after the copy, so no need to preserve
   it either.

3. For consistency, put a comment on the same line as an instruction when
   it describes the instruction itself.

4. Some comment corrections

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: kexec: call kexec_image_info only once
Pavel Tatashin [Mon, 25 Jan 2021 19:19:15 +0000 (14:19 -0500)]
arm64: kexec: call kexec_image_info only once

Currently, kexec_image_info() is called at load time and again
right before the kernel is kexec'ed. There is no need to do both.
So, call it only once, when the segments are loaded and the physical
location of the page with the copy of arm64_relocate_new_kernel is known.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-11-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: kexec: move relocation function setup
Pavel Tatashin [Mon, 25 Jan 2021 19:19:14 +0000 (14:19 -0500)]
arm64: kexec: move relocation function setup

Currently, the kernel relocation function is configured in machine_kexec()
at the time of kexec reboot, using control_code_page.

This operation, however, is more logically done during kexec_load,
and can thus be removed from reboot time. Move the setup of this function
to the newly added machine_kexec_post_load().

Because once the MMU is enabled the kexec control page will contain not
just the relocation code but also the vector table, add a pointer to the
actual function within this page: arch.kern_reloc. Currently it equals the
beginning of the page; we will add offsets later, when the vector table is
added.
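
A simplified sketch of the idea (cache maintenance and error handling are
omitted; this is an assumption about the shape of the change, not the exact
patch):

    int machine_kexec_post_load(struct kimage *kimage)
    {
            void *reloc_code = page_to_virt(kimage->control_code_page);

            memcpy(reloc_code, arm64_relocate_new_kernel,
                   arm64_relocate_new_kernel_size);
            /* For now the function starts at the beginning of the page;
             * offsets will be added once the vector table moves in too. */
            kimage->arch.kern_reloc = __pa(reloc_code);

            return 0;
    }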

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-10-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines
James Morse [Mon, 25 Jan 2021 19:19:13 +0000 (14:19 -0500)]
arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines

To resume from hibernate, the contents of memory are restored from
the swap image. This may overwrite any page, including the running
kernel and its page tables.

Hibernate copies the code it uses to do the restore into a single
page that it knows won't be overwritten, and maps it with page tables
built from pages that won't be overwritten.

Today the address it uses for this mapping is arbitrary, but to allow
kexec to reuse this code, it needs to be idmapped. To idmap the page
we must avoid the kernel helpers that have VA_BITS baked in.

Convert create_single_mapping() to take a single PA, and idmap it.
The page tables are built in the reverse order to normal using
pfn_pte() to stir in any bits between 52:48. T0SZ is always increased
to cover 48bits, or 52 if the copy code has bits 52:48 in its PA.

Signed-off-by: James Morse <james.morse@arm.com>
[Adopted the original patch from James to trans_pgd interface, so it can be
commonly used by both Kexec and Hibernate. Some minor clean-ups.]

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.morse@arm.com/
Link: https://lore.kernel.org/r/20210125191923.1060122-9-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
James Morse [Mon, 25 Jan 2021 19:19:12 +0000 (14:19 -0500)]
arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()

Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz()
can check for platforms that need to do this using
__cpu_uses_extended_idmap() before doing its work.

The idmap is only built with enough levels, (and T0SZ bits) to map
its single page.

To allow hibernate, and then kexec to idmap their single page copy
routines, __cpu_set_tcr_t0sz() needs to consider additional users,
who may need a different number of levels/T0SZ-bits to the idmap.
(i.e. VA_BITS may be enough for the idmap, but not hibernate/kexec)

Always read TCR_EL1, and check whether any work needs doing for
this request. __cpu_uses_extended_idmap() remains as it is used
by KVM, whose idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra
system register read.

CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-8-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
Pavel Tatashin [Mon, 25 Jan 2021 19:19:11 +0000 (14:19 -0500)]
arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions

trans_pgd_* should be independent of the mm context because the tables
created by this code are used when there is no mm context around, as
it is between kernels. Simply replace the init_mm arguments with NULL.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: trans_pgd: pass allocator trans_pgd_create_copy
Pavel Tatashin [Mon, 25 Jan 2021 19:19:10 +0000 (14:19 -0500)]
arm64: trans_pgd: pass allocator trans_pgd_create_copy

Make trans_pgd_create_copy() and its subroutines use an allocator that
is passed as an argument.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-6-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: trans_pgd: make trans_pgd_map_page generic
Pavel Tatashin [Mon, 25 Jan 2021 19:19:09 +0000 (14:19 -0500)]
arm64: trans_pgd: make trans_pgd_map_page generic

kexec is going to use a different allocator, so make
trans_pgd_map_page() accept the allocator as an argument; kexec is
also going to use a different map protection, so pass that via an
argument as well.
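
A sketch of what passing the allocator around might look like; the struct
and member names below are illustrative assumptions, not necessarily the
exact trans_pgd.c interface:

    struct trans_pgd_info {
            void *(*trans_alloc_page)(void *arg);   /* returns a zeroed page or NULL  */
            void *trans_alloc_arg;                  /* opaque context, e.g. GFP flags */
    };

    /* Hibernate's allocator: pages that are safe to use during resume
     * (get_safe_page() is declared in <linux/suspend.h>). */
    static void *hibernate_page_alloc(void *arg)
    {
            return (void *)get_safe_page((__force gfp_t)(unsigned long)arg);
    }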

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-5-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: hibernate: move page handling function to new trans_pgd.c
Pavel Tatashin [Mon, 25 Jan 2021 19:19:08 +0000 (14:19 -0500)]
arm64: hibernate: move page handling function to new trans_pgd.c

Now that we have abstracted the required functions, move them to a new home.
Later, we will generalize these functions so that they are useful outside
of hibernation.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-4-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: hibernate: variable pudp is used instead of pd4dp
Pavel Tatashin [Mon, 25 Jan 2021 19:19:07 +0000 (14:19 -0500)]
arm64: hibernate: variable pudp is used instead of pd4dp

p4dp should be used when the p4d page is allocated.
This is not a functional issue, but it should be fixed for logical
correctness.

Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-3-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: kexec: make dtb_mem always enabled
Pavel Tatashin [Mon, 25 Jan 2021 19:19:06 +0000 (14:19 -0500)]
arm64: kexec: make dtb_mem always enabled

Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is
enabled. This adds ugly ifdefs to C files.

Always enable dtb_mem; when it is not used, it is NULL.
Change dtb_mem to phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Include linux/io.h in mm/mmap.c
Will Deacon [Wed, 27 Jan 2021 12:52:16 +0000 (12:52 +0000)]
arm64: Include linux/io.h in mm/mmap.c

Commit 507d664450f8 ("arm64: mm: Remove unused header file") removed
a bunch of apparently "unused" header inclusions from our mm/mmap.c
implementation, but in doing so introduced the following warning when
building with W=1:

>> arch/arm64/mm/mmap.c:17:5: warning: no previous prototype for 'valid_phys_addr_range' [-Wmissing-prototypes]
      17 | int valid_phys_addr_range(phys_addr_t addr, size_t size)
         |     ^~~~~~~~~~~~~~~~~~~~~
>> arch/arm64/mm/mmap.c:36:5: warning: no previous prototype for 'valid_mmap_phys_addr_range' [-Wmissing-prototypes]
      36 | int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~

Add back the linux/io.h header inclusion to pull in the missing
prototypes.

Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/202101271438.V9TmBC31-lkp@intel.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cacheflush: Remove stale comment
Shaokun Zhang [Mon, 25 Jan 2021 11:55:53 +0000 (19:55 +0800)]
arm64: cacheflush: Remove stale comment

Remove stale comment since commit a7ba121215fa ("arm64: use asm-generic/cacheflush.h")

Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611575753-36435-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Remove unused header file
Shaokun Zhang [Tue, 26 Jan 2021 12:24:44 +0000 (20:24 +0800)]
arm64: mm: Remove unused header file

Many of the included header files are never used, so let's remove them.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611663884-43329-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64/sparsemem: reduce SECTION_SIZE_BITS
Sudarshan Rajagopalan [Thu, 21 Jan 2021 05:29:13 +0000 (21:29 -0800)]
arm64/sparsemem: reduce SECTION_SIZE_BITS

memory_block_size_bytes() determines the memory hotplug granularity, i.e. the
amount of memory which can be hot added or hot removed from the kernel. The
generic value is MIN_MEMORY_BLOCK_SIZE (1UL << SECTION_SIZE_BITS) for
memory_block_size_bytes() on platforms like arm64 that do not override it.

The current SECTION_SIZE_BITS is 30, i.e. 1GB, which is large; a reduction here
increases memory hotplug granularity, thus improving its agility. A reduced
section size also reduces memory wastage in the vmemmap mapping for sections
with large memory holes. So we try to use the smallest section size possible.

A section size bits selection must follow:
(MAX_ORDER - 1 + PAGE_SHIFT) <= SECTION_SIZE_BITS

CONFIG_FORCE_MAX_ZONEORDER is always defined on arm64 and so just following it
would help achieve the smallest section size.

SECTION_SIZE_BITS = (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)

SECTION_SIZE_BITS = 22 (11 - 1 + 12) i.e 4MB   for 4K pages
SECTION_SIZE_BITS = 24 (11 - 1 + 14) i.e 16MB  for 16K pages without THP
SECTION_SIZE_BITS = 25 (12 - 1 + 14) i.e 32MB  for 16K pages with THP
SECTION_SIZE_BITS = 26 (11 - 1 + 16) i.e 64MB  for 64K pages without THP
SECTION_SIZE_BITS = 29 (14 - 1 + 16) i.e 512MB for 64K pages with THP

But there are other problems with reducing SECTION_SIZE_BITS. Reducing it by too
much would over-populate /sys/devices/system/memory/ and also consume too many
page->flags bits in the !vmemmap case. Also, the section size needs to be a
multiple of 128MB to have PMD-based vmemmap mappings with CONFIG_ARM64_4K_PAGES.

Given these constraints, let's just reduce the section size to 128MB for the 4K
and 16K base page size configs, and to 512MB for the 64K base page size config.
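
The resulting definition would look something like this (a sketch; see
arch/arm64/include/asm/sparsemem.h for the authoritative values):

    #ifdef CONFIG_ARM64_64K_PAGES
    #define SECTION_SIZE_BITS 29            /* 512MB sections */
    #else
    #define SECTION_SIZE_BITS 27            /* 128MB sections for 4K and 16K pages */
    #endif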

Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/43843c5e092bfe3ec4c41e3c8c78a7ee35b69bb0.1611206601.git.sudaraja@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
mm: Mark anonymous struct field of 'struct vm_fault' as 'const'
Will Deacon [Thu, 14 Jan 2021 15:44:09 +0000 (15:44 +0000)]
mm: Mark anonymous struct field of 'struct vm_fault' as 'const'

The fields of this struct are only ever read after being initialised, so
mark it 'const' before somebody tries to modify it again. GCC will then
complain (with an error) about modification of these fields after they
have been initialised, although LLVM currently allows them without even
a warning:

https://bugs.llvm.org/show_bug.cgi?id=48755

Hopefully, future versions of LLVM will emit a warning.
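
An abridged sketch of the resulting shape of the structure; the comments are
paraphrased and the mutable tail is elided (see <linux/mm.h> for the full
definition):

    struct vm_fault {
            const struct {
                    struct vm_area_struct *vma;     /* target VMA                   */
                    gfp_t gfp_mask;                 /* gfp mask for allocations     */
                    pgoff_t pgoff;                  /* logical page offset in vma   */
                    unsigned long address;          /* faulting virtual address     */
            };
            unsigned int flags;                     /* mutable state starts here    */
            /* ... */
    };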

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
mm: Use static initialisers for immutable fields of 'struct vm_fault'
Will Deacon [Thu, 14 Jan 2021 15:42:14 +0000 (15:42 +0000)]
mm: Use static initialisers for immutable fields of 'struct vm_fault'

In preparation for const-ifying the anonymous struct field of
'struct vm_fault', ensure that it is initialised using designated
initialisers.
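
A sketch of the pattern this enforces at the call sites; the field values
here are illustrative, not lifted from a particular caller:

    struct vm_fault vmf = {
            .vma      = vma,
            .address  = address & PAGE_MASK,
            .pgoff    = linear_page_index(vma, address),
            .gfp_mask = GFP_KERNEL,
            .flags    = flags,
    };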

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()
Will Deacon [Thu, 14 Jan 2021 15:33:49 +0000 (15:33 +0000)]
mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()

In preparation for const-ifying the anonymous struct field of
'struct vm_fault', rework __collapse_huge_page_swapin() to avoid
continuously updating vmf.address and instead populate a new
'struct vm_fault' on the stack for each page being processed.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT
Will Deacon [Thu, 14 Jan 2021 15:24:19 +0000 (15:24 +0000)]
mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT

Rather than modifying the 'address' field of the 'struct vm_fault'
passed to do_set_pte(), leave that to identify the real faulting address
and pass in the virtual address to be mapped by the new pte as a
separate argument.

This makes FAULT_FLAG_PREFAULT redundant, as a prefault entry can be
identified simply by comparing the new address parameter with the
faulting address, so remove the redundant flag at the same time.
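
With the address passed separately, the prefault case can be detected by a
simple comparison rather than a flag; an abridged sketch of the idea, not
the exact mm/memory.c code:

    void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
    {
            pte_t entry = mk_pte(page, vmf->vma->vm_page_prot);
            bool prefault = vmf->address != addr;   /* replaces FAULT_FLAG_PREFAULT */

            if (prefault && arch_wants_old_prefaulted_pte())
                    entry = pte_mkold(entry);
            /* ... dirty/write handling, rmap accounting ... */
            set_pte_at(vmf->vma->vm_mm, addr, vmf->pte, entry);
    }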

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
mm: Move immutable fields of 'struct vm_fault' into anonymous struct
Will Deacon [Wed, 20 Jan 2021 14:34:23 +0000 (14:34 +0000)]
mm: Move immutable fields of 'struct vm_fault' into anonymous struct

'struct vm_fault' contains both information about the fault being
serviced alongside mutable fields contributing to the state of the
fault-handling logic. Unfortunately, the distinction between the two is
not clear-cut, and a number of callers end up manipulating the structure
temporarily before restoring it when returning.

Try to clean this up by moving the immutable fault information into an
anonymous struct, which will later be marked as 'const'. Ideally, the
'flags' field would be part of the new structure too, but it seems as
though the ->page_mkwrite() path is not ready for this yet.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/CAHk-=whYs9XsO88iqJzN6NC=D-dp2m0oYXuOoZ=eWnvv=5OA+w@mail.gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
perf: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:47 +0000 (22:28 +0100)]
perf: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.
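
The pattern, in outline (driver and variable names here are hypothetical,
not taken from the patch):

    static const struct attribute_group foo_pmu_events_group = {
            .name  = "events",
            .attrs = foo_pmu_event_attrs,
    };

    static const struct attribute_group *foo_pmu_attr_groups[] = {
            &foo_pmu_events_group,
            NULL,
    };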

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-5-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf: hisi: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:46 +0000 (22:28 +0100)]
perf: hisi: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-4-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf/imx_ddr: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:45 +0000 (22:28 +0100)]
perf/imx_ddr: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-3-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf: qcom: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:44 +0000 (22:28 +0100)]
perf: qcom: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-2-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: Add support for ARMv8.3-SPE
Wei Li [Thu, 3 Dec 2020 14:16:09 +0000 (22:16 +0800)]
drivers/perf: Add support for ARMv8.3-SPE

Armv8.3 extends the SPE by adding:
- Alignment field in the Events packet, and filtering on this event
  using PMSEVFR_EL1.
- Support for the Scalable Vector Extension (SVE).

The main additions for SVE are:
- Recording the vector length for SVE operations in the Operation Type
  packet. It is not possible to filter on vector length.
- Incomplete predicate and empty predicate fields in the Events packet,
  and filtering on these events using PMSEVFR_EL1.

Update the pmsevfr check for the empty/partial predicate SVE and
alignment events in the SPE driver.
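
Conceptually, the filter validation ends up along these lines (helper and
field names as I recall them from the SPE driver; treat them as
assumptions):

    /* reject filter bits that are RES0 for the implemented SPE version */
    if (arm_spe_event_to_pmsevfr(event) & arm_spe_pmsevfr_res0(spe_pmu->pmsver))
            return -EOPNOTSUPP;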

Signed-off-by: Wei Li <liwei391@huawei.com>
Link: https://lore.kernel.org/r/20201203141609.14148-1-liwei391@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: mm: Implement arch_wants_old_prefaulted_pte()
Will Deacon [Tue, 24 Nov 2020 18:49:26 +0000 (18:49 +0000)]
arm64: mm: Implement arch_wants_old_prefaulted_pte()

On CPUs with hardware AF/DBM, initialising prefaulted PTEs as 'old'
improves vmscan behaviour and does not appear to introduce any overhead
elsewhere.

Implement arch_wants_old_prefaulted_pte() to return 'true' if we detect
hardware access flag support at runtime. This can be extended in future
based on MIDR matching if necessary.
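
The arm64 hook can be as simple as wiring the new helper to the existing
hardware AF check, roughly (exact form assumed):

    /* arch/arm64/include/asm/pgtable.h */
    #define arch_wants_old_prefaulted_pte   cpu_has_hw_af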

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Allow architectures to request 'old' entries when prefaulting
Will Deacon [Tue, 24 Nov 2020 18:48:26 +0000 (18:48 +0000)]
mm: Allow architectures to request 'old' entries when prefaulting

Commit 5c0a85fad949 ("mm: make faultaround produce old ptes") changed
the "faultaround" behaviour to initialise prefaulted PTEs as 'old',
since this avoids vmscan wrongly assuming that they are hot, despite
having never been explicitly accessed by userspace. The change has been
shown to benefit numerous arm64 micro-architectures (with hardware
access flag) running Android, where both application launch latency and
direct reclaim time are significantly reduced (by 10%+ and ~80%
respectively).

Unfortunately, commit 315d09bf30c2 ("Revert "mm: make faultaround
produce old ptes"") reverted the change due to it being identified as
the cause of a ~6% regression in unixbench on x86. Experiments on a
variety of recent arm64 micro-architectures indicate that unixbench is
not affected by the original commit, which appears to yield a 0-1%
performance improvement.

Since one size does not fit all for the initial state of prefaulted
PTEs, introduce arch_wants_old_prefaulted_pte(), which allows an
architecture to opt-in to 'old' prefaulted PTEs at runtime based on
whatever criteria it may have.
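
A minimal sketch of the generic fallback, assuming architectures opt in by
providing their own definition (the default keeps today's behaviour):

    #ifndef arch_wants_old_prefaulted_pte
    static inline bool arch_wants_old_prefaulted_pte(void)
    {
            return false;   /* default: prefaulted PTEs stay 'young' */
    }
    #endif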

Cc: Jan Kara <jack@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Cleanup faultaround and finish_fault() codepaths
Kirill A. Shutemov [Sat, 19 Dec 2020 12:19:23 +0000 (15:19 +0300)]
mm: Cleanup faultaround and finish_fault() codepaths

alloc_set_pte() has two users with different requirements: in the
faultaround code, it is called from an atomic context and the PTE page
table has to be preallocated. finish_fault() can sleep and allocate the
page table as needed.

PTL locking rules are also strange, hard to follow and overkill for
finish_fault().

Let's untangle the mess. alloc_set_pte() is gone now. All locking is
explicit.

The price is some code duplication to handle huge pages in the
faultaround path, but that should be fine given the overall improvement
in readability.
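
In outline, the faultaround path now takes and drops the PTL itself, e.g.
(heavily abridged sketch):

    vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
    /* ... install the prefaulted entries ... */
    pte_unmap_unlock(vmf->pte, vmf->ptl);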

Link: https://lore.kernel.org/r/20201229132819.najtavneutnf7ajp@box
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
[will: s/from from/from/ in comment; spotted by willy]
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64/mm: Add warning for outside range requests in vmemmap_populate()
Anshuman Khandual [Tue, 5 Jan 2021 11:24:11 +0000 (16:54 +0530)]
arm64/mm: Add warning for outside range requests in vmemmap_populate()

vmemmap_populate() does not validate that the requested vmemmap address range
lies inside the platform-assigned space, i.e. [VMEMMAP_START..VMEMMAP_END].
Instead it just goes ahead and creates the mapping, which might then overlap
with other sections in the kernel virtual address space.

Add a warning here for range overruns, which helps detect the problem earlier
on, before a potential struct page corruption. This also makes
vmemmap_populate() symmetrical with vmemmap_free(), which already has a
similar warning.
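
The added check is essentially a one-liner at the top of the function,
along these lines (a sketch mirroring the existing vmemmap_free() warning):

    int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
                                   struct vmem_altmap *altmap)
    {
            WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
            /* existing mapping code follows unchanged */
    }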

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1609845851-25064-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: Drop workaround for broken 'S' constraint with GCC 4.9
Marc Zyngier [Mon, 18 Jan 2021 13:01:29 +0000 (13:01 +0000)]
arm64: Drop workaround for broken 'S' constraint with GCC 4.9

Since GCC < 5.1 has been shown to be unsuitable for the arm64 kernel,
let's drop the workaround for the 'S' asm constraint that GCC 4.9
doesn't always grok.

This is effectively a revert of 9fd339a45be5 ("arm64: Work around
broken GCC 4.9 handling of "S" constraint").

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210118130129.2875949-1-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoLinux 5.11-rc4
Linus Torvalds [Mon, 18 Jan 2021 00:37:05 +0000 (16:37 -0800)]
Linux 5.11-rc4

3 years agoMerge tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 17 Jan 2021 21:14:46 +0000 (13:14 -0800)]
Merge tag 'perf-tools-fixes-2021-01-17' of git://git./linux/kernel/git/acme/linux

Pull perf tools fixes from Arnaldo Carvalho de Melo:

 - Fix 'CPU too large' error in Intel PT

 - Correct event attribute sizes in 'perf inject'

 - Sync build_bug.h and kvm.h kernel copies

 - Fix bpf.h header include directive in 5sec.c 'perf trace' bpf example

 - libbpf tests fixes

 - Fix shadow stat 'perf test' for non-bash shells

 - Take cgroups into account for shadow stats in 'perf stat'

* tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
  perf inject: Correct event attribute sizes
  perf intel-pt: Fix 'CPU too large' error
  perf stat: Take cgroups into account for shadow stats
  perf stat: Introduce struct runtime_stat_data
  libperf tests: Fail when failing to get a tracepoint id
  libperf tests: If a test fails return non-zero
  libperf tests: Avoid uninitialized variable warning
  perf test: Fix shadow stat test for non-bash shells
  tools headers: Syncronize linux/build_bug.h with the kernel sources
  tools headers UAPI: Sync kvm.h headers with the kernel sources
  perf bpf examples: Fix bpf.h header include directive in 5sec.c example

3 years agoMerge tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc...
Linus Torvalds [Sun, 17 Jan 2021 20:28:58 +0000 (12:28 -0800)]
Merge tag 'powerpc-5.11-4' of git://git./linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:
 "One fix for a lack of alignment in our linker script, that can lead to
  crashes depending on configuration etc.

  One fix for the 32-bit VDSO after the C VDSO conversion.

  Thanks to Andreas Schwab, Ariel Marcovitch, and Christophe Leroy"

* tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/vdso: Fix clock_gettime_fallback for vdso32
  powerpc: Fix alignment bug within the init sections

3 years agoMerge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds [Sun, 17 Jan 2021 20:16:47 +0000 (12:16 -0800)]
Merge branch 'fixes' of git://git./linux/kernel/git/viro/vfs

Pull misc vfs fixes from Al Viro:
 "Several assorted fixes.

  I still think that audit ->d_name race is better fixed this way for
  the benefit of backports, with any possibly fancier variants done on
  top of it"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  dump_common_audit_data(): fix racy accesses to ->d_name
  iov_iter: fix the uaccess area in copy_compat_iovec_from_user
  umount(2): move the flag validity checks first

3 years agomm: don't put pinned pages into the swap cache
Linus Torvalds [Sat, 16 Jan 2021 23:34:57 +0000 (15:34 -0800)]
mm: don't put pinned pages into the swap cache

So technically there is nothing wrong with adding a pinned page to the
swap cache, but the pinning obviously means that the page can't actually
be free'd right now anyway, so it's a bit pointless.

However, the real problem is not with it being a bit pointless: the real
issue is that after we've added it to the swap cache, we'll try to unmap
the page.  That will succeed, because the code in mm/rmap.c doesn't know
or care about pinned pages.

Even the unmapping isn't fatal per se, since the page will stay around
in memory due to the pinning, and we do hold the connection to it using
the swap cache.  But when we then touch it next and take a page fault,
the logic in do_swap_page() will map it back into the process as a
possibly read-only page, and we'll then break the page association on
the next COW fault.

Honestly, this issue could have been fixed in any of those other places:
(a) we could refuse to unmap a pinned page (which makes conceptual
sense), or (b) we could make sure to re-map a pinned page writably in
do_swap_page(), or (c) we could just make do_wp_page() not COW the
pinned page (which was what we historically did before that "mm:
do_wp_page() simplification" commit).

But while all of them are equally valid models for breaking this chain,
not putting pinned pages into the swap cache in the first place is the
simplest one by far.

It's also the safest one: the reason why do_wp_page() was changed in the
first place was that getting the "can I re-use this page" wrong is so
fraught with errors.  If you do it wrong, you end up with an incorrectly
shared page.

As a result, using "page_maybe_dma_pinned()" in either do_wp_page() or
do_swap_page() would be a serious bug since it is only a (very good)
heuristic.  Re-using the page requires a hard black-and-white rule with
no room for ambiguity.

In contrast, saying "this page is very likely dma pinned, so let's not
add it to the swap cache and try to unmap it" is an obviously safe thing
to do, and if the heuristic might very rarely be a false positive, no
harm is done.
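
The resulting check amounts to roughly the following in the reclaim path
(a sketch; the exact placement in mm/vmscan.c is assumed):

    /* don't bother adding a likely-pinned page to the swap cache */
    if (page_maybe_dma_pinned(page))
            goto keep_locked;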

Fixes: 09854ba94c6a ("mm: do_wp_page() simplification")
Reported-and-tested-by: Martin Raiber <martin@urbackup.org>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sat, 16 Jan 2021 20:25:40 +0000 (12:25 -0800)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Nine minor fixes, seven in drivers and two in the core SCSI disk
  driver (sd) which should be harmless involving removing an unused
  variable and quietening a spurious warning"

Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: sd: Remove obsolete variable in sd_remove()
  scsi: sd: Suppress spurious errors when WRITE SAME is being disabled
  scsi: scsi_debug: Fix memleak in scsi_debug_init()
  scsi: mpt3sas: Fix spelling mistake in Kconfig "compatiblity" -> "compatibility"
  scsi: qedi: Correct max length of CHAP secret
  scsi: ufs: Correct the LUN used in eh_device_reset_handler() callback
  scsi: ufs: Relocate flush of exceptional event
  scsi: ufs: Relax the condition of UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL
  scsi: ufs: Fix possible power drain during system suspend

3 years agodump_common_audit_data(): fix racy accesses to ->d_name
Al Viro [Tue, 5 Jan 2021 19:43:46 +0000 (14:43 -0500)]
dump_common_audit_data(): fix racy accesses to ->d_name

We are not guaranteed the locking environment that would prevent the
dentry getting renamed right under us.  And it's possible for the old
long name to be freed after a rename, leading to a UAF here.
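
Conceptually, the safe pattern is to hold ->d_lock while the name is read,
something like the sketch below (not the exact fix):

    spin_lock(&dentry->d_lock);
    audit_log_untrustedstring(ab, dentry->d_name.name);
    spin_unlock(&dentry->d_lock);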

Cc: stable@kernel.org # v2.6.2+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
3 years agoMerge tag 'block-5.11-2021-01-16' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 16 Jan 2021 19:39:58 +0000 (11:39 -0800)]
Merge tag 'block-5.11-2021-01-16' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:
 "Just an nvme pull request via Christoph:

   - don't initialize hwmon for discovery controllers (Sagi Grimberg)

   - fix iov_iter handling in nvme-tcp (Sagi Grimberg)

   - fix a preempt warning in nvme-tcp (Sagi Grimberg)

   - fix a possible NULL pointer dereference in nvme (Israel Rukshin)"

* tag 'block-5.11-2021-01-16' of git://git.kernel.dk/linux-block:
  nvme: don't intialize hwmon for discovery controllers
  nvme-tcp: fix possible data corruption with bio merges
  nvme-tcp: Fix warning with CONFIG_DEBUG_PREEMPT
  nvmet-rdma: Fix NULL deref when setting pi_enable and traddr INADDR_ANY

3 years agoMerge tag 'io_uring-5.11-2021-01-16' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 16 Jan 2021 19:12:02 +0000 (11:12 -0800)]
Merge tag 'io_uring-5.11-2021-01-16' of git://git.kernel.dk/linux-block

Pull io_uring fixes from Jens Axboe:
 "We still have a pending fix for a cancelation issue, but it's still
  being investigated. In the meantime:

   - Dead mm handling fix (Pavel)

   - SQPOLL setup error handling (Pavel)

   - Flush timeout sequence fix (Marcelo)

   - Missing finish_wait() for one exit case"

* tag 'io_uring-5.11-2021-01-16' of git://git.kernel.dk/linux-block:
  io_uring: ensure finish_wait() is always called in __io_uring_task_cancel()
  io_uring: flush timeouts that should already have expired
  io_uring: do sqo disable on install_fd error
  io_uring: fix null-deref in io_disable_sqo_submit
  io_uring: don't take files/mm for a dead task
  io_uring: drop mm and files after task_work_run

3 years agoMerge tag 'riscv-for-linus-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 16 Jan 2021 19:00:08 +0000 (11:00 -0800)]
Merge tag 'riscv-for-linus-5.11-rc4' of git://git./linux/kernel/git/riscv/linux

Pull RISC-V fixes from Palmer Dabbelt:
 "There are a few more fixes than a normal rc4, largely due to the
  bubble introduced by the holiday break:

   - return -ENOSYS for syscall number -1, which previously returned an
     uninitialized value.

   - ensure of_clk_init() has been called in time_init(), without which
     clock drivers may not be initialized.

   - fix the sifive,uart0 driver to properly display the baud rate.

   - initialize MPIE so that interrupts can be processed during system
     calls.

   - avoid erroneously tracing IRQs when interrupts are disabled, which
     at least triggers spurious lockdep failures.

   - workaround for a warning related to calling smp_processor_id()
     while preemptible. The warning itself is spurious on currently
     available systems.

   - properly include the generic time VDSO calls.

   - fix our KASAN address mapping.

   - fix the HiFive Unleashed device tree, which allows the Ethernet PHY
     to be properly initialized by Linux (as opposed to relying on the
     bootloader).

   - defconfig update to include SiFive's GPIO driver, which is present
     on the HiFive Unleashed and necessary to initialize the PHY.

   - avoid allocating memory while initializing reserved memory.

   - avoid allocating the last 4K of memory, as pointers there alias
     with syscall errors.

  There are also a few cleanups that should have no functional effect but
  do fix build warnings:

   - drop a duplicated definition of PAGE_KERNEL_EXEC.

   - properly declare the asm register SP shim.

   - clean up the rv32 memory size Kconfig entry to reflect the actual
     size of memory available"

* tag 'riscv-for-linus-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  RISC-V: Fix maximum allowed phsyical memory for RV32
  RISC-V: Set current memblock limit
  RISC-V: Do not allocate memblock while iterating reserved memblocks
  riscv: stacktrace: Move register keyword to beginning of declaration
  riscv: defconfig: enable gpio support for HiFive Unleashed
  dts: phy: add GPIO number and active state used for phy reset
  dts: phy: fix missing mdio device and probe failure of vsc8541-01 device
  riscv: Fix KASAN memory mapping.
  riscv: Fixup CONFIG_GENERIC_TIME_VSYSCALL
  riscv: cacheinfo: Fix using smp_processor_id() in preemptible
  riscv: Trace irq on only interrupt is enabled
  riscv: Drop a duplicated PAGE_KERNEL_EXEC
  riscv: Enable interrupts during syscalls with M-Mode
  riscv: Fix sifive serial driver
  riscv: Fix kernel time_init()
  riscv: return -ENOSYS for syscall -1

3 years agomm: don't play games with pinned pages in clear_page_refs
Linus Torvalds [Sun, 10 Jan 2021 01:09:10 +0000 (17:09 -0800)]
mm: don't play games with pinned pages in clear_page_refs

Turning a pinned page read-only breaks the pinning after COW.  Don't do it.

The whole "track page soft dirty" state doesn't work with pinned pages
anyway, since the page might be dirtied by the pinning entity without
ever being noticed in the page tables.
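
In other words, the soft-dirty clearing walk needs to skip pages that look
pinned, roughly (a conceptual sketch; variable names assumed):

    page = pte_page(ptent);
    if (page_maybe_dma_pinned(page))
            return;         /* leave the PTE writable; don't break the pin */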

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agomm: fix clear_refs_write locking
Linus Torvalds [Fri, 8 Jan 2021 21:13:41 +0000 (13:13 -0800)]
mm: fix clear_refs_write locking

Turning page table entries read-only requires the mmap_sem held for
writing.

So stop doing the odd games with turning things from read locks to write
locks and back.  Just get the write lock.
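
I.e. the function simply takes the write lock up front, along the lines of
(a sketch; local variable and label names assumed):

    if (mmap_write_lock_killable(mm)) {
            count = -EINTR;
            goto out_mm;
    }
    /* ... clear the soft-dirty bits ... */
    mmap_write_unlock(mm);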

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoRISC-V: Fix maximum allowed phsyical memory for RV32
Atish Patra [Mon, 11 Jan 2021 23:45:04 +0000 (15:45 -0800)]
RISC-V: Fix maximum allowed phsyical memory for RV32

The Linux kernel can only map 1GB of address space for RV32, as the page
offset is set to 0xC0000000. The current description in the Kconfig is
confusing, as it indicates that RV32 can support 2GB of physical memory.
That is simply not true for the current kernel. In the future, 2GB split
support can be added to allow a 2GB physical address space.

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
3 years agoRISC-V: Set current memblock limit
Atish Patra [Mon, 11 Jan 2021 23:45:02 +0000 (15:45 -0800)]
RISC-V: Set current memblock limit

Currently, the Linux kernel cannot use the last 4k bytes of addressable
space because the IS_ERR_VALUE macro treats those as an error. This is an
issue for RV32, as the memblock allocator can potentially allocate a chunk
of memory from the end of DRAM (2GB), leading to a bad address error even
though the address is technically valid.

Fix this issue by limiting the memblock if the available memory spans the
entire address space.
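
A conceptual sketch of the fix (variable names assumed):

    /* keep the last 4K unusable by memblock so allocations never return
     * addresses that IS_ERR_VALUE() would mistake for errors */
    if (dram_end == max_mapped_addr)
            memblock_set_current_limit(max_mapped_addr - PAGE_SIZE);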

Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
3 years agoRISC-V: Do not allocate memblock while iterating reserved memblocks
Atish Patra [Mon, 11 Jan 2021 23:45:01 +0000 (15:45 -0800)]
RISC-V: Do not allocate memblock while iterating reserved memblocks

Currently, the resource tree allocates memory blocks while iterating over
the list. This leads to the following kernel warning because memblock
allocation also invokes the memory block reservation API.

[    0.000000] ------------[ cut here ]------------
[    0.000000] WARNING: CPU: 0 PID: 0 at kernel/resource.c:795
__insert_resource+0x8e/0xd0
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted
5.10.0-00022-ge20097fb37e2-dirty #549
[    0.000000] epc: c00125c2 ra : c001262c sp : c1c01f50
[    0.000000]  gp : c1d456e0 tp : c1c0a980 t0 : ffffcf20
[    0.000000]  t1 : 00000000 t2 : 00000000 s0 : c1c01f60
[    0.000000]  s1 : ffffcf00 a0 : ffffff00 a1 : c1c0c0c4
[    0.000000]  a2 : 80c12b15 a3 : 80402000 a4 : 80402000
[    0.000000]  a5 : c1c0c0c4 a6 : 80c12b15 a7 : f5faf600
[    0.000000]  s2 : c1c0c0c4 s3 : c1c0e000 s4 : c1009a80
[    0.000000]  s5 : c1c0c000 s6 : c1d48000 s7 : c1613b4c
[    0.000000]  s8 : 00000fff s9 : 80000200 s10: c1613b40
[    0.000000]  s11: 00000000 t3 : c1d4a000 t4 : ffffffff

This is also unnecessary, as we can pre-compute the total memblocks
required for each memory region and allocate them before the loop. This
saves precious boot time by not going through the memblock allocation code
every time.
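
I.e. count the entries up front and allocate the resource array once,
roughly (a sketch; names and the exact set of entries assumed):

    /* one resource per memory/reserved region, sized up front */
    num_resources = memblock.memory.cnt + memblock.reserved.cnt;
    mem_res = memblock_alloc(num_resources * sizeof(*mem_res), SMP_CACHE_BYTES);
    /* then walk the memblocks and fill in mem_res[] without allocating */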

Fixes: 00ab027a3b82 ("RISC-V: Add kernel image sections to the resource tree")

Reviewed-by: Anup Patel <anup@brainfault.org>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
3 years agoiov_iter: fix the uaccess area in copy_compat_iovec_from_user
Christoph Hellwig [Mon, 11 Jan 2021 17:19:26 +0000 (18:19 +0100)]
iov_iter: fix the uaccess area in copy_compat_iovec_from_user

sizeof needs to be called on the compat pointer, not the native one.
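
The fix boils down to using the compat pointer in both places, roughly
(diff-style sketch; variable names assumed):

    -       if (!user_access_begin(uvec, nr_segs * sizeof(*uvec)))
    +       if (!user_access_begin(uiov, nr_segs * sizeof(*uiov)))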

Fixes: 89cd35c58bc2 ("iov_iter: transparently handle compat iovecs in import_iovec")
Reported-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
3 years agoMerge tag 'for-5.11/dm-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 16 Jan 2021 02:01:17 +0000 (18:01 -0800)]
Merge tag 'for-5.11/dm-fixes-1' of git://git./linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:

 - Fix DM-raid's raid1 discard limits so discards work.

 - Select missing Kconfig dependencies for DM integrity and zoned
   targets.

 - Four fixes for DM crypt target's support to optionally bypass kcryptd
   workqueues.

 - Fix DM snapshot merge supports missing data flushes before committing
   metadata.

 - Fix DM integrity data device flushing when external metadata is used.

 - Fix DM integrity's maximum number of supported constructor arguments
   that user can request when creating an integrity device.

 - Eliminate DM core ioctl logging noise when an ioctl is issued without
   required CAP_SYS_RAWIO permission.

* tag 'for-5.11/dm-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm crypt: defer decryption to a tasklet if interrupts disabled
  dm integrity: fix the maximum number of arguments
  dm crypt: do not call bio_endio() from the dm-crypt tasklet
  dm integrity: fix flush with external metadata device
  dm: eliminate potential source of excessive kernel log noise
  dm snapshot: flush merged data before committing metadata
  dm crypt: use GFP_ATOMIC when allocating crypto requests from softirq
  dm crypt: do not wait for backlogged crypto request completion in softirq
  dm zoned: select CONFIG_CRC32
  dm integrity: select CRYPTO_SKCIPHER
  dm raid: fix discard limits for raid1

3 years agoMerge branch 'akpm' (patches from Andrew)
Linus Torvalds [Fri, 15 Jan 2021 23:25:45 +0000 (15:25 -0800)]
Merge branch 'akpm' (patches from Andrew)

Merge misc fixes from Andrew Morton:
 "10 patches.

  Subsystems affected by this patch series: MAINTAINERS and mm (slub,
  pagealloc, memcg, kasan, vmalloc, migration, hugetlb, memory-failure,
  and process_vm_access)"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm/process_vm_access.c: include compat.h
  mm,hwpoison: fix printing of page flags
  MAINTAINERS: add Vlastimil as slab allocators maintainer
  mm/hugetlb: fix potential missing huge page size info
  mm: migrate: initialize err in do_migrate_pages
  mm/vmalloc.c: fix potential memory leak
  arm/kasan: fix the array size of kasan_early_shadow_pte[]
  mm/memcontrol: fix warning in mem_cgroup_page_lruvec()
  mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint
  mm, slub: consider rest of partial list if acquire_slab() fails