uclinux-h8/linux.git
3 years ago  powerpc/security: Make symbol 'stf_barrier' static
Li Huafei [Thu, 8 Apr 2021 03:39:51 +0000 (11:39 +0800)]
powerpc/security: Make symbol 'stf_barrier' static

The sparse tool complains as follows:

arch/powerpc/kernel/security.c:253:6: warning:
 symbol 'stf_barrier' was not declared. Should it be static?

This symbol is not used outside of security.c, so this commit marks it
static.

Signed-off-by: Li Huafei <lihuafei1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210408033951.28369-1-lihuafei1@huawei.com
3 years ago  powerpc/32s: Define a MODULE area below kernel text all the time
Christophe Leroy [Thu, 1 Apr 2021 13:30:43 +0000 (13:30 +0000)]
powerpc/32s: Define a MODULE area below kernel text all the time

On book3s/32, the segment below kernel text is used for module
allocation when CONFIG_STRICT_KERNEL_RWX is defined.

In order to benefit from the powerpc specific module_alloc()
function, which allocates modules within 32 Mbytes of the
end of kernel text, use that segment below PAGE_OFFSET at all times.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a46dcdd39a9e80b012d86c294c4e5cd8d31665f3.1617283827.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/8xx: Define a MODULE area below kernel text
Christophe Leroy [Thu, 1 Apr 2021 13:30:42 +0000 (13:30 +0000)]
powerpc/8xx: Define a MODULE area below kernel text

On the 8xx, TASK_SIZE is 0x80000000. The space between TASK_SIZE
and PAGE_OFFSET is not used.

In order to benefit from the powerpc specific module_alloc()
function, which allocates modules within 32 Mbytes of the
end of kernel text, define MODULES_VADDR and MODULES_END.

Set up a 256 MB area just below PAGE_OFFSET, like book3s/32.
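
For illustration, the definitions could look like this (a sketch; the exact header location and constant spelling are assumptions, not necessarily the committed code):

  /* A 256 MB module area ending at PAGE_OFFSET, mirroring book3s/32 */
  #define MODULES_END     PAGE_OFFSET
  #define MODULES_VADDR   (MODULES_END - SZ_256M)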

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a225606d5b3a8bc53fe612ad52c855c60b0a0a58.1617283827.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/modules: Load modules closer to kernel text
Christophe Leroy [Thu, 1 Apr 2021 13:30:41 +0000 (13:30 +0000)]
powerpc/modules: Load modules closer to kernel text

On book3s/32, when STRICT_KERNEL_RWX is selected, modules are
allocated in the segment just before kernel text, i.e. the
0xb0000000-0xbfffffff segment when PAGE_OFFSET is 0xc0000000.

On the 8xx, TASK_SIZE is 0x80000000. The space between TASK_SIZE and
PAGE_OFFSET is not used and could be used for modules.

The idea comes from ARM architecture.

Having modules just below PAGE_OFFSET offers an opportunity to
minimise the distance between kernel text and modules, avoiding
trampolines in modules when accessing kernel functions or other
module functions.

When MODULES_VADDR is defined, powerpc has its own module_alloc()
function. In that function, first try to allocate the module
above the limit defined by '_etext - 32M'. Then, if the allocation
fails, fall back to the entire MODULES area.
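
A condensed sketch of that logic (simplified; the __module_alloc() helper and the exact allocation flags here are illustrative):

  static void *__module_alloc(unsigned long size, unsigned long start,
                              unsigned long end)
  {
          return __vmalloc_node_range(size, 1, start, end, GFP_KERNEL,
                                      PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
                                      NUMA_NO_NODE, __builtin_return_address(0));
  }

  void *module_alloc(unsigned long size)
  {
          unsigned long limit = (unsigned long)_etext - SZ_32M;
          void *ptr = NULL;

          /* First try within 32M of the end of kernel text */
          if (limit >= MODULES_VADDR)
                  ptr = __module_alloc(size, limit, MODULES_END);

          /* If that fails, fall back to the entire MODULES area */
          if (!ptr)
                  ptr = __module_alloc(size, MODULES_VADDR, MODULES_END);

          return ptr;
  }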

DEBUG logs in module_32.c without the patch:

[ 1572.588822] module_32: Applying ADD relocate section 13 to 12
[ 1572.588891] module_32: Doing plt for call to 0xc00671a4 at 0xcae04024
[ 1572.588964] module_32: Initialized plt for 0xc00671a4 at cae04000
[ 1572.589037] module_32: REL24 value = CAE04000. location = CAE04024
[ 1572.589110] module_32: Location before: 48000001.
[ 1572.589171] module_32: Location after: 4BFFFFDD.
[ 1572.589231] module_32: ie. jump to 03FFFFDC+CAE04024 = CEE04000
[ 1572.589317] module_32: Applying ADD relocate section 15 to 14
[ 1572.589386] module_32: Doing plt for call to 0xc00671a4 at 0xcadfc018
[ 1572.589457] module_32: Initialized plt for 0xc00671a4 at cadfc000
[ 1572.589529] module_32: REL24 value = CADFC000. location = CADFC018
[ 1572.589601] module_32: Location before: 48000000.
[ 1572.589661] module_32: Location after: 4BFFFFE8.
[ 1572.589723] module_32: ie. jump to 03FFFFE8+CADFC018 = CEDFC000

With the patch:

[  279.404671] module_32: Applying ADD relocate section 13 to 12
[  279.404741] module_32: REL24 value = C00671B4. location = BF808024
[  279.404814] module_32: Location before: 48000001.
[  279.404874] module_32: Location after: 4885F191.
[  279.404933] module_32: ie. jump to 0085F190+BF808024 = C00671B4
[  279.405016] module_32: Applying ADD relocate section 15 to 14
[  279.405085] module_32: REL24 value = C00671B4. location = BF800018
[  279.405156] module_32: Location before: 48000000.
[  279.405215] module_32: Location after: 4886719C.
[  279.405275] module_32: ie. jump to 0086719C+BF800018 = C00671B4

We see that with the patch, no PLT entries are set up.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0c3d5cb8a4dfdf6ca1b8aeb385c01470d6628d55.1617283827.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/mm: Add cond_resched() while removing hpte mappings
Vaibhav Jain [Sun, 4 Apr 2021 16:31:48 +0000 (22:01 +0530)]
powerpc/mm: Add cond_resched() while removing hpte mappings

While removing a large number of mappings from the hash page tables on
large memory systems, a soft-lockup is reported because of the time
spent inside htab_remove_mapping(), like the one below:

 watchdog: BUG: soft lockup - CPU#8 stuck for 23s!
 <snip>
 NIP plpar_hcall+0x38/0x58
 LR  pSeries_lpar_hpte_invalidate+0x68/0xb0
 Call Trace:
  0x1fffffffffff000 (unreliable)
  pSeries_lpar_hpte_removebolted+0x9c/0x230
  hash__remove_section_mapping+0xec/0x1c0
  remove_section_mapping+0x28/0x3c
  arch_remove_memory+0xfc/0x150
  devm_memremap_pages_release+0x180/0x2f0
  devm_action_release+0x30/0x50
  release_nodes+0x28c/0x300
  device_release_driver_internal+0x16c/0x280
  unbind_store+0x124/0x170
  drv_attr_store+0x44/0x60
  sysfs_kf_write+0x64/0x90
  kernfs_fop_write+0x1b0/0x290
  __vfs_write+0x3c/0x70
  vfs_write+0xd4/0x270
  ksys_write+0xdc/0x130
  system_call+0x5c/0x70

Fix this by adding a cond_resched() to the loop in
htab_remove_mapping() that issues the hcall to remove the hpte
mapping. cond_resched() is called every HZ jiffies, which should
prevent the soft-lockup from being reported.
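
Schematically, the loop becomes something like this (a sketch with simplified loop variables, not the literal patch):

  unsigned long hpte_timeout = jiffies + HZ;

  for (i = 0; i < hpte_count; i++) {
          /* Each iteration issues an hcall to invalidate one bolted hpte */
          mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize);
          vaddr += step;

          /* Yield the CPU roughly once every HZ jiffies */
          if (time_after(jiffies, hpte_timeout)) {
                  cond_resched();
                  hpte_timeout = jiffies + HZ;
          }
  }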

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210404163148.321346-1-vaibhav@linux.ibm.com
3 years ago  powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall
Shivaprasad G Bhat [Mon, 29 Mar 2021 17:36:43 +0000 (13:36 -0400)]
powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall

Add support for the ND_REGION_ASYNC capability if the device tree
indicates the 'ibm,hcall-flush-required' property in the NVDIMM node.
The flush is done by issuing the H_SCM_FLUSH hcall to the hypervisor.

If the flush request fails, the hypervisor is expected to
reflect the problem in the subsequent nvdimm H_SCM_HEALTH call.

This patch prevents mmap of namespaces with MAP_SYNC flag if the
nvdimm requires an explicit flush[1].

References:
[1] https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/memory/ndctl.py.data/map_sync.c

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Use unsigned long / long instead of uint64_t/int64_t]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/161703936121.36.7260632399582101498.stgit@e1fbed493c87
3 years ago  powerpc/signal32: Fix build failure with CONFIG_SPE
Christophe Leroy [Sun, 11 Apr 2021 16:39:53 +0000 (16:39 +0000)]
powerpc/signal32: Fix build failure with CONFIG_SPE

Add the missing fault exit label in unsafe_copy_from_user() in order to
avoid the following build failure with CONFIG_SPE:

  CC      arch/powerpc/kernel/signal_32.o
arch/powerpc/kernel/signal_32.c: In function 'restore_user_regs':
arch/powerpc/kernel/signal_32.c:565:36: error: macro "unsafe_copy_from_user" requires 4 arguments, but only 3 given
  565 |           ELF_NEVRREG * sizeof(u32));
      |                                    ^
In file included from ./include/linux/uaccess.h:11,
                 from ./include/linux/sched/task.h:11,
                 from ./include/linux/sched/signal.h:9,
                 from ./include/linux/rcuwait.h:6,
                 from ./include/linux/percpu-rwsem.h:7,
                 from ./include/linux/fs.h:33,
                 from ./include/linux/huge_mm.h:8,
                 from ./include/linux/mm.h:707,
                 from arch/powerpc/kernel/signal_32.c:17:
./arch/powerpc/include/asm/uaccess.h:428: note: macro "unsafe_copy_from_user" defined here
  428 | #define unsafe_copy_from_user(d, s, l, e) \
      |
arch/powerpc/kernel/signal_32.c:564:3: error: 'unsafe_copy_from_user' undeclared (first use in this function); did you mean 'raw_copy_from_user'?
  564 |   unsafe_copy_from_user(current->thread.evr, &sr->mc_vregs,
      |   ^~~~~~~~~~~~~~~~~~~~~
      |   raw_copy_from_user
arch/powerpc/kernel/signal_32.c:564:3: note: each undeclared identifier is reported only once for each function it appears in
make[3]: *** [arch/powerpc/kernel/signal_32.o] Error 1

Fixes: 627b72bee84d ("powerpc/signal32: Convert restore_[tm]_user_regs() to user access block")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/aad2cb1801a3cc99bc27081022925b9fc18a0dfb.1618159169.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Remove powerpc specific definition of 'ptrdiff_t'
Christophe Leroy [Mon, 5 Apr 2021 09:57:27 +0000 (09:57 +0000)]
powerpc/32: Remove powerpc specific definition of 'ptrdiff_t'

For an unknown reason, old commit d27dfd388715 ("Import pre2.0.8")
changed 'ptrdiff_t' from 'int' to 'long'.

GCC really expects it to be 'int', and this leads to the following
warning when building KFENCE:

  CC      mm/kfence/report.o
In file included from ./include/linux/printk.h:7,
                 from ./include/linux/kernel.h:16,
                 from mm/kfence/report.c:10:
mm/kfence/report.c: In function 'kfence_report_error':
./include/linux/kern_levels.h:5:18: warning: format '%td' expects argument of type 'ptrdiff_t', but argument 6 has type 'long int' [-Wformat=]
    5 | #define KERN_SOH "\001"  /* ASCII Start Of Header */
      |                  ^~~~~~
./include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
   11 | #define KERN_ERR KERN_SOH "3" /* error conditions */
      |                  ^~~~~~~~
./include/linux/printk.h:343:9: note: in expansion of macro 'KERN_ERR'
  343 |  printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
      |         ^~~~~~~~
mm/kfence/report.c:213:3: note: in expansion of macro 'pr_err'
  213 |   pr_err("Out-of-bounds %s at 0x%p (%luB %s of kfence-#%td):\n",
      |   ^~~~~~

<asm-generic/uapi/posix-types.h> defines it as 'int', and
defines 'size_t' and 'ssize_t' exactly as powerpc does, so
remove the powerpc specific definitions and fall back on the
generic ones.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e43d133bf52fa19e577f64f3a3a38cedc570377d.1617616601.git.christophe.leroy@csgroup.eu
3 years ago  powerpc: iommu: fix build when neither PCI nor IBMVIO is set
Randy Dunlap [Sun, 4 Apr 2021 19:26:23 +0000 (12:26 -0700)]
powerpc: iommu: fix build when neither PCI nor IBMVIO is set

When neither CONFIG_PCI nor CONFIG_IBMVIO is set/enabled, iommu.c has a
build error. The fault injection code is not useful in that kernel config,
so make the FAIL_IOMMU option depend on PCI || IBMVIO.

Prevents this build error (warning escalated to error):
../arch/powerpc/kernel/iommu.c:178:30: error: 'fail_iommu_bus_notifier' defined but not used [-Werror=unused-variable]
  178 | static struct notifier_block fail_iommu_bus_notifier = {

Fixes: d6b9a81b2a45 ("powerpc: IOMMU fault injection")
Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210404192623.10697-1-rdunlap@infradead.org
3 years ago  powerpc/pseries: remove unneeded semicolon
Yang Li [Tue, 6 Apr 2021 01:33:05 +0000 (09:33 +0800)]
powerpc/pseries: remove unneeded semicolon

Eliminate the following coccicheck warning:
./arch/powerpc/platforms/pseries/lpar.c:1633:2-3: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1617672785-81372-1-git-send-email-yang.lee@linux.alibaba.com
3 years ago  powerpc/64s: power4 nap fixup in C
Nicholas Piggin [Tue, 6 Apr 2021 02:55:08 +0000 (12:55 +1000)]
powerpc/64s: power4 nap fixup in C

There is no need for this to be in asm; use the new interrupt entry wrapper.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210406025508.821718-1-npiggin@gmail.com
3 years ago  powerpc/perf: Fix PMU constraint check for EBB events
Athira Rajeev [Tue, 6 Apr 2021 16:16:01 +0000 (12:16 -0400)]
powerpc/perf: Fix PMU constraint check for EBB events

The power PMU group constraints include a check for EBB events to make
sure all events in a group agree on EBB. This prevents scheduling EBB
and non-EBB events together. But in the existing check, the settings
for the constraint mask and value are interchanged. This patch fixes
that.
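
The interchange is of the following shape (an illustrative sketch; the macro names follow the isa207 constraint code):

  /* Before: the per-event EBB value went into the mask, and vice versa */
  mask  |= CNST_EBB_VAL(ebb);
  value |= CNST_EBB_MASK;

  /* After: constrain on the EBB bit, record this event's EBB setting */
  mask  |= CNST_EBB_MASK;
  value |= CNST_EBB_VAL(ebb);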

Before the patch, PMU selftest "cpu_event_pinned_vs_ebb_test" fails with
below in dmesg logs. This happens because EBB event gets enabled along
with a non-EBB cpu event.

  [35600.453346] cpu_event_pinne[41326]: illegal instruction (4)
  at 10004a18 nip 10004a18 lr 100049f8 code 1 in
  cpu_event_pinned_vs_ebb_test[10000000+10000]

Test results after the patch:

  $ ./pmu/ebb/cpu_event_pinned_vs_ebb_test
  test: cpu_event_pinned_vs_ebb
  tags: git_version:v5.12-rc5-93-gf28c3125acd3-dirty
  Binding to cpu 8
  EBB Handler is at 0x100050c8
  read error on event 0x7fffe6bd4040!
  PM_RUN_INST_CMPL: result 9872 running/enabled 37930432
  success: cpu_event_pinned_vs_ebb

This bug was hidden by other logic until commit 1908dc911792 ("perf:
Tweak perf_event_attr::exclusive semantics").

Fixes: 4df489991182 ("powerpc/perf: Add power8 EBB support")
Reported-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Mention commit 1908dc911792]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1617725761-1464-1-git-send-email-atrajeev@linux.vnet.ibm.com
3 years ago  selftests/powerpc: Suggest memtrace instead of /dev/mem for ci memory
Jordan Niethe [Thu, 25 Feb 2021 03:21:07 +0000 (14:21 +1100)]
selftests/powerpc: Suggest memtrace instead of /dev/mem for ci memory

The suggested alternative for getting cache-inhibited memory with 'mem='
and /dev/mem is pretty hacky. Also, PAPR guests do not allow system
memory to be mapped cache-inhibited, so despite /dev/mem being available
this will not work, which can cause confusion. Instead, recommend using
the memtrace buffers. memtrace is only available on powernv, so there
is no chance of trying to do this in a guest.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210225032108.1458352-2-jniethe5@gmail.com
3 years ago  powerpc/powernv/memtrace: Allow mmaping trace buffers
Jordan Niethe [Thu, 25 Feb 2021 03:21:06 +0000 (14:21 +1100)]
powerpc/powernv/memtrace: Allow mmaping trace buffers

Let the memory removed from the linear mapping to be used for the trace
buffers be mmapped. This is a useful way of providing cache-inhibited
memory for the alignment_handler selftest.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
[mpe: make memtrace_mmap() static as noticed by lkp@intel.com]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210225032108.1458352-1-jniethe5@gmail.com
3 years ago  powerpc/kexec: Don't use .machine ppc64 in trampoline_64.S
Michael Ellerman [Mon, 15 Mar 2021 03:41:59 +0000 (14:41 +1100)]
powerpc/kexec: Don't use .machine ppc64 in trampoline_64.S

As best I can tell, the ".machine" directive in trampoline_64.S is no
longer, and perhaps never was, necessary.

It was added in commit 0d97631392c2 ("powerpc: Add purgatory for
kexec_file_load() implementation."), which created the file based on
the kexec-tools purgatory. It may be, or may have been, necessary in the
kexec-tools version, but we have a completely different build system,
and we already pass the desired CPU flags, eg:

  gcc ... -m64 -Wl,-a64 -mabi=elfv2 -Wa,-maltivec -Wa,-mpower4 -Wa,-many
  ... arch/powerpc/purgatory/trampoline_64.S

So drop the ".machine" directive and rely on the assembler flags.

Reported-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Link: https://lore.kernel.org/r/20210315034159.315675-1-mpe@ellerman.id.au
3 years ago  powerpc/64: Move security code into security.c
Michael Ellerman [Fri, 26 Mar 2021 10:12:01 +0000 (21:12 +1100)]
powerpc/64: Move security code into security.c

When the original spectre/meltdown mitigations were merged we put them
in setup_64.c for lack of a better place.

Since then we created security.c for some of the other mitigation
related code. But it should all be in there.

This sort of code movement can cause trouble for backports, but
hopefully this code is relatively stable these days (famous last words).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210326101201.1973552-1-mpe@ellerman.id.au
3 years ago  powerpc/mm/64s: Allow STRICT_KERNEL_RWX again
Michael Ellerman [Wed, 31 Mar 2021 00:38:45 +0000 (11:38 +1100)]
powerpc/mm/64s: Allow STRICT_KERNEL_RWX again

We have now fixed the known bugs in STRICT_KERNEL_RWX for Book3S
64-bit Hash and Radix MMUs, see preceding commits, so allow the
option to be selected again.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210331003845.216246-6-mpe@ellerman.id.au
3 years ago  powerpc/mm/64s/hash: Add real-mode change_memory_range() for hash LPAR
Michael Ellerman [Wed, 31 Mar 2021 00:38:44 +0000 (11:38 +1100)]
powerpc/mm/64s/hash: Add real-mode change_memory_range() for hash LPAR

When we enabled STRICT_KERNEL_RWX we received some reports of boot
failures when using the Hash MMU and running under phyp. The crashes
are intermittent, and often exhibit as a completely unresponsive
system, or possibly an oops.

One example, which was caught in xmon:

  [   14.068327][    T1] devtmpfs: mounted
  [   14.069302][    T1] Freeing unused kernel memory: 5568K
  [   14.142060][  T347] BUG: Unable to handle kernel instruction fetch
  [   14.142063][    T1] Run /sbin/init as init process
  [   14.142074][  T347] Faulting instruction address: 0xc000000000004400
  cpu 0x2: Vector: 400 (Instruction Access) at [c00000000c7475e0]
      pc: c000000000004400: exc_virt_0x4400_instruction_access+0x0/0x80
      lr: c0000000001862d4: update_rq_clock+0x44/0x110
      sp: c00000000c747880
     msr: 8000000040001031
    current = 0xc00000000c60d380
    paca    = 0xc00000001ec9de80   irqmask: 0x03   irq_happened: 0x01
      pid   = 347, comm = kworker/2:1
  ...
  enter ? for help
  [c00000000c747880] [c0000000001862d4] update_rq_clock+0x44/0x110 (unreliable)
  [c00000000c7478f0] [c000000000198794] update_blocked_averages+0xb4/0x6d0
  [c00000000c7479f0] [c000000000198e40] update_nohz_stats+0x90/0xd0
  [c00000000c747a20] [c0000000001a13b4] _nohz_idle_balance+0x164/0x390
  [c00000000c747b10] [c0000000001a1af8] newidle_balance+0x478/0x610
  [c00000000c747be0] [c0000000001a1d48] pick_next_task_fair+0x58/0x480
  [c00000000c747c40] [c000000000eaab5c] __schedule+0x12c/0x950
  [c00000000c747cd0] [c000000000eab3e8] schedule+0x68/0x120
  [c00000000c747d00] [c00000000016b730] worker_thread+0x130/0x640
  [c00000000c747da0] [c000000000174d50] kthread+0x1a0/0x1b0
  [c00000000c747e10] [c00000000000e0f0] ret_from_kernel_thread+0x5c/0x6c

This shows that CPU 2, which was idle, woke up and then appears to
randomly take an instruction fault on a completely valid area of
kernel text.

The cause turns out to be the call to hash__mark_rodata_ro(), late in
boot. Due to the way we layout text and rodata, that function actually
changes the permissions for all of text and rodata to read-only plus
execute.

To do the permission change we use a hypervisor call, H_PROTECT. On
phyp that appears to be implemented by briefly removing the mapping of
the kernel text, before putting it back with the updated permissions.
If any other CPU is executing during that window, it will see spurious
faults on the kernel text and/or data, leading to crashes.

To fix it we use stop_machine() to collect all other CPUs, and then have
them drop into real mode (MMU off), while we change the mapping. That
way they are unaffected by the mapping temporarily disappearing.
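
In outline, the mechanism looks like this (a simplified sketch; the helper names and parameter plumbing are illustrative):

  /* Called on every online CPU via stop_machine() */
  static int change_memory_range_fn(void *data)
  {
          struct change_memory_parms *parms = data;

          /* Every CPU but one spins with the MMU off until the update is done */
          if (parms->cpu != smp_processor_id())
                  return chmem_secondary_loop(parms);

          /* The chosen CPU updates the bolted HPTEs via H_PROTECT */
          do_change_memory_range(parms->start, parms->end, parms->newpp);
          return 0;
  }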

We don't see this bug on KVM because KVM always uses VPM=1, where
faults are directed to the hypervisor, and the fault will be
serialised against the H_PROTECT by HPTE_V_HVLOCK.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210331003845.216246-5-mpe@ellerman.id.au
3 years ago  powerpc/mm/64s/hash: Factor out change_memory_range()
Michael Ellerman [Wed, 31 Mar 2021 00:38:43 +0000 (11:38 +1100)]
powerpc/mm/64s/hash: Factor out change_memory_range()

Pull the loop calling hpte_updateboltedpp() out of
hash__change_memory_range() into a helper function. We need it to be a
separate function for the next patch.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210331003845.216246-4-mpe@ellerman.id.au
3 years ago  powerpc/64s: Use htab_convert_pte_flags() in hash__mark_rodata_ro()
Michael Ellerman [Wed, 31 Mar 2021 00:38:42 +0000 (11:38 +1100)]
powerpc/64s: Use htab_convert_pte_flags() in hash__mark_rodata_ro()

In hash__mark_rodata_ro() we pass the raw PP_RXXX value to
hash__change_memory_range(). That has the effect of setting the key to
zero, because PP_RXXX contains no key value.

Fix it by using htab_convert_pte_flags(), which knows how to convert a
pgprot into a pp value, including the key.

Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Link: https://lore.kernel.org/r/20210331003845.216246-3-mpe@ellerman.id.au
3 years ago  powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()
Michael Ellerman [Wed, 31 Mar 2021 00:38:41 +0000 (11:38 +1100)]
powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()

The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes
the key in bits 9-13, but currently we always set those bits to zero.

In the past that hasn't been a problem because we always used key 0
for the kernel, and updateboltedpp() is only used for kernel mappings.

However since commit d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3
for kernel mapping with hash translation") we are now inadvertently
changing the key (to zero) when we call plpar_pte_protect().

That hasn't broken anything because updateboltedpp() is only used for
STRICT_KERNEL_RWX, which is currently disabled on 64s due to other
bugs.

But we want to fix that, so first we need to pass the key correctly to
plpar_pte_protect(). We can't pass our newpp value directly in, we
have to convert it into the form expected by the hcall.

The hcall we're using here is H_PROTECT, which is specified in section
14.5.4.1.6 of LoPAPR v1.1.

It takes a `flags` parameter, and the description for flags says:

 * flags: AVPN, pp0, pp1, pp2, key0-key4, n, and for the CMO
   option: CMO Option flags as defined in Table 189

If you then go to the start of the parent section, 14.5.4.1, on page
405, it says:

Register Linkage (For hcall() tokens 0x04 - 0x18)
 * On Call
   * R3 function call token
   * R4 flags (see Table 178, “Page Frame Table Access flags field
     definition,” on page 401)

Then you have to go to section 14.5.3, and on page 394 there is a list
of hcalls and their tokens (table 176), and there you can see that
H_PROTECT == 0x18.

Finally you can look at table 178, on page 401, where it specifies the
layout of the bits for the key:

 Bit     Function
 -----------------
 50-54 | key0-key4

Those are big-endian bit numbers, converting to normal bit numbers you
get bits 9-13, or 0x3e00.

In the kernel we have:

  #define HPTE_R_KEY_HI ASM_CONST(0x3000000000000000)
  #define HPTE_R_KEY_LO ASM_CONST(0x0000000000000e00)

So the LO bits of newpp are already in the right place, and the HI
bits need to be shifted down by 48.
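
Putting that together, the conversion can be written as (a sketch following the bit layout above):

  /* Keep the pp bits and LO key bits, move the HI key bits down by 48 */
  flags  = newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO);
  flags |= (newpp & HPTE_R_KEY_HI) >> 48;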

Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210331003845.216246-2-mpe@ellerman.id.au
3 years ago  powerpc/mm/64s: Add _PAGE_KERNEL_ROX
Michael Ellerman [Wed, 31 Mar 2021 00:38:40 +0000 (11:38 +1100)]
powerpc/mm/64s: Add _PAGE_KERNEL_ROX

In the past we had a fallback definition for _PAGE_KERNEL_ROX, but we
removed that in commit d82fd29c5a8c ("powerpc/mm: Distribute platform
specific PAGE and PMD flags and definitions") and added definitions
for each MMU family.

However we missed adding a definition for 64s, which was not really a
bug because it's currently not used.

But we'd like to use PAGE_KERNEL_ROX in a future patch so add a
definition now.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210331003845.216246-1-mpe@ellerman.id.au
3 years ago  selftests/powerpc: Test for spurious kernel memory faults on radix
Jordan Niethe [Mon, 8 Feb 2021 03:29:57 +0000 (14:29 +1100)]
selftests/powerpc: Test for spurious kernel memory faults on radix

Previously, when mapping kernel memory on radix, no ptesync was
included, which would periodically lead to unhandled spurious faults.
Kernel memory is mapped this way when code patching with Strict RWX is
enabled. As suggested by Chris Riedl, turning ftrace on and off does a
large amount of code patching so is a convenient way to see this kind
of fault.

Add a selftest to try and trigger this kind of a spurious fault. It
tests for 30 seconds which is usually long enough for the issue to
show up.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
[mpe: Rename it to better reflect what it does, rather than the symptom]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208032957.1232102-2-jniethe5@gmail.com
3 years ago  powerpc/64s: Fix pte update for kernel memory on radix
Jordan Niethe [Mon, 8 Feb 2021 03:29:56 +0000 (14:29 +1100)]
powerpc/64s: Fix pte update for kernel memory on radix

When adding a PTE a ptesync is needed to order the update of the PTE
with subsequent accesses otherwise a spurious fault may be raised.

radix__set_pte_at() does not do this for performance gains. For
non-kernel memory this is not an issue as any faults of this kind are
corrected by the page fault handler. For kernel memory these faults
are not handled. The current solution is that there is a ptesync in
flush_cache_vmap() which should be called when mapping from the
vmalloc region.

However, map_kernel_page() does not call flush_cache_vmap(). This is
troublesome in particular for code patching with Strict RWX on radix.
In do_patch_instruction() the page frame that contains the instruction
to be patched is mapped and then immediately patched. With no ordering
or synchronization between setting up the PTE and writing to the page,
spurious faults are possible.

As the code patching is done using __put_user_asm_goto() the resulting
fault is obscured - but using a normal store instead it can be seen:

  BUG: Unable to handle kernel data access on write at 0xc008000008f24a3c
  Faulting instruction address: 0xc00000000008bd74
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in: nop_module(PO+) [last unloaded: nop_module]
  CPU: 4 PID: 757 Comm: sh Tainted: P           O      5.10.0-rc5-01361-ge3c1b78c8440-dirty #43
  NIP:  c00000000008bd74 LR: c00000000008bd50 CTR: c000000000025810
  REGS: c000000016f634a0 TRAP: 0300   Tainted: P           O       (5.10.0-rc5-01361-ge3c1b78c8440-dirty)
  MSR:  9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE>  CR: 44002884  XER: 00000000
  CFAR: c00000000007c68c DAR: c008000008f24a3c DSISR: 42000000 IRQMASK: 1

This results in the kind of issue reported here:
  https://lore.kernel.org/linuxppc-dev/15AC5B0E-A221-4B8C-9039-FA96B8EF7C88@lca.pw/

Chris Riedl suggested a reliable way to reproduce the issue:
  $ mount -t debugfs none /sys/kernel/debug
  $ (while true; do echo function > /sys/kernel/debug/tracing/current_tracer ; echo nop > /sys/kernel/debug/tracing/current_tracer ; done) &

Turning ftrace on and off does a large amount of code patching, which
usually crashes in less than 5 minutes, giving a trace like:

   ftrace-powerpc: (____ptrval____): replaced (4b473b11) != old (60000000)
   ------------[ ftrace bug ]------------
   ftrace failed to modify
   [<c000000000bf8e5c>] napi_busy_loop+0xc/0x390
    actual:   11:3b:47:4b
   Setting ftrace call site to call ftrace function
   ftrace record flags: 80000001
    (1)
    expected tramp: c00000000006c96c
   ------------[ cut here ]------------
   WARNING: CPU: 4 PID: 809 at kernel/trace/ftrace.c:2065 ftrace_bug+0x28c/0x2e8
   Modules linked in: nop_module(PO-) [last unloaded: nop_module]
   CPU: 4 PID: 809 Comm: sh Tainted: P           O      5.10.0-rc5-01360-gf878ccaf250a #1
   NIP:  c00000000024f334 LR: c00000000024f330 CTR: c0000000001a5af0
   REGS: c000000004c8b760 TRAP: 0700   Tainted: P           O       (5.10.0-rc5-01360-gf878ccaf250a)
   MSR:  900000000282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 28008848  XER: 20040000
   CFAR: c0000000001a9c98 IRQMASK: 0
   GPR00: c00000000024f330 c000000004c8b9f0 c000000002770600 0000000000000022
   GPR04: 00000000ffff7fff c000000004c8b6d0 0000000000000027 c0000007fe9bcdd8
   GPR08: 0000000000000023 ffffffffffffffd8 0000000000000027 c000000002613118
   GPR12: 0000000000008000 c0000007fffdca00 0000000000000000 0000000000000000
   GPR16: 0000000023ec37c5 0000000000000000 0000000000000000 0000000000000008
   GPR20: c000000004c8bc90 c0000000027a2d20 c000000004c8bcd0 c000000002612fe8
   GPR24: 0000000000000038 0000000000000030 0000000000000028 0000000000000020
   GPR28: c000000000ff1b68 c000000000bf8e5c c00000000312f700 c000000000fbb9b0
   NIP ftrace_bug+0x28c/0x2e8
   LR  ftrace_bug+0x288/0x2e8
   Call Trace:
     ftrace_bug+0x288/0x2e8 (unreliable)
     ftrace_modify_all_code+0x168/0x210
     arch_ftrace_update_code+0x18/0x30
     ftrace_run_update_code+0x44/0xc0
     ftrace_startup+0xf8/0x1c0
     register_ftrace_function+0x4c/0xc0
     function_trace_init+0x80/0xb0
     tracing_set_tracer+0x2a4/0x4f0
     tracing_set_trace_write+0xd4/0x130
     vfs_write+0xf0/0x330
     ksys_write+0x84/0x140
     system_call_exception+0x14c/0x230
     system_call_common+0xf0/0x27c

To fix this, issue a ptesync when updating kernel memory PTEs.
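
Conceptually, the change is of this shape (a sketch, not necessarily the exact committed hunk):

  static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
                                       pte_t *ptep, pte_t pte, int percpu)
  {
          *ptep = pte;

          /*
           * Order the PTE update against subsequent kernel accesses so no
           * spurious fault is raised. User mappings can skip this: the page
           * fault handler fixes up any spurious fault there.
           */
          if (is_kernel_addr(addr))
                  asm volatile("ptesync" : : : "memory");
  }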

Fixes: f1cb8f9beba8 ("powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags")
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Tidy up change log slightly]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208032957.1232102-1-jniethe5@gmail.com
3 years ago  powerpc: Spelling/typo fixes
Bhaskar Chowdhury [Wed, 20 Jan 2021 14:20:21 +0000 (19:50 +0530)]
powerpc: Spelling/typo fixes

Various spelling/typo fixes.

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
3 years ago  powerpc: Switch to relative jump labels
Christophe Leroy [Tue, 23 Mar 2021 15:47:59 +0000 (15:47 +0000)]
powerpc: Switch to relative jump labels

Convert powerpc to relative jump labels.

Before the patch, pseries_defconfig vmlinux.o has:
9074 __jump_table  0003f2a0  0000000000000000  0000000000000000  01321fa8  2**0

With the patch, the same config gets:
9074 __jump_table  0002a0e0  0000000000000000  0000000000000000  01321fb4  2**0

Size is 258720 without the patch, 172256 with the patch.
That's a 33% size reduction.

Largely copied from commit c296146c058c ("arm64/kernel: jump_label:
Switch to relative references")

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/828348da7868eda953ce023994404dfc49603b64.1616514473.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Reallocate BPF registers to volatile registers when possible on PPC32
Christophe Leroy [Mon, 22 Mar 2021 16:37:53 +0000 (16:37 +0000)]
powerpc/bpf: Reallocate BPF registers to volatile registers when possible on PPC32

When the BPF routine doesn't call any function, the non-volatile
registers can be reallocated to volatile registers in order to
avoid having to save/restore them on the stack.

Before this patch, the test #359 ADD default X is:

   0: 7c 64 1b 78  mr      r4,r3
   4: 38 60 00 00  li      r3,0
   8: 94 21 ff b0  stwu    r1,-80(r1)
   c: 60 00 00 00  nop
  10: 92 e1 00 2c  stw     r23,44(r1)
  14: 93 01 00 30  stw     r24,48(r1)
  18: 93 21 00 34  stw     r25,52(r1)
  1c: 93 41 00 38  stw     r26,56(r1)
  20: 39 80 00 00  li      r12,0
  24: 39 60 00 00  li      r11,0
  28: 3b 40 00 00  li      r26,0
  2c: 3b 20 00 00  li      r25,0
  30: 7c 98 23 78  mr      r24,r4
  34: 7c 77 1b 78  mr      r23,r3
  38: 39 80 00 42  li      r12,66
  3c: 39 60 00 00  li      r11,0
  40: 7d 8c d2 14  add     r12,r12,r26
  44: 39 60 00 00  li      r11,0
  48: 7d 83 63 78  mr      r3,r12
  4c: 82 e1 00 2c  lwz     r23,44(r1)
  50: 83 01 00 30  lwz     r24,48(r1)
  54: 83 21 00 34  lwz     r25,52(r1)
  58: 83 41 00 38  lwz     r26,56(r1)
  5c: 38 21 00 50  addi    r1,r1,80
  60: 4e 80 00 20  blr

After this patch, the same test has become:

   0: 7c 64 1b 78  mr      r4,r3
   4: 38 60 00 00  li      r3,0
   8: 94 21 ff b0  stwu    r1,-80(r1)
   c: 60 00 00 00  nop
  10: 39 80 00 00  li      r12,0
  14: 39 60 00 00  li      r11,0
  18: 39 00 00 00  li      r8,0
  1c: 38 e0 00 00  li      r7,0
  20: 7c 86 23 78  mr      r6,r4
  24: 7c 65 1b 78  mr      r5,r3
  28: 39 80 00 42  li      r12,66
  2c: 39 60 00 00  li      r11,0
  30: 7d 8c 42 14  add     r12,r12,r8
  34: 39 60 00 00  li      r11,0
  38: 7d 83 63 78  mr      r3,r12
  3c: 38 21 00 50  addi    r1,r1,80
  40: 4e 80 00 20  blr

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b94562d7d2bb21aec89de0c40bb3cd91054b65a2.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Implement extended BPF on PPC32
Christophe Leroy [Mon, 22 Mar 2021 16:37:52 +0000 (16:37 +0000)]
powerpc/bpf: Implement extended BPF on PPC32

Implement Extended Berkeley Packet Filter on Powerpc 32

Test result with test_bpf module:

test_bpf: Summary: 378 PASSED, 0 FAILED, [354/366 JIT'ed]

Registers mapping:

[BPF_REG_0] = r11-r12
/* function arguments */
[BPF_REG_1] = r3-r4
[BPF_REG_2] = r5-r6
[BPF_REG_3] = r7-r8
[BPF_REG_4] = r9-r10
[BPF_REG_5] = r21-r22 (Args 9 and 10 come in via the stack)
/* non volatile registers */
[BPF_REG_6] = r23-r24
[BPF_REG_7] = r25-r26
[BPF_REG_8] = r27-r28
[BPF_REG_9] = r29-r30
/* frame pointer aka BPF_REG_10 */
[BPF_REG_FP] = r17-r18
/* eBPF jit internal registers */
[BPF_REG_AX] = r19-r20
[TMP_REG] = r31

As PPC32 doesn't have a redzone in the stack, a stack frame must always
be set up in order to host at least the tail call counter.

The stack frame remains for tail calls, it is set by the first callee
and freed by the last callee.

r0 is used as a temporary register as much as possible. It is referenced
directly in the code in order to avoid misusing it, because some
instructions interpret it as the value 0 instead of register r0
(e.g. addi, addis, stw, lwz, ...).

The following operations are not implemented:

case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */

The following operations are only implemented for power of two constants:

case BPF_ALU64 | BPF_MOD | BPF_K: /* dst %= imm */
case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/61d8b149176ddf99e7d5cef0b6dc1598583ca202.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/asm: Add some opcodes in asm/ppc-opcode.h for PPC32 eBPF
Christophe Leroy [Mon, 22 Mar 2021 16:37:51 +0000 (16:37 +0000)]
powerpc/asm: Add some opcodes in asm/ppc-opcode.h for PPC32 eBPF

The following opcodes will be needed for the implementation
of eBPF for PPC32. Add them to asm/ppc-opcode.h:

PPC_RAW_ADDE
PPC_RAW_ADDZE
PPC_RAW_ADDME
PPC_RAW_MFLR
PPC_RAW_ADDIC
PPC_RAW_ADDIC_DOT
PPC_RAW_SUBFC
PPC_RAW_SUBFE
PPC_RAW_SUBFIC
PPC_RAW_SUBFZE
PPC_RAW_ANDIS
PPC_RAW_NOR

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f7bd573a368edd78006f8a5af508c726e7ce1ed2.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Change values of SEEN_ flags
Christophe Leroy [Mon, 22 Mar 2021 16:37:50 +0000 (16:37 +0000)]
powerpc/bpf: Change values of SEEN_ flags

Because PPC32 will use more non-volatile registers, move the SEEN_
flags to positions 0-2, which correspond to special registers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/608faa1dc3ecfead649e15392abd07b00313d2ba.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Move common functions into bpf_jit_comp.c
Christophe Leroy [Mon, 22 Mar 2021 16:37:49 +0000 (16:37 +0000)]
powerpc/bpf: Move common functions into bpf_jit_comp.c

Move into bpf_jit_comp.c the functions that will remain common to
PPC64 and PPC32 when we add support for eBPF on PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2c339d77fb168ef12b213ccddfee3cb6c8ce8ae1.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Move common helpers into bpf_jit.h
Christophe Leroy [Mon, 22 Mar 2021 16:37:48 +0000 (16:37 +0000)]
powerpc/bpf: Move common helpers into bpf_jit.h

Move the functions bpf_flush_icache(), bpf_is_seen_register() and
bpf_set_seen_register() into bpf_jit.h in order to reuse them in the
future bpf_jit_comp32.c.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/28e8d5a75e64807d7e9d39a4b52658755e259f8c.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Change register numbering for bpf_set/is_seen_register()
Christophe Leroy [Mon, 22 Mar 2021 16:37:47 +0000 (16:37 +0000)]
powerpc/bpf: Change register numbering for bpf_set/is_seen_register()

Instead of using BPF register number as input in functions
bpf_set_seen_register() and bpf_is_seen_register(), use
CPU register number directly.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0cd2506f598e7095ea43e62dca1f472de5474a0d.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/bpf: Remove classical BPF support for PPC32
Christophe Leroy [Mon, 22 Mar 2021 16:37:46 +0000 (16:37 +0000)]
powerpc/bpf: Remove classical BPF support for PPC32

At present, PPC32 has classical BPF support.

The test_bpf module exhibits some failures:

test_bpf: #298 LD_IND byte frag jited:1 ret 202 != 66 FAIL (1 times)
test_bpf: #299 LD_IND halfword frag jited:1 ret 51958 != 17220 FAIL (1 times)
test_bpf: #301 LD_IND halfword mixed head/frag jited:1 ret 51958 != 1305 FAIL (1 times)
test_bpf: #303 LD_ABS byte frag jited:1 ret 202 != 66 FAIL (1 times)
test_bpf: #304 LD_ABS halfword frag jited:1 ret 51958 != 17220 FAIL (1 times)
test_bpf: #306 LD_ABS halfword mixed head/frag jited:1 ret 51958 != 1305 FAIL (1 times)

test_bpf: Summary: 371 PASSED, 7 FAILED, [119/366 JIT'ed]

Fixing this is not worth the effort. Instead, remove support for
classical BPF and prepare for adding extended BPF support.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fbc3e4fcc9c8f6131d6c705212530b2aa50149ee.1616430991.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Simplify logging in sigreturn()
Christophe Leroy [Fri, 19 Mar 2021 11:07:00 +0000 (11:07 +0000)]
powerpc/signal32: Simplify logging in sigreturn()

In the same spirit as commit debf122c777f ("powerpc/signal32: Simplify
logging in handle_rt_signal32()"), remove the intermediate 'addr' local var.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/638fa99530beb29f82f94370057d110e91272acc.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Convert do_setcontext[_tm]() to user access block
Christophe Leroy [Fri, 19 Mar 2021 11:06:58 +0000 (11:06 +0000)]
powerpc/signal32: Convert do_setcontext[_tm]() to user access block

Add unsafe_get_user_sigset() and transform PPC32 get_sigset_t()
into an unsafe version unsafe_get_sigset_t().

Then convert do_setcontext() and do_setcontext_tm() to use
user_read_access_begin/end.
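
The converted do_setcontext() then looks roughly like this (a condensed sketch; error handling is trimmed and the TM variant is omitted):

  static int do_setcontext(struct ucontext __user *ucp, struct pt_regs *regs, int sig)
  {
          sigset_t set;
          struct mcontext __user *mcp;

          if (!user_read_access_begin(ucp, sizeof(*ucp)))
                  return -EFAULT;

          unsafe_get_sigset_t(&set, &ucp->uc_sigmask, failed);
          unsafe_get_user(mcp, &ucp->uc_regs, failed);
          user_read_access_end();

          set_current_blocked(&set);
          if (restore_user_regs(regs, mcp, sig))
                  return -EFAULT;
          return 0;

  failed:
          user_read_access_end();
          return -EFAULT;
  }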

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9273ba664db769b8d9c7540ae91395e346e4945e.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Convert restore_[tm]_user_regs() to user access block
Christophe Leroy [Fri, 19 Mar 2021 11:06:57 +0000 (11:06 +0000)]
powerpc/signal32: Convert restore_[tm]_user_regs() to user access block

Convert restore_user_regs() and restore_tm_user_regs()
to use user_access_read_begin/end blocks.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/181adf15a6f644efcd1aeafb355f3578ff1b6bc5.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Reorder user reads in restore_tm_user_regs()
Christophe Leroy [Fri, 19 Mar 2021 11:06:56 +0000 (11:06 +0000)]
powerpc/signal32: Reorder user reads in restore_tm_user_regs()

In restore_tm_user_regs(), regroup the reads from 'sr' and the ones
from 'tm_sr' together in order to allow two block user accesses in the
following patch.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7c518b9a4c8e5ae9a3bfb647bc8b20bf820233af.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Perform access_ok() inside restore_user_regs()
Christophe Leroy [Fri, 19 Mar 2021 11:06:55 +0000 (11:06 +0000)]
powerpc/signal32: Perform access_ok() inside restore_user_regs()

In preparation of using user_access_begin/end in restore_user_regs(),
move the access_ok() inside the function.

It makes no difference, as the behaviour on a failed access_ok() is
the same as on a failed restore_user_regs().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c106eb2f37c3040f1fd38b40e50c670feb7cb835.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Remove ifdefery in middle of if/else in sigreturn()
Christophe Leroy [Fri, 19 Mar 2021 11:06:54 +0000 (11:06 +0000)]
powerpc/signal32: Remove ifdefery in middle of if/else in sigreturn()

In the same spirit as commit f1cf4f93de2f ("powerpc/signal32: Remove
ifdefery in middle of if/else")

MSR_TM_ACTIVE() is always defined and always returns 0 when
CONFIG_PPC_TRANSACTIONAL_MEM is not selected, so the awful
ifdefery in the middle of an if/else can be removed.

Make 'msr_hi' a 'long long' to avoid a build failure on PPC32
due to the 32-bit left shift.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a4b48b2f0be1ef13fc8e57452b7f8350da28d521.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal32: Rename save_user_regs_unsafe() and save_general_regs_unsafe()
Christophe Leroy [Fri, 19 Mar 2021 11:06:53 +0000 (11:06 +0000)]
powerpc/signal32: Rename save_user_regs_unsafe() and save_general_regs_unsafe()

Convention is to prefix functions with __unsafe_ instead of
suffixing them with _unsafe.

Rename save_user_regs_unsafe() and save_general_regs_unsafe()
accordingly, that is respectively __unsafe_save_general_regs() and
__unsafe_save_user_regs().

Suggested-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8cef43607e5b35a7fd0829dec812d88beb570df2.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/signal: Add unsafe_copy_ck{fpr/vsx}_from_user
Christophe Leroy [Fri, 19 Mar 2021 11:06:52 +0000 (11:06 +0000)]
powerpc/signal: Add unsafe_copy_ck{fpr/vsx}_from_user

Add unsafe_copy_ckfpr_from_user() and unsafe_copy_ckvsx_from_user()

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1040687aa27553d19f749f7fb48f0c07af98ee2d.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Also perform 64 bits copies in unsafe_copy_from_user() on ppc32
Christophe Leroy [Fri, 19 Mar 2021 11:06:51 +0000 (11:06 +0000)]
powerpc/uaccess: Also perform 64 bits copies in unsafe_copy_from_user() on ppc32

Similarly to commit 5cf773fc8f37 ("powerpc/uaccess: Also perform
64 bits copies in unsafe_copy_to_user() on ppc32"),

ppc32 has an efficient 64-bit unsafe_get_user(), so also use it in
order to unroll loops more.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/308e65d9237a14e8c0e3b22919fcf0b5e5592608.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  signal: Add unsafe_get_compat_sigset()
Christophe Leroy [Fri, 19 Mar 2021 11:06:50 +0000 (11:06 +0000)]
signal: Add unsafe_get_compat_sigset()

In the same way as commit 14026b94ccfe ("signal: Add
unsafe_put_compat_sigset()"), this time add the
unsafe_get_compat_sigset() macro, which is the 'unsafe'
version of get_compat_sigset().

For big endian, use unsafe_get_user() directly
to avoid an intermediate copy through the stack.

For little endian, use a straight unsafe_copy_from_user().

This commit adds the generic fallback for unsafe_copy_from_user().
Architectures wanting to use unsafe_get_compat_sigset() have to
make sure they have their own unsafe_copy_from_user().
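
A simplified sketch of the two variants, reduced to a single 64-bit word (the real macro walks all of _NSIG_WORDS):

  #ifdef __BIG_ENDIAN
  /* Big endian: assemble each 64-bit word from two 32-bit compat words */
  #define unsafe_get_compat_sigset(set, compat, label) do {             \
          const compat_sigset_t __user *__c = compat;                   \
          compat_sigset_word hi, lo;                                    \
                                                                        \
          unsafe_get_user(lo, &__c->sig[0], label);                     \
          unsafe_get_user(hi, &__c->sig[1], label);                     \
          (set)->sig[0] = ((unsigned long)hi << 32) | lo;               \
  } while (0)
  #else
  /* Little endian: the layouts match, a plain copy is enough */
  #define unsafe_get_compat_sigset(set, compat, label)                  \
          unsafe_copy_from_user(set, compat, sizeof(compat_sigset_t), label)
  #endif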

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b05bf434ee13c76bc9df5f02653a10db5e7b54e5.1616151715.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Use asm goto for get_user when compiler supports it
Christophe Leroy [Wed, 10 Mar 2021 17:46:54 +0000 (17:46 +0000)]
powerpc/uaccess: Use asm goto for get_user when compiler supports it

clang 11 and future GCC support asm goto with outputs.

Use it to implement get_user() in order to get better generated code.

Note that clang requires x to be set in the default branch of
__get_user_size_goto(), otherwise it complains about x not being
initialised :puzzled:

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/403745b5aaa1b315bb4e8e46c1ba949e77eecec0.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Introduce __get_user_size_goto()
Christophe Leroy [Wed, 10 Mar 2021 17:46:53 +0000 (17:46 +0000)]
powerpc/uaccess: Introduce __get_user_size_goto()

We have two places doing a goto based on the result
of __get_user_size_allowed().

Refactor that into __get_user_size_goto().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/def8a39289e02653cfb1583b3b19837de9efed3a.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Refactor get/put_user() and __get/put_user()
Christophe Leroy [Wed, 10 Mar 2021 17:46:52 +0000 (17:46 +0000)]
powerpc/uaccess: Refactor get/put_user() and __get/put_user()

Make get_user() do the access_ok() check then call __get_user().
Make put_user() do the access_ok() check then call __put_user().

Then embed __get_user_size() and __put_user_size() in
__get_user() and __put_user().
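
In outline, the refactored macros take this shape (a simplified sketch, not the exact kernel code):

  #define get_user(x, ptr) ({                                           \
          const __typeof__(*(ptr)) __user *__gu_addr = (ptr);           \
          access_ok(__gu_addr, sizeof(*__gu_addr)) ?                    \
                  __get_user(x, __gu_addr) : -EFAULT;                   \
  })

  #define put_user(x, ptr) ({                                           \
          __typeof__(*(ptr)) __user *__pu_addr = (ptr);                 \
          access_ok(__pu_addr, sizeof(*__pu_addr)) ?                    \
                  __put_user(x, __pu_addr) : -EFAULT;                   \
  })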

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/eebc554f6a81f570c46ea3551000ff5b886e4faa.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Rename __get/put_user_check/nocheck
Christophe Leroy [Wed, 10 Mar 2021 17:46:51 +0000 (17:46 +0000)]
powerpc/uaccess: Rename __get/put_user_check/nocheck

__get_user_check() becomes get_user()
__put_user_check() becomes put_user()
__get_user_nocheck() becomes __get_user()
__put_user_nocheck() becomes __put_user()

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/41d7e45f4733f0e61e63824e4865b4e049db74d6.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Split out __get_user_nocheck()
Christophe Leroy [Wed, 10 Mar 2021 17:46:50 +0000 (17:46 +0000)]
powerpc/uaccess: Split out __get_user_nocheck()

One part of __get_user_nocheck() is used for __get_user(),
the other part for unsafe_get_user().

Move the part dedicated to unsafe_get_user() into unsafe_get_user() itself.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/618fe2e0626b308a5a063d5baac827b968e85c32.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Remove calls to __get_user_bad() and __put_user_bad()
Christophe Leroy [Wed, 10 Mar 2021 17:46:49 +0000 (17:46 +0000)]
powerpc/uaccess: Remove calls to __get_user_bad() and __put_user_bad()

__get_user_bad() and __put_user_bad() are functions that are
declared but not defined, in order to make the link fail in
case they are called.

Nowadays, we have BUILD_BUG() and BUILD_BUG_ON() for that, and
they have the advantage of breaking the build earlier, at
compile time instead of link time.
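
Concretely, the size dispatch can then use BUILD_BUG() in its default branch, e.g. (a sketch, simplified from the real macros):

  #define __get_user_size_allowed(x, ptr, size, retval)                 \
  do {                                                                  \
          retval = 0;                                                   \
          switch (size) {                                               \
          case 1: __get_user_asm(x, ptr, retval, "lbz"); break;         \
          case 2: __get_user_asm(x, ptr, retval, "lhz"); break;         \
          case 4: __get_user_asm(x, ptr, retval, "lwz"); break;         \
          case 8: __get_user_asm2(x, ptr, retval); break;               \
          default:                                                      \
                  BUILD_BUG(); /* previously: x = __get_user_bad(); */  \
          }                                                             \
  } while (0)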

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d7d839e994f49fae4ff7b70fac72bd951272436b.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Remove __chk_user_ptr() in __get/put_user
Christophe Leroy [Wed, 10 Mar 2021 17:46:48 +0000 (17:46 +0000)]
powerpc/uaccess: Remove __chk_user_ptr() in __get/put_user

Commit d02f6b7dab82 ("powerpc/uaccess: Evaluate macro arguments once,
before user access is allowed") changed the __chk_user_ptr()
argument from the passed ptr pointer to the locally
declared __gu_addr. But __gu_addr is locally defined as __user,
so the check is pointless.

During a kernel build __chk_user_ptr() expands to nothing and is only
evaluated during sparse checks, so it would have been harmless to leave
the original pointer check there.

Nevertheless, this check is indeed redundant with the assignment
above, which casts the ptr pointer to the local __user __gu_addr.
In case of mismatch, sparse will detect it there, so the
__chk_user_ptr() is not needed anywhere else than in access_ok().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/69f17d75046733b891ab2e668dbf464787cdf598.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Remove __unsafe_put_user_goto()
Christophe Leroy [Wed, 10 Mar 2021 17:46:47 +0000 (17:46 +0000)]
powerpc/uaccess: Remove __unsafe_put_user_goto()

__unsafe_put_user_goto() is just an intermediate layer to
__put_user_size_goto() with no added value other than doing
the __user pointer type checking.

Do the __user pointer type checking in __put_user_size_goto()
and remove __unsafe_put_user_goto().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b6552149209aebd887a6977272b06a41256bdb9f.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Call might_fault() unconditionally
Christophe Leroy [Wed, 10 Mar 2021 17:46:46 +0000 (17:46 +0000)]
powerpc/uaccess: Call might_fault() unconditionally

Commit 6bfd93c32a50 ("powerpc: Fix incorrect might_sleep in
__get_user/__put_user on kernel addresses") added a check to not call
might_sleep() on kernel addresses. This was to enable the use of
__get_user() in the alignment exception handler for any address.

Then commit 95156f0051cb ("lockdep, mm: fix might_fault() annotation")
added a check of the address space in might_fault(), based on
set_fs() logic. But this didn't solve the powerpc alignment exception
case as it didn't call set_fs(KERNEL_DS).

Nowadays, set_fs() is gone, the previous patch fixed the alignment
exception handler, and __get_user/__put_user are no longer supposed
to be used to read kernel memory.

Therefore the is_kernel_addr() check has become useless and can be
removed.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e0a980a4dc7a2551183dd5cb30f46eafdbee390c.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/align: Don't use __get_user_instr() on kernel addresses
Christophe Leroy [Wed, 10 Mar 2021 17:46:45 +0000 (17:46 +0000)]
powerpc/align: Don't use __get_user_instr() on kernel addresses

In the old days, when we didn't have kernel userspace access
protection and had set_fs(), it was wise to use __get_user()
and friends to read kernel memory.

Nowadays, get_user() grants userspace access and is exclusively
for userspace access.

In the alignment exception handler, use probe_kernel_read_inst()
instead of __get_user_instr() for reading instructions in the kernel.

This will allow removing the is_kernel_addr() check in
__get/put_user() in a following patch.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d9ecbce00178484e66ca7adec2ff210058037704.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Move get_user_instr helpers in asm/inst.h
Christophe Leroy [Wed, 10 Mar 2021 17:46:44 +0000 (17:46 +0000)]
powerpc/uaccess: Move get_user_instr helpers in asm/inst.h

Those helpers use the get_user helpers but they don't participate
in their implementation, so they do not belong in asm/uaccess.h.

Move them to asm/inst.h.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2c6e83581b4fa434aa7cf2fa7714c41e98f57007.1615398265.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/uaccess: Remove __get/put_user_inatomic()
Christophe Leroy [Wed, 10 Mar 2021 17:46:43 +0000 (17:46 +0000)]
powerpc/uaccess: Remove __get/put_user_inatomic()

Powerpc is the only architecture having _inatomic variants of
__get_user() and __put_user() accessors. They were introduced
by commit e68c825bb016 ("[POWERPC] Add inatomic versions of __get_user
and __put_user").

Those variants expand to the _nosleep macros instead of expanding
to the _nocheck macros. The only difference between the _nocheck
and the _nosleep macros is the call to might_fault().

Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
pagefault_disable()"), __get/put_user() can be used in atomic parts
of the code; therefore __get/put_user_inatomic() have become useless.

Remove __get_user_inatomic() and __put_user_inatomic().
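
With pagefaults disabled, the plain accessors behave like the old
_inatomic variants; a hedged sketch (variable names illustrative):

  pagefault_disable();
  err = __get_user(val, (u32 __user *)ea);  /* was __get_user_inatomic() */
  pagefault_enable();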

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1e5c895669e8d54a7810b62dc61eb111f33c2c37.1615398265.git.christophe.leroy@csgroup.eu
3 years agopowerpc/align: Convert emulate_spe() to user_access_begin
Christophe Leroy [Fri, 12 Mar 2021 13:25:11 +0000 (13:25 +0000)]
powerpc/align: Convert emulate_spe() to user_access_begin

This patch converts emulate_spe() to the user_access_begin()
logic.

Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
pagefault_disable()"), might_fault() doesn't fire when called from
sections where pagefaults are disabled, which must be the case
when using _inatomic variants of __get_user and __put_user. So
the might_fault() in user_access_begin() is not a problem.

There was a verification of user_mode() together with the access_ok(),
but a second user_mode() check just after leads to an immediate
return. The access_ok() is now part of user_access_begin(), which is
called after that other user_mode() verification, so there is no need
to check user_mode() again.
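
The resulting access pattern, as a hedged sketch (label and length
names illustrative, not the exact emulate_spe() diff):

  if (!user_access_begin(addr, nb))
          return -EFAULT;
  unsafe_copy_from_user(temp.v, addr, nb, Efault);
  user_access_end();
  return 0;

  Efault:
  user_access_end();
  return -EFAULT;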

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c95a648fdf75992c9d88f3c73cc23e7537fcf2ad.1615555354.git.christophe.leroy@csgroup.eu
3 years agopowerpc/uaccess: Define ___get_user_instr() for ppc32
Christophe Leroy [Wed, 10 Mar 2021 17:46:41 +0000 (17:46 +0000)]
powerpc/uaccess: Define ___get_user_instr() for ppc32

Define a simple ___get_user_instr() for ppc32 instead of
defining ppc32 versions of the three get_user_instr()
helpers.
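
A sketch of the ppc32 definition (on ppc32 an instruction is a plain
u32, so one macro can back all three helpers):

  #define ___get_user_instr(gu_op, dest, ptr)   \
          gu_op((dest).val, (u32 __user *)(ptr))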

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e02f83ec74f26d76df2874f0ce4d5cc69c3469ae.1615398265.git.christophe.leroy@csgroup.eu
3 years agopowerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()
Christophe Leroy [Wed, 10 Mar 2021 17:46:40 +0000 (17:46 +0000)]
powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()

Those two macros have only one user, which is unsafe_get_user().

Put everything in one place and remove them.
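
After the fold, unsafe_get_user() open-codes the failure goto; roughly
(a sketch, not the verbatim result):

  #define unsafe_get_user(x, p, e) do {         \
          if (unlikely(__get_user((x), (p))))   \
                  goto e;                       \
  } while (0)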

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/439179c5e54c18f2cb8bdf1eea13ea0ef6b98375.1615398265.git.christophe.leroy@csgroup.eu
3 years agopowerpc/mm: Revert "powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc"
Aneesh Kumar K.V [Thu, 18 Mar 2021 03:48:55 +0000 (09:18 +0530)]
powerpc/mm: Revert "powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc"

This reverts commit 675bceb097e6 ("powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc")

All the related issues are fixed as of commit:
  f14312e1ed1e ("mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.")

Hence re-enable it.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318034855.74513-1-aneesh.kumar@linux.ibm.com
3 years agopowerpc/pseries: Only register vio drivers if vio bus exists
Michael Ellerman [Tue, 16 Mar 2021 01:09:38 +0000 (12:09 +1100)]
powerpc/pseries: Only register vio drivers if vio bus exists

The vio bus is a fake bus, which we use on pseries LPARs (guests) to
discover devices provided by the hypervisor. There's no need or sense
in creating the vio bus on bare metal systems.

Which is why commit 4336b9337824 ("powerpc/pseries: Make vio and
ibmebus initcalls pseries specific") made the initialisation of the
vio bus only happen in LPARs.

However as a result of that commit we now see errors at boot on bare
metal systems:

  Driver 'hvc_console' was unable to register with bus_type 'vio' because the bus was not initialized.
  Driver 'tpm_ibmvtpm' was unable to register with bus_type 'vio' because the bus was not initialized.

This happens because those drivers are built-in, and are calling
vio_register_driver(). It in turn calls driver_register() with a
reference to vio_bus_type, but we haven't registered vio_bus_type with
the driver core.

Fix it by also guarding vio_register_driver() with a check to see if
we are on pseries.
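
One plausible shape of the guard (machine_is() is the standard
platform test; exact placement in __vio_register_driver() may differ):

  int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
                            const char *mod_name)
  {
          /* vio_bus_type is only initialised on pseries LPARs */
          if (!machine_is(pseries))
                  return -ENODEV;

          /* ... existing registration via driver_register() ... */
  }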

Fixes: 4336b9337824 ("powerpc/pseries: Make vio and ibmebus initcalls pseries specific")
Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Link: https://lore.kernel.org/r/20210316010938.525657-1-mpe@ellerman.id.au
3 years agopowerpc/powernv: Remove unneeded variable: "rc"
dingsenjie [Fri, 26 Mar 2021 11:53:56 +0000 (19:53 +0800)]
powerpc/powernv: Remove unneeded variable: "rc"

Remove unneeded variable: "rc".

Signed-off-by: dingsenjie <dingsenjie@yulong.com>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210326115356.12444-1-dingsenjie@163.com
3 years agopowerpc: Fix HAVE_HARDLOCKUP_DETECTOR_ARCH build configuration
Chen Huang [Sat, 27 Mar 2021 09:49:00 +0000 (09:49 +0000)]
powerpc: Fix HAVE_HARDLOCKUP_DETECTOR_ARCH build configuration

When compiling powerpc with SMP disabled, the build shows this issue:

arch/powerpc/kernel/watchdog.c: In function ‘watchdog_smp_panic’:
arch/powerpc/kernel/watchdog.c:177:4: error: implicit declaration of function ‘smp_send_nmi_ipi’; did you mean ‘smp_send_stop’? [-Werror=implicit-function-declaration]
  177 |    smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
      |    ^~~~~~~~~~~~~~~~
      |    smp_send_stop
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:273: arch/powerpc/kernel/watchdog.o] Error 1
make[1]: *** [scripts/Makefile.build:534: arch/powerpc/kernel] Error 2
make: *** [Makefile:1980: arch/powerpc] Error 2
make: *** Waiting for unfinished jobs....

We found that powerpc uses an IPI to implement the hardlockup
watchdog, so HAVE_HARDLOCKUP_DETECTOR_ARCH should depend on SMP.

Fixes: 2104180a5369 ("powerpc/64s: implement arch-specific hardlockup watchdog")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Chen Huang <chenhuang5@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210327094900.938555-1-chenhuang5@huawei.com
3 years agopowerpc/pseries/hotplug-cpu: Show 'last online CPU' error in dlpar_cpu_offline()
Daniel Henrique Barboza [Tue, 23 Mar 2021 20:50:56 +0000 (17:50 -0300)]
powerpc/pseries/hotplug-cpu: Show 'last online CPU' error in dlpar_cpu_offline()

One of the reasons that dlpar_cpu_offline can fail is when attempting to
offline the last online CPU of the kernel. This can be observed in a
pseries QEMU guest that has hotplugged CPUs. If the user offlines all
other CPUs of the guest, and a hotplugged CPU is now the last online
CPU, trying to reclaim it will fail.

In this situation, the current code returns -EBUSY together with a
generic error message, e.g.:

  pseries-hotplug-cpu: Failed to offline CPU PowerPC,POWER9, rc: -16

EBUSY can be caused by other conditions, such as cpu_hotplug_disable
being true. Emitting a more specific error message for this case,
instead of just "Failed to offline CPU", makes it clearer that the
error is a known situation rather than some generic/unknown cause.

This patch adds a 'last online' check in dlpar_cpu_offline() to catch
the 'last online CPU' offline error, returning a more informative error
message:

  pseries-hotplug-cpu: Unable to remove last online CPU PowerPC,POWER9
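
A hedged sketch of such a check (variable names illustrative; the
exact placement inside dlpar_cpu_offline()'s CPU loop differs):

  if (num_online_cpus() == 1) {
          pr_warn("Unable to remove last online CPU %s\n", drc_name);
          rc = -EBUSY;
          break;
  }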

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210323205056.52768-2-danielhb413@gmail.com
3 years agoselftests/powerpc: unmark non-kernel-doc comments
Randy Dunlap [Thu, 25 Mar 2021 20:08:20 +0000 (13:08 -0700)]
selftests/powerpc: unmark non-kernel-doc comments

Drop the 'beginning of kernel-doc' notation markers (/**)
in places that are not in kernel-doc format.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210325200820.16594-1-rdunlap@infradead.org
3 years agocxl: Fix couple of spellings
Bhaskar Chowdhury [Mon, 22 Mar 2021 02:33:07 +0000 (08:03 +0530)]
cxl: Fix couple of spellings

s/filesytem/filesystem/
s/symantics/semantics/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210322023307.168754-1-unixbhaskar@gmail.com
3 years agopowerpc/irq: Inline call_do_irq() and call_do_softirq()
Christophe Leroy [Fri, 19 Mar 2021 06:34:43 +0000 (06:34 +0000)]
powerpc/irq: Inline call_do_irq() and call_do_softirq()

call_do_irq() and call_do_softirq() are simple enough to be
worth inlining.

Inlining them avoids an mflr/mtlr pair plus a save/reload on stack.

This is inspired by the s390 arch. Several other arches do more or
less the same. The way the sparc arch does it seems odd, though.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210320122227.345427-1-mpe@ellerman.id.au
3 years agopowerpc/setup_64: Fix sparse warnings
He Ying [Tue, 16 Mar 2021 04:11:48 +0000 (00:11 -0400)]
powerpc/setup_64: Fix sparse warnings

Sparse warns:
  warning: symbol 'rfi_flush' was not declared.
  warning: symbol 'entry_flush' was not declared.
  warning: symbol 'uaccess_flush' was not declared.

Define 'entry_flush' and 'uaccess_flush' as static because they are
not referenced outside the file. Include asm/security_features.h in
which 'rfi_flush' is declared.
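
In code form the change amounts to roughly (a sketch):

  #include <asm/security_features.h>    /* declares rfi_flush */

  static bool entry_flush;
  static bool uaccess_flush;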

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: He Ying <heying24@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210316041148.29694-1-heying24@huawei.com
3 years agopowerpc/embedded6xx: Remove CONFIG_MV64X60
Christophe Leroy [Thu, 18 Mar 2021 17:25:07 +0000 (17:25 +0000)]
powerpc/embedded6xx: Remove CONFIG_MV64X60

Commit 92c8c16f3457 ("powerpc/embedded6xx: Remove C2K board support")
removed the last selector of CONFIG_MV64X60.

As it is not a user-selectable config, it can be removed.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Wolfram Sang <wsa@kernel.org> # for I2C
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/19e57d16692dcd1ca67ba880d7273a57fab416aa.1616085654.git.christophe.leroy@csgroup.eu
3 years agopowerpc/iommu/debug: fix ifnullfree.cocci warnings
kernel test robot [Thu, 18 Mar 2021 23:44:41 +0000 (07:44 +0800)]
powerpc/iommu/debug: fix ifnullfree.cocci warnings

arch/powerpc/kernel/iommu.c:76:2-16: WARNING: NULL check before some freeing functions is not needed.

 NULL check before some freeing functions is not needed.

 Based on checkpatch warning
 "kfree(NULL) is safe this check is probably not required"
 and kfreeaddr.cocci by Julia Lawall.

Generated by: scripts/coccinelle/free/ifnullfree.cocci

Fixes: 691602aab9c3 ("powerpc/iommu/debug: Add debugfs entries for IOMMU tables")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318234441.GA63469@f8e20a472e81
3 years agopowerpc: Fix arch_stack_walk() to have running function as first entry
Christophe Leroy [Tue, 16 Mar 2021 07:57:16 +0000 (07:57 +0000)]
powerpc: Fix arch_stack_walk() to have running function as first entry

It seems that other architectures, namely x86, arm64 and riscv at
least, include the running function as the top entry when saving a
stack trace with save_stack_trace_regs().

Functionalities like KFENCE expect it.

Do the same on powerpc; it allows KFENCE and other users to
properly identify the faulting function, as depicted below.
Before this patch, KFENCE identified finish_task_switch.isra
as the faulting function.

[   14.937370] ==================================================================
[   14.948692] BUG: KFENCE: invalid read in test_invalid_access+0x54/0x108
[   14.948692]
[   14.956814] Invalid read at 0xdf98800a:
[   14.960664]  test_invalid_access+0x54/0x108
[   14.964876]  finish_task_switch.isra.0+0x54/0x23c
[   14.969606]  kunit_try_run_case+0x5c/0xd0
[   14.973658]  kunit_generic_run_threadfn_adapter+0x24/0x30
[   14.979079]  kthread+0x15c/0x174
[   14.982342]  ret_from_kernel_thread+0x14/0x1c
[   14.986731]
[   14.988236] CPU: 0 PID: 111 Comm: kunit_try_catch Tainted: G    B             5.12.0-rc1-01537-g95f6e2088d7e-dirty #4682
[   14.999795] NIP:  c016ec2c LR: c02f517c CTR: c016ebd8
[   15.004851] REGS: e2449d90 TRAP: 0301   Tainted: G    B              (5.12.0-rc1-01537-g95f6e2088d7e-dirty)
[   15.015274] MSR:  00009032 <EE,ME,IR,DR,RI>  CR: 22000004  XER: 00000000
[   15.022043] DAR: df98800a DSISR: 20000000
[   15.022043] GPR00: c02f517c e2449e50 c1142080 e100dd24 c084b13c 00000008 c084b32b c016ebd8
[   15.022043] GPR08: c0850000 df988000 c0d10000 e2449eb0 22000288
[   15.040581] NIP [c016ec2c] test_invalid_access+0x54/0x108
[   15.046010] LR [c02f517c] kunit_try_run_case+0x5c/0xd0
[   15.051181] Call Trace:
[   15.053637] [e2449e50] [c005a68c] finish_task_switch.isra.0+0x54/0x23c (unreliable)
[   15.061338] [e2449eb0] [c02f517c] kunit_try_run_case+0x5c/0xd0
[   15.067215] [e2449ed0] [c02f648c] kunit_generic_run_threadfn_adapter+0x24/0x30
[   15.074472] [e2449ef0] [c004e7b0] kthread+0x15c/0x174
[   15.079571] [e2449f30] [c001317c] ret_from_kernel_thread+0x14/0x1c
[   15.085798] Instruction dump:
[   15.088784] 8129d608 38e7ebd8 81020280 911f004c 39000000 995f0024 907f0028 90ff001c
[   15.096613] 3949000a 915f0020 3d40c0d1 3d00c085 <8929000a> 3908adb0 812a4b98 3d40c02f
[   15.104612] ==================================================================

Fixes: 35de3b1aa168 ("powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/21324f9e2f21d1640c8397b4d1d857a9355a2283.1615881400.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Convert stacktrace to generic ARCH_STACKWALK
Christophe Leroy [Tue, 16 Mar 2021 07:57:15 +0000 (07:57 +0000)]
powerpc: Convert stacktrace to generic ARCH_STACKWALK

This patch converts powerpc stacktrace to the generic ARCH_STACKWALK
implemented by commit 214d8ca6ee85 ("stacktrace: Provide common
infrastructure").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/73b36bbb101299760b95ecd2cd3a46554bea8bf9.1615881400.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Rename 'tsk' parameter into 'task'
Christophe Leroy [Tue, 16 Mar 2021 07:57:14 +0000 (07:57 +0000)]
powerpc: Rename 'tsk' parameter into 'task'

To better match generic code, rename 'tsk' to 'task' in
some stacktrace functions, in preparation for the following
patch which converts powerpc to the generic ARCH_STACKWALK.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/117f0200e11961af6c0fdf85c98373e5dcf96a47.1615881400.git.christophe.leroy@csgroup.eu
3 years agopowerpc: Activate HAVE_RELIABLE_STACKTRACE for all
Christophe Leroy [Tue, 16 Mar 2021 07:57:13 +0000 (07:57 +0000)]
powerpc: Activate HAVE_RELIABLE_STACKTRACE for all

CONFIG_HAVE_RELIABLE_STACKTRACE is applicable to all, so there is no
reason to limit it to book3s/64le.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/955248c6423cb068c5965923121ba31d4dd2fdde.1615881400.git.christophe.leroy@csgroup.eu
3 years agopowerpc/book3s64/kuap: Move Kconfig variables to BOOK3S_64
Aneesh Kumar K.V [Thu, 18 Mar 2021 03:48:29 +0000 (09:18 +0530)]
powerpc/book3s64/kuap: Move Kconfig variables to BOOK3S_64

With the two commits below:
commit c91435d95c49 ("powerpc/book3s64/hash/kuep: Enable KUEP on hash")
commit b2ff33a10c8b ("powerpc/book3s64/hash/kuap: Enable kuap on hash")
the kernel now supports kuap/kuep with hash translation. Hence select
the Kconfig options even when radix is disabled.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318034829.72255-1-aneesh.kumar@linux.ibm.com
3 years agopowerpc/kernel: Trivial typo fix in kgdb.c
Bhaskar Chowdhury [Wed, 17 Mar 2021 09:04:13 +0000 (14:34 +0530)]
powerpc/kernel: Trivial typo fix in kgdb.c

s/procesing/processing/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210317090413.120891-1-unixbhaskar@gmail.com
3 years agopowerpc/64s: Fix hash fault to use TRAP accessor
Nicholas Piggin [Tue, 16 Mar 2021 10:52:05 +0000 (20:52 +1000)]
powerpc/64s: Fix hash fault to use TRAP accessor

Hash faults use the trap vector to decide whether this is an
instruction or data fault. This should use the TRAP accessor
rather than accessing regs->trap directly.

This won't cause a problem at the moment because 64s only uses
trap flags for system call interrupts (the norestart flag), but
that could change if any other trap flags get used in future.

Fixes: a4922f5442e7e ("powerpc/64s: move the hash fault handling logic to C")
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210316105205.407767-1-npiggin@gmail.com
3 years agopowerpc/mm: Remove unneeded #ifdef CONFIG_PPC_MEM_KEYS
Christophe Leroy [Mon, 15 Mar 2021 14:52:51 +0000 (14:52 +0000)]
powerpc/mm: Remove unneeded #ifdef CONFIG_PPC_MEM_KEYS

In fault.c, #ifdef CONFIG_PPC_MEM_KEYS is not needed because all
functions are always defined. Moreover, arch_vma_access_permitted()
always returns true when CONFIG_PPC_MEM_KEYS is not defined, so
access_pkey_error() returns false and bad_access_pkey() is never
called.

Include linux/pkeys.h to get a definition of vma_pkey() for
bad_access_pkey().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8038392f38d81f2ad169347efac29146f553b238.1615819955.git.christophe.leroy@csgroup.eu
3 years agopowerpc/fsl-pci: Fix section mismatch warning
Michael Ellerman [Sun, 14 Mar 2021 09:33:41 +0000 (20:33 +1100)]
powerpc/fsl-pci: Fix section mismatch warning

Section mismatch in reference from the function .fsl_add_bridge() to
the function .init.text:.setup_pci_cmd()

fsl_add_bridge() is not __init, and can't be, and is the only caller
of setup_pci_cmd(). Fix it by making setup_pci_cmd() non-init.
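
The fix is simply dropping the annotation (a sketch):

  /* was: static void __init setup_pci_cmd(...) */
  static void setup_pci_cmd(struct pci_controller *hose)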

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093341.132986-1-mpe@ellerman.id.au
3 years agopowerpc: Fix section mismatch warning in smp_setup_pacas()
Michael Ellerman [Sun, 14 Mar 2021 09:33:33 +0000 (20:33 +1100)]
powerpc: Fix section mismatch warning in smp_setup_pacas()

Section mismatch in reference from the function .smp_setup_pacas() to
the function .init.text:.allocate_paca()

The only caller of smp_setup_pacas() is setup_arch() which is __init,
so mark smp_setup_pacas() __init.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093333.132657-1-mpe@ellerman.id.au
3 years agopowerpc/64s: Fold update_current_thread_[i]amr() into their only callers
Michael Ellerman [Sun, 14 Mar 2021 09:33:20 +0000 (20:33 +1100)]
powerpc/64s: Fold update_current_thread_[i]amr() into their only callers

lkp reported warnings in some configurations due to
update_current_thread_amr() being unused:

  arch/powerpc/mm/book3s64/pkeys.c:284:20: error: unused function 'update_current_thread_amr'
  static inline void update_current_thread_amr(u64 value)

That is because its only use is inside an ifdef. We could move it
inside the ifdef, but it's a single-line function with only one
caller, so just fold it in.

Similarly update_current_thread_iamr() is small and only called once,
so fold it in also.

Fixes: 48a8ab4eeb82 ("powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093320.132331-1-mpe@ellerman.id.au
3 years agopowerpc/eeh: Fix build failure with CONFIG_PROC_FS=n
Michael Ellerman [Sun, 14 Mar 2021 09:33:00 +0000 (20:33 +1100)]
powerpc/eeh: Fix build failure with CONFIG_PROC_FS=n

The build fails with CONFIG_PROC_FS=n:

  arch/powerpc/kernel/eeh.c:1571:12: error: ‘proc_eeh_show’ defined but not used
   1571 | static int proc_eeh_show(struct seq_file *m, void *v)

Wrap proc_eeh_show() in an ifdef to avoid it.
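
The guard, as a sketch:

  #ifdef CONFIG_PROC_FS
  static int proc_eeh_show(struct seq_file *m, void *v)
  {
          /* ... body unchanged ... */
          return 0;
  }
  #endif /* CONFIG_PROC_FS */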

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093300.131998-1-mpe@ellerman.id.au
3 years agopowerpc/pci: fix warning comparing pointer to 0
Jiapeng Chong [Mon, 15 Mar 2021 07:35:24 +0000 (15:35 +0800)]
powerpc/pci: fix warning comparing pointer to 0

Fix the following coccicheck warning:

./arch/powerpc/platforms/maple/pci.c:37:16-17: WARNING comparing pointer
to 0.
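
The idiomatic form replaces the explicit comparison (variable name
illustrative):

  if (!np)        /* rather than: if (np == 0) */
          return;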

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1615793724-97015-1-git-send-email-jiapeng.chong@linux.alibaba.com
3 years agopowerpc/xive: use true and false for bool variable
Yang Li [Mon, 15 Mar 2021 07:24:56 +0000 (15:24 +0800)]
powerpc/xive: use true and false for bool variable

Fix the following coccicheck warning:
./arch/powerpc/sysdev/xive/spapr.c:552:8-9: WARNING: return of 0/1 in
function 'xive_spapr_match' with return type bool

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1615793096-83758-1-git-send-email-yang.lee@linux.alibaba.com
3 years agopowerpc/asm-offsets: GPR14 is not needed either
Christophe Leroy [Mon, 15 Mar 2021 11:01:26 +0000 (11:01 +0000)]
powerpc/asm-offsets: GPR14 is not needed either

Commit aac6a91fea93 ("powerpc/asm: Remove unused symbols in
asm-offsets.c") removed GPR15 to GPR31 but kept GPR14,
probably because it pops up in a couple of comments when doing
a grep.

However, it was never used either, so remove it as well.

Fixes: aac6a91fea93 ("powerpc/asm: Remove unused symbols in asm-offsets.c")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9881c68fbca004f9ea18fc9473f630e11ccd6417.1615806071.git.christophe.leroy@csgroup.eu
3 years agopowerpc/math: Fix missing __user qualifier for get_user() and other sparse warnings
Christophe Leroy [Mon, 15 Mar 2021 12:00:09 +0000 (12:00 +0000)]
powerpc/math: Fix missing __user qualifier for get_user() and other sparse warnings

Sparse reports the following problems:

arch/powerpc/math-emu/math.c:228:21: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:31: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:41: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:51: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:237:13: warning: incorrect type in initializer (different address spaces)
arch/powerpc/math-emu/math.c:237:13:    expected unsigned int [noderef] __user *_gu_addr
arch/powerpc/math-emu/math.c:237:13:    got unsigned int [usertype] *
arch/powerpc/math-emu/math.c:226:1: warning: symbol 'do_mathemu' was not declared. Should it be static?

Add the missing __user qualifier when casting the pointer used in
get_user().

Use NULL instead of 0 to initialise the opX local variables.

Add a prototype for do_mathemu() (added in processor.h, like sparc).
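
Illustrative fragments of the three fixes (not the exact diff):

  /* add the missing __user qualifier on the cast: */
  if (get_user(instr, (u32 __user *)regs->nip))
          return -EFAULT;

  /* NULL rather than 0 for the opX pointers: */
  void *op0 = NULL, *op1 = NULL, *op2 = NULL, *op3 = NULL;

  /* prototype added in asm/processor.h: */
  int do_mathemu(struct pt_regs *regs);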

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e4d1aae7604d89c98a52dfd8ce8443462e595670.1615809591.git.christophe.leroy@csgroup.eu
3 years agopowerpc/mm/book3s64: Fix a typo in mmu_context.c
Bhaskar Chowdhury [Fri, 12 Mar 2021 11:25:37 +0000 (16:55 +0530)]
powerpc/mm/book3s64: Fix a typo in mmu_context.c

s/detalis/details/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210312112537.4585-1-unixbhaskar@gmail.com
3 years agopowerpc/64e: Trivial spelling fixes throughout head_fsl_booke.S
Bhaskar Chowdhury [Sun, 14 Mar 2021 22:04:36 +0000 (03:34 +0530)]
powerpc/64e: Trivial spelling fixes throughout head_fsl_booke.S

Trivial spelling fixes throughout the file.

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314220436.3417083-1-unixbhaskar@gmail.com
3 years agopowerpc/Makefile: Remove workaround for gcc versions below 4.9
Christophe Leroy [Wed, 10 Mar 2021 12:43:12 +0000 (12:43 +0000)]
powerpc/Makefile: Remove workaround for gcc versions below 4.9

Commit 6ec4476ac825 ("Raise gcc version requirement to 4.9")
made it impossible to build with gcc 4.8 and under.

Remove the related workaround.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a1e552006b8c51f23edd2f6cabdd9a986c631146.1615380184.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Manage KUAP in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:51 +0000 (12:50 +0000)]
powerpc/32: Manage KUAP in C

Move all KUAP management into C.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/199365ddb58d579daf724815f2d0acb91cc49d19.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/8xx: Create C version of kuap save/restore/check helpers
Christophe Leroy [Fri, 12 Mar 2021 12:50:50 +0000 (12:50 +0000)]
powerpc/8xx: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit, create C
versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_assert_locked() and
kuap_get_and_assert_locked() on the 8xx.
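
A hedged sketch of one helper's C shape on the 8xx (SPR handling
simplified; SPRN_MD_AP/MD_APG_KUAP are the 8xx KUAP registers):

  static inline void kuap_save_and_lock(struct pt_regs *regs)
  {
          regs->kuap = mfspr(SPRN_MD_AP);
          mtspr(SPRN_MD_AP, MD_APG_KUAP);  /* lock user access */
  }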

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/156a7c4b669d26785391422a5581a1d919544c9a.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Create C version of kuap save/restore/check helpers
Christophe Leroy [Fri, 12 Mar 2021 12:50:49 +0000 (12:50 +0000)]
powerpc/32s: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit, create C
versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_assert_locked() and
kuap_get_and_assert_locked() on book3s/32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2be8fb729da4a0f9863b25e1b9d547174fcd5056.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic
Christophe Leroy [Fri, 12 Mar 2021 12:50:48 +0000 (12:50 +0000)]
powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic

In preparation for porting powerpc32 to C syscall entry/exit,
rename kuap_check_amr() and kuap_get_and_check_amr() to
kuap_assert_locked() and kuap_get_and_assert_locked(), and move the
stub for when CONFIG_PPC_KUAP is not selected into the generic
asm/kup.h.
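
The generic stubs for the !CONFIG_PPC_KUAP case look roughly like:

  static inline void kuap_assert_locked(void) { }

  static inline unsigned long kuap_get_and_assert_locked(void)
  {
          return 0;
  }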

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f82614d9b17b83abd739aa18fc08811815d0c2e3.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Move KUEP locking/unlocking in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:47 +0000 (12:50 +0000)]
powerpc/32s: Move KUEP locking/unlocking in C

This can be done in C, so do it.

Unrolling the loop gains approx. 15% performance.

From now on, prepare_transfer_to_handler() is only for
interrupts from the kernel.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4eadd873927e9a73c3d1dfe2f9497353465514cf.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Only use prepare_transfer_to_handler function on book3s/32 and e500
Christophe Leroy [Fri, 12 Mar 2021 12:50:46 +0000 (12:50 +0000)]
powerpc/32: Only use prepare_transfer_to_handler function on book3s/32 and e500

Only book3s/32 and e500 have significant work to do in
prepare_transfer_to_handler().

Other 32-bit platforms have nothing to do at all.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b5e29ca0e557c11340415a13fe8b107189d315e1.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Return directly from power_save_ppc32_restore()
Christophe Leroy [Fri, 12 Mar 2021 12:50:45 +0000 (12:50 +0000)]
powerpc/32: Return directly from power_save_ppc32_restore()

transfer_to_handler_cont: is now just a blr.

Perform the blr directly in power_save_ppc32_restore().

Also remove the useless setting of r11 in the e500 version of
power_save_ppc32_restore().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e337506e08a4df95b11d2290104b92f0dcdb5548.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Save remaining registers in exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:44 +0000 (12:50 +0000)]
powerpc/32: Save remaining registers in exception prolog

Save the non-volatile registers, XER, CTR, MSR and NIP in the
exception prolog.

Also assign proper values to r2 and r3 there.

For now, recalculate the thread pointer in prepare_transfer_to_handler().
It will disappear once KUAP is ported to C.

And remove the comment which is now completely wrong.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/56f0cde9dd0362edf2ddba4d887552013eee7329.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Refactor saving of volatile registers in exception prologs
Christophe Leroy [Fri, 12 Mar 2021 12:50:43 +0000 (12:50 +0000)]
powerpc/32: Refactor saving of volatile registers in exception prologs

Exception prologs all do the same at the end:
- Save trapno in stack
- Mark stack with exception marker
- Save r0
- Save r3 to r8

Refactor that into a COMMON_EXCEPTION_PROLOG_END macro.
At the same time use r1 instead of r11.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e1c45d2e895e0693c42d2a6840df1105a148efea.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Remove the xfer parameter in EXCEPTION() macro
Christophe Leroy [Fri, 12 Mar 2021 12:50:42 +0000 (12:50 +0000)]
powerpc/32: Remove the xfer parameter in EXCEPTION() macro

The xfer parameter is not used anymore; remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/17c7d68bd18f7d2f1ab24a1a20d9ed33bbcda741.1615552867.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE
Christophe Leroy [Fri, 12 Mar 2021 12:50:41 +0000 (12:50 +0000)]
powerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE

In order to get more control in the exception prolog, dismantle
all non-standard exception macros, finishing with EXC_XFER_STD,
EXC_XFER_LITE and EXC_XFER_TEMPLATE.

Also remove transfer_to_handler_full, ret_from_except and
ret_from_except_full as they are not used anymore.

The last parameter of EXCEPTION() is now ignored; it will be removed
in a later patch to avoid too much churn.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ca5795d04a220586b7037dbbbe6951dfa9e768eb.1615552867.git.christophe.leroy@csgroup.eu