
arm64: tlb: Ensure we execute an ISB following walk cache invalidation
author Will Deacon <will@kernel.org>
Thu, 22 Aug 2019 14:03:45 +0000 (15:03 +0100)
committer Will Deacon <will@kernel.org>
Tue, 27 Aug 2019 16:38:26 +0000 (17:38 +0100)
commit 51696d346c49c6cf4f29e9b20d6e15832a2e3408
tree 108f92a17d8490fc6c0d895204f58d1576ce56d8
parent d0b7a302d58abe24ed0f32a0672dd4c356bb73db
arm64: tlb: Ensure we execute an ISB following walk cache invalidation

05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
added a new TLB invalidation helper which is used when freeing
intermediate levels of page tables used for kernel mappings, but is
missing the ISB instruction required after completion of the TLBI
instruction.

Add the missing barrier.

Cc: <stable@vger.kernel.org>
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
arch/arm64/include/asm/tlbflush.h
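A minimal sketch of what the fixed helper in arch/arm64/include/asm/tlbflush.h
looks like after this change. The commit page does not reproduce the diff, so
the body below is illustrative: it assumes the helper's structure at the time
of this commit and uses the arm64 barrier/TLBI macros (__TLBI_VADDR, __tlbi,
dsb, isb) as they exist in that header; only the final isb() is what this
commit adds.

static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
        unsigned long addr = __TLBI_VADDR(kaddr, 0);

        dsb(ishst);             /* make the page-table update visible before the TLBI */
        __tlbi(vaae1is, addr);  /* invalidate walk-cache entries for this kernel VA */
        dsb(ish);               /* wait for the TLBI to complete, Inner Shareable */
        isb();                  /* barrier added by this commit: synchronise the local CPU
                                 * so later instructions cannot use stale translations */
}

Without the trailing isb(), the local CPU could keep executing with
translations that predate the invalidation even though the TLBI itself has
completed, which is why the barrier is required here.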