
arm64: mm: Pass original fault address to handle_mm_fault()
author     Gavin Shan <gshan@redhat.com>
           Mon, 14 Jun 2021 12:27:01 +0000 (20:27 +0800)
committer  Will Deacon <will@kernel.org>
           Tue, 15 Jun 2021 11:39:30 +0000 (12:39 +0100)
Currently, the lower bits of the fault address are cleared before it is
passed to handle_mm_fault(). This is unnecessary, since the generic code
has done the same thing since commit 1a29d85eb0f19 ("mm: use vmf->address
instead of of vmf->virtual_address").

Pass the original fault address to handle_mm_fault() instead, in case the
generic code needs to know the exact fault address.
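For illustration only (not part of this patch): a minimal user-space C sketch
showing that the page-aligned address can always be recomputed from the
original fault address with PAGE_MASK, so masking before the call only
discards information. The PAGE_SHIFT value of 12 is an assumed 4 KiB page
size, and the sample address is made up.

    #include <stdio.h>

    /* Assumed 4 KiB page size, for illustration only. */
    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    int main(void)
    {
            unsigned long addr = 0xffff00001234abcdUL; /* original fault address */

            /*
             * The page-aligned address is derivable from the original one,
             * so clearing the low bits before calling handle_mm_fault()
             * only throws away information the callee might want.
             */
            printf("original:     0x%lx\n", addr);
            printf("page-aligned: 0x%lx\n", addr & PAGE_MASK);
            return 0;
    }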

Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210614122701.100515-1-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 6786cf1..bd9a0bb 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -509,7 +509,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
         */
        if (!(vma->vm_flags & vm_flags))
                return VM_FAULT_BADACCESS;
-       return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
+       return handle_mm_fault(vma, addr, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)