
mm: replace mmap with vma write lock assertions when operating on a vma
author		Suren Baghdasaryan <surenb@google.com>
		Fri, 4 Aug 2023 15:27:21 +0000 (08:27 -0700)
committer	Andrew Morton <akpm@linux-foundation.org>
		Mon, 21 Aug 2023 20:37:45 +0000 (13:37 -0700)
The vma write lock assertion always includes the mmap write lock assertion,
plus additional vma lock checks when per-VMA locks are enabled. Replace the
weaker mmap_assert_write_locked() assertions with the stronger
vma_assert_write_locked() ones when operating on a vma that is expected to
be locked.

Link: https://lkml.kernel.org/r/20230804152724.3090321-4-surenb@google.com
Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Linus Torvalds <torvalds@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
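
For reference, a minimal sketch of how the two assertions relate, based on the
wording above; this is not verbatim kernel code, and the exact helpers in
include/linux/mm.h may differ in detail:

/*
 * Hedged sketch, assuming the per-VMA locking helpers this series builds on.
 */
#ifdef CONFIG_PER_VMA_LOCK
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
	/* Write-locking a vma requires holding mmap_lock for write... */
	mmap_assert_write_locked(vma->vm_mm);
	/* ...and the vma must actually have been marked write-locked. */
	VM_BUG_ON_VMA(vma->vm_lock_seq != vma->vm_mm->mm_lock_seq, vma);
}
#else
/* Without per-VMA locks, the vma assertion reduces to the mmap assertion. */
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
	mmap_assert_write_locked(vma->vm_mm);
}
#endif

Either way, the vma_assert_write_locked() calls in the hunks below are at
least as strong as the mmap_assert_write_locked() calls they replace.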
mm/hugetlb.c
mm/memory.c

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 851457a..abfdcaf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5029,7 +5029,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
                                        src_vma->vm_start,
                                        src_vma->vm_end);
                mmu_notifier_invalidate_range_start(&range);
-               mmap_assert_write_locked(src);
+               vma_assert_write_locked(src_vma);
                raw_write_seqcount_begin(&src->write_protect_seq);
        } else {
                /*
diff --git a/mm/memory.c b/mm/memory.c
index 1113ee6..039dcbb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1312,7 +1312,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
                 * Use the raw variant of the seqcount_t write API to avoid
                 * lockdep complaining about preemptibility.
                 */
-               mmap_assert_write_locked(src_mm);
+               vma_assert_write_locked(src_vma);
                raw_write_seqcount_begin(&src_mm->write_protect_seq);
        }