mm/rmap: fix assumptions of THP size
author    Matthew Wilcox (Oracle) <willy@infradead.org>
          Fri, 16 Oct 2020 03:05:46 +0000 (20:05 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Fri, 16 Oct 2020 18:11:15 +0000 (11:11 -0700)
Ask the page what size it is instead of assuming it's PMD size.  Do this
for anon pages as well as file pages for when someone decides to support
that.  Leave the assumption alone for pages which are PMD mapped; we don't
currently grow THPs beyond PMD size, so we don't need to change this code
yet.
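
For reference, the helper this patch switches to simply reports how many base
pages the compound page spans.  A minimal sketch of that contract (assuming the
v5.9-era definition in include/linux/huge_mm.h; the exact upstream body may
differ):

	/*
	 * Sketch only: shows the contract the rmap loops below rely on,
	 * not the verbatim kernel implementation.
	 */
	static inline int thp_nr_pages(struct page *page)
	{
		if (PageHead(page))
			return HPAGE_PMD_NR;	/* today every THP is PMD-sized */
		return 1;			/* order-0 page: one base page */
	}

Once THPs can be allocated in other sizes, only this helper has to learn about
them; the loops changed below already iterate over whatever it returns.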

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-9-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/rmap.c

index 9425260..1b84945 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1205,7 +1205,7 @@ void page_add_file_rmap(struct page *page, bool compound)
        VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
        lock_page_memcg(page);
        if (compound && PageTransHuge(page)) {
-               for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+               for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
                        if (atomic_inc_and_test(&page[i]._mapcount))
                                nr++;
                }
@@ -1246,7 +1246,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 
        /* page still mapped by someone else? */
        if (compound && PageTransHuge(page)) {
-               for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+               for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
                        if (atomic_add_negative(-1, &page[i]._mapcount))
                                nr++;
                }
@@ -1293,7 +1293,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
                 * Subpages can be mapped with PTEs too. Check how many of
                 * them are still mapped.
                 */
-               for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+               for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
                        if (atomic_add_negative(-1, &page[i]._mapcount))
                                nr++;
                }
@@ -1303,10 +1303,10 @@ static void page_remove_anon_compound_rmap(struct page *page)
                 * page of the compound page is unmapped, but at least one
                 * small page is still mapped.
                 */
-               if (nr && nr < HPAGE_PMD_NR)
+               if (nr && nr < thp_nr_pages(page))
                        deferred_split_huge_page(page);
        } else {
-               nr = HPAGE_PMD_NR;
+               nr = thp_nr_pages(page);
        }
 
        if (unlikely(PageMlocked(page)))