mm: page_alloc: remove stale CMA guard code
author Johannes Weiner <hannes@cmpxchg.org>
Thu, 24 Aug 2023 15:38:21 +0000 (11:38 -0400)
committer Andrew Morton <akpm@linux-foundation.org>
Sat, 2 Sep 2023 22:17:34 +0000 (15:17 -0700)

In the past, movable allocations could be disallowed from CMA through
PF_MEMALLOC_PIN.  As CMA pages are funneled through the MOVABLE pcplist,
this required filtering that corner case during allocations, such that
pinnable allocations wouldn't accidentally get a CMA page.

However, since 8e3560d963d2 ("mm: honor PF_MEMALLOC_PIN for all movable
pages"), PF_MEMALLOC_PIN automatically excludes __GFP_MOVABLE.  Once
again, MOVABLE implies CMA is allowed.
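For reference, roughly how those two pieces fit together after that
commit (paraphrased from include/linux/sched/mm.h and mm/page_alloc.c,
not a verbatim excerpt of either tree):

    /* current_gfp_context(): PF_MEMALLOC_PIN strips __GFP_MOVABLE */
    static inline gfp_t current_gfp_context(gfp_t flags)
    {
            unsigned int pflags = READ_ONCE(current->flags);

            if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS |
                                   PF_MEMALLOC_PIN))) {
                    if (pflags & PF_MEMALLOC_NOIO)
                            flags &= ~(__GFP_IO | __GFP_FS);
                    else if (pflags & PF_MEMALLOC_NOFS)
                            flags &= ~__GFP_FS;
                    if (pflags & PF_MEMALLOC_PIN)
                            flags &= ~__GFP_MOVABLE;
            }
            return flags;
    }

    /* gfp_to_alloc_flags_cma(): any remaining movable request may use CMA */
    static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
                                                      unsigned int alloc_flags)
    {
    #ifdef CONFIG_CMA
            if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
                    alloc_flags |= ALLOC_CMA;
    #endif
            return alloc_flags;
    }

A pinnable context therefore never reaches the page allocator with
__GFP_MOVABLE set, so any request that does land on the MOVABLE pcplist
already carries ALLOC_CMA, and the guard removed below has nothing left
to catch.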

Remove the stale filtering code.  Also remove a stale comment that was
introduced as part of the filtering code, because the filtering let
order-0 pages fall through to the buddy allocator.  See 1d91df85f399
("mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore}
APIs") for context.  The comment's been obsolete since the introduction of
the explicit ALLOC_HIGHATOMIC flag in eb2e2b425c69 ("mm/page_alloc:
explicitly record high-order atomic allocations in alloc_flags").
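For reference, since eb2e2b425c69 the highatomic reserve is gated on an
explicit flag that only high-order atomic requests receive, so an
order-0 request cannot take that branch and the removed comment
describes a case that can no longer happen.  A rough sketch of the
relevant logic (paraphrased, surrounding conditions elided):

    /* gfp_to_alloc_flags(): only high-order requests that cannot enter
     * direct reclaim are tagged as high-order atomic allocations */
    if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
            ...
            if (order > 0)
                    alloc_flags |= ALLOC_HIGHATOMIC;
            ...
    }

    /* rmqueue_buddy(): the highatomic reserve is tried on the flag alone */
    if (alloc_flags & ALLOC_HIGHATOMIC)
            page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);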

Link: https://lkml.kernel.org/r/20230824153821.243148-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4524598..0c5be12 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2641,12 +2641,6 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
        do {
                page = NULL;
                spin_lock_irqsave(&zone->lock, flags);
-               /*
-                * order-0 request can reach here when the pcplist is skipped
-                * due to non-CMA allocation context. HIGHATOMIC area is
-                * reserved for high-order atomic allocation, so order-0
-                * request should skip it.
-                */
                if (alloc_flags & ALLOC_HIGHATOMIC)
                        page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
                if (!page) {
@@ -2780,17 +2774,10 @@ struct page *rmqueue(struct zone *preferred_zone,
        WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
 
        if (likely(pcp_allowed_order(order))) {
-               /*
-                * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
-                * we need to skip it when CMA area isn't allowed.
-                */
-               if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
-                               migratetype != MIGRATE_MOVABLE) {
-                       page = rmqueue_pcplist(preferred_zone, zone, order,
-                                       migratetype, alloc_flags);
-                       if (likely(page))
-                               goto out;
-               }
+               page = rmqueue_pcplist(preferred_zone, zone, order,
+                                      migratetype, alloc_flags);
+               if (likely(page))
+                       goto out;
        }
 
        page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,