mm/vmalloc: do not adjust the search size for alignment overhead
Author:     Uladzislau Rezki (Sony) <urezki@gmail.com>
AuthorDate: Fri, 5 Nov 2021 20:39:31 +0000 (13:39 -0700)
Commit:     Linus Torvalds <torvalds@linux-foundation.org>
CommitDate: Sat, 6 Nov 2021 20:30:37 +0000 (13:30 -0700)
We used to include the alignment overhead in the search length, which
guarantees that a found area will still fit after the user-specified
alignment is applied.  On the other hand, we do not guarantee that the
found area has the lowest address when the alignment is >= PAGE_SIZE.
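
To make the arithmetic concrete: the old scheme padded the search length
to size + align - 1, so a free block that matches the requested size
exactly can never satisfy the search.  The snippet below is a minimal
user-space sketch of that false negative; the numbers are illustrative,
picked to mirror the 16 MB failure in the log further down, and none of
it is kernel code:

    /*
     * Illustrative user-space sketch of the old padding logic in
     * find_vmap_lowest_match(); values chosen to mirror the 16 MB
     * failure below, not taken from the kernel.
     */
    #include <stdio.h>

    int main(void)
    {
            unsigned long size  = 16777216UL; /* requested size: 16 MB */
            unsigned long block = 16777216UL; /* free block of exactly 16 MB */
            unsigned long align = 262144UL;   /* an alignment >= PAGE_SIZE */

            /* Old behaviour: pad the search size for the worst case. */
            unsigned long length = size + align - 1;

            /*
             * The padded length exceeds the exactly-sized block, so the
             * search rejects it even if the block already starts at a
             * suitably aligned address.
             */
            printf("length=%lu block=%lu -> %s\n", length, block,
                   length > block ? "rejected" : "accepted");
            return 0;
    }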

It means that, when a user specifies a special alignment together with a
range that exactly matches the requested size, the allocation fails.
This is what happens to KASAN: while onlining memory banks it asks for a
free block that exactly matches the specified range:

    [root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory82/state
    [root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory83/state
    [root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory85/state
    [root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory84/state
    vmap allocation for size 16777216 failed: use vmalloc=<size> to increase size
    bash: vmalloc: allocation failure: 16777216 bytes, mode:0x6000c0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0
    CPU: 4 PID: 1644 Comm: bash Kdump: loaded Not tainted 4.18.0-339.el8.x86_64+debug #1
    Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
    Call Trace:
     dump_stack+0x8e/0xd0
     warn_alloc.cold.90+0x8a/0x1b2
     ? zone_watermark_ok_safe+0x300/0x300
     ? slab_free_freelist_hook+0x85/0x1a0
     ? __get_vm_area_node+0x240/0x2c0
     ? kfree+0xdd/0x570
     ? kmem_cache_alloc_node_trace+0x157/0x230
     ? notifier_call_chain+0x90/0x160
     __vmalloc_node_range+0x465/0x840
     ? mark_held_locks+0xb7/0x120

Fix it by making sure that find_vmap_lowest_match() returns the lowest
start address for any given alignment value, i.e. for alignments bigger
than PAGE_SIZE the algorithm rolls back toward the parent nodes,
checking the right sub-trees, if the leftmost free block did not fit due
to the alignment overhead.
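
The rollback relies on the per-node fit check to decide whether a
candidate block still works once the alignment is applied.  Below is a
simplified user-space model of that check, patterned after
is_within_this_va() in mm/vmalloc.c; the ALIGN() helper is redefined
here for self-containment and the kernel's exact types and callers are
reduced for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    /* Power-of-two alignment helper, in the spirit of the kernel's ALIGN(). */
    #define ALIGN(x, a)     (((x) + ((a) - 1)) & ~((a) - 1))

    struct vmap_area {
            unsigned long va_start;
            unsigned long va_end;
    };

    /*
     * Simplified model: does this free block still hold "size" bytes
     * once its start is clamped to "vstart" and aligned up?
     */
    static bool is_within_this_va(struct vmap_area *va, unsigned long size,
            unsigned long align, unsigned long vstart)
    {
            unsigned long nva_start_addr;

            if (va->va_start > vstart)
                    nva_start_addr = ALIGN(va->va_start, align);
            else
                    nva_start_addr = ALIGN(vstart, align);

            /* Can be overflowed due to big size or alignment. */
            if (nva_start_addr + size < nva_start_addr ||
                            nva_start_addr < vstart)
                    return false;

            return nva_start_addr + size <= va->va_end;
    }

    int main(void)
    {
            struct vmap_area va = { .va_start = 0x100000, .va_end = 0x200000 };

            /* A 1 MB request fits this 1 MB block raw, but not once a
             * 2 MB alignment pushes the start past va_end: prints 0. */
            printf("%d\n", is_within_this_va(&va, 0x100000, 0x200000, 0));
            return 0;
    }

The key observation is that the aligned start can overshoot va_end even
when the raw size would fit, which is why a leftmost block may fail the
check and the search must be able to roll back to the right sub-trees
instead of padding the size up front.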

Link: https://lkml.kernel.org/r/20211004142829.22222-1-urezki@gmail.com
Fixes: 68ad4a330433 ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reported-by: Ping Fang <pifang@redhat.com>
Tested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0a740fb..0d835bd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1195,18 +1195,14 @@ find_vmap_lowest_match(unsigned long size,
 {
        struct vmap_area *va;
        struct rb_node *node;
-       unsigned long length;
 
        /* Start from the root. */
        node = free_vmap_area_root.rb_node;
 
-       /* Adjust the search size for alignment overhead. */
-       length = size + align - 1;
-
        while (node) {
                va = rb_entry(node, struct vmap_area, rb_node);
 
-               if (get_subtree_max_size(node->rb_left) >= length &&
+               if (get_subtree_max_size(node->rb_left) >= size &&
                                vstart < va->va_start) {
                        node = node->rb_left;
                } else {
@@ -1216,9 +1212,9 @@ find_vmap_lowest_match(unsigned long size,
                        /*
                         * Does not make sense to go deeper towards the right
                         * sub-tree if it does not have a free block that is
-                        * equal or bigger to the requested search length.
+                        * equal to or bigger than the requested search size.
                         */
-                       if (get_subtree_max_size(node->rb_right) >= length) {
+                       if (get_subtree_max_size(node->rb_right) >= size) {
                                node = node->rb_right;
                                continue;
                        }
@@ -1226,15 +1222,23 @@ find_vmap_lowest_match(unsigned long size,
                        /*
                         * OK. We roll back and find the first right sub-tree,
                         * that will satisfy the search criteria. It can happen
-                        * only once due to "vstart" restriction.
+                        * due to "vstart" restriction or an alignment overhead
+                        * that is bigger than PAGE_SIZE.
                         */
                        while ((node = rb_parent(node))) {
                                va = rb_entry(node, struct vmap_area, rb_node);
                                if (is_within_this_va(va, size, align, vstart))
                                        return va;
 
-                               if (get_subtree_max_size(node->rb_right) >= length &&
+                               if (get_subtree_max_size(node->rb_right) >= size &&
                                                vstart <= va->va_start) {
+                                       /*
+                                        * Shift the vstart forward. Please note, we update it with
+                                        * parent's start address adding "1" because we do not want
+                                        * to enter same sub-tree after it has already been checked
+                                        * and no suitable free block found there.
+                                        */
+                                       vstart = va->va_start + 1;
                                        node = node->rb_right;
                                        break;
                                }