
iommu: fix MAX_ORDER usage in __iommu_dma_alloc_pages()
Author:     Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
AuthorDate: Wed, 15 Mar 2023 11:31:32 +0000 (14:31 +0300)
Commit:     Andrew Morton <akpm@linux-foundation.org>
CommitDate: Thu, 6 Apr 2023 02:42:46 +0000 (19:42 -0700)

MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in __iommu_dma_alloc_pages().

Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
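
A minimal sketch of the difference, outside the kernel tree, with GENMASK()
open-coded and MAX_ORDER assumed to be 11 (a common default of that era; the
value is not stated in this patch):

    #include <stdio.h>

    #define MAX_ORDER 11   /* assumed value, for illustration only */

    /* open-coded stand-in for the kernel's GENMASK(h, l) from <linux/bits.h> */
    #define GENMASK(h, l) (((1U << ((h) + 1)) - 1U) & ~((1U << (l)) - 1U))

    int main(void)
    {
        /* Old mask: bits 0..MAX_ORDER, one order more than the buddy allocator provides */
        unsigned int old_mask = (2U << MAX_ORDER) - 1;
        /* New mask: bits 0..MAX_ORDER-1, matching the largest valid allocation order */
        unsigned int new_mask = GENMASK(MAX_ORDER - 1, 0);

        printf("old mask %#x allows orders up to %d\n", old_mask, MAX_ORDER);
        printf("new mask %#x allows orders up to %d\n", new_mask, MAX_ORDER - 1);
        return 0;
    }

With MAX_ORDER set to 11 this prints 0xfff for the old mask (orders 0..11) and
0x7ff for the new mask (orders 0..10), showing that the old expression allowed
one order more than the buddy allocator can actually satisfy.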

Link: https://lkml.kernel.org/r/20230315113133.11326-10-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
drivers/iommu/dma-iommu.c

index 99b2646..ac996fd 100644
@@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
        struct page **pages;
        unsigned int i = 0, nid = dev_to_node(dev);
 
-       order_mask &= (2U << MAX_ORDER) - 1;
+       order_mask &= GENMASK(MAX_ORDER - 1, 0);
        if (!order_mask)
                return NULL;
 
@@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
                 * than a necessity, hence using __GFP_NORETRY until
                 * falling back to minimum-order allocations.
                 */
-               for (order_mask &= (2U << __fls(count)) - 1;
+               for (order_mask &= GENMASK(__fls(count), 0);
                     order_mask; order_mask &= ~order_size) {
                        unsigned int order = __fls(order_mask);
                        gfp_t alloc_flags = gfp;