dm thin: set minimum_io_size to pool's data block size
author     Mike Snitzer <snitzer@redhat.com>
           Fri, 18 Jul 2014 21:59:43 +0000 (17:59 -0400)
committer  Mike Snitzer <snitzer@redhat.com>
           Fri, 1 Aug 2014 16:30:35 +0000 (12:30 -0400)

Before, if the block layer's limit stacking didn't establish an
optimal_io_size that was compatible with the thin-pool's data block size,
we'd set optimal_io_size to the data block size and minimum_io_size to 0
(which the block layer adjusts to be physical_block_size).
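
For reference, blk_limits_io_min() in block/blk-settings.c rounds the
requested minimum up to the device's block sizes, which is why a minimum
of 0 ends up reported as physical_block_size.  Roughly (a paraphrase of
the helper, not quoted verbatim from the kernel source):

    void blk_limits_io_min(struct queue_limits *limits, unsigned int min)
    {
            limits->io_min = min;

            /* io_min can never be smaller than the logical block size */
            if (limits->io_min < limits->logical_block_size)
                    limits->io_min = limits->logical_block_size;

            /* ... nor smaller than the physical block size */
            if (limits->io_min < limits->physical_block_size)
                    limits->io_min = limits->physical_block_size;
    }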

Update pool_io_hints() to set both minimum_io_size and optimal_io_size
to the thin-pool's data block size.  This fixes a reported issue where
mkfs.xfs would create more XFS Allocation Groups on thinp volumes than
on a normal linear LV of comparable size; see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227

Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
drivers/md/dm-thin.c

index 0e844a5..4843801 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3177,7 +3177,7 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
         */
        if (io_opt_sectors < pool->sectors_per_block ||
            do_div(io_opt_sectors, pool->sectors_per_block)) {
-               blk_limits_io_min(limits, 0);
+               blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
                blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
        }
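
For context, the relevant part of pool_io_hints() after this change looks
roughly like the following (condensed; unrelated limit adjustments elided):

    static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
    {
            struct pool_c *pt = ti->private;
            struct pool *pool = pt->pool;
            sector_t io_opt_sectors = limits->io_opt >> SECTOR_SHIFT;

            /* ... */

            /*
             * If the stacked io_opt is smaller than, or not a whole
             * multiple of, the pool's data block size, advertise the
             * data block size as both the minimum and optimal I/O size.
             */
            if (io_opt_sectors < pool->sectors_per_block ||
                do_div(io_opt_sectors, pool->sectors_per_block)) {
                    blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
                    blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
            }
    }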