
btrfs: Don't hardcode the csum size in btrfs_ordered_sum_size
author Nikolay Borisov <nborisov@suse.com>
Wed, 7 Feb 2018 09:19:10 +0000 (11:19 +0200)
committer David Sterba <dsterba@suse.com>
Mon, 26 Mar 2018 13:09:29 +0000 (15:09 +0200)
Currently the function uses a hardcoded value for the checksum size of
a sector. This is fine, given that we currently support only a single
algorithm, whose checksum is 4 bytes == sizeof(u32). Despite not
having other algorithms, btrfs' design supports using a different
algorithm with different space requirements. To future-proof the code,
query the size of the currently used algorithm from the in-memory copy
of the super block. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Su Yue <suy.fnst@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
fs/btrfs/ordered-data.h

index 56c4c0e..c53e2cf 100644
@@ -151,7 +151,9 @@ static inline int btrfs_ordered_sum_size(struct btrfs_fs_info *fs_info,
                                         unsigned long bytes)
 {
        int num_sectors = (int)DIV_ROUND_UP(bytes, fs_info->sectorsize);
-       return sizeof(struct btrfs_ordered_sum) + num_sectors * sizeof(u32);
+       int csum_size = btrfs_super_csum_size(fs_info->super_copy);
+
+       return sizeof(struct btrfs_ordered_sum) + num_sectors * csum_size;
 }
 
 static inline void
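
For illustration only, here is a minimal standalone sketch of the idea behind the change: the checksum size is looked up from the superblock's csum type rather than hardcoded as sizeof(u32). The struct layouts, table, and helper names below are simplified stand-ins, not the actual kernel definitions.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical checksum types; btrfs only implements crc32c at this point. */
enum csum_type { CSUM_CRC32C = 0 };

/* Per-type checksum sizes, indexed by csum type (crc32c is 4 bytes). */
static const int csum_sizes[] = { 4 };

/* Simplified stand-ins for the in-memory super block and the ordered sum header. */
struct super_block_copy { uint16_t csum_type; };
struct ordered_sum_header { unsigned long bytenr; int len; };

/* Analogous to btrfs_super_csum_size(): look the size up instead of hardcoding it. */
static int super_csum_size(const struct super_block_copy *sb)
{
	return csum_sizes[sb->csum_type];
}

/* Analogous to btrfs_ordered_sum_size(): header plus one checksum per sector. */
static size_t ordered_sum_size(const struct super_block_copy *sb,
			       unsigned long bytes, unsigned long sectorsize)
{
	unsigned long num_sectors = (bytes + sectorsize - 1) / sectorsize;

	return sizeof(struct ordered_sum_header) +
	       num_sectors * super_csum_size(sb);
}

int main(void)
{
	struct super_block_copy sb = { .csum_type = CSUM_CRC32C };

	/* 16 KiB of data with 4 KiB sectors -> 4 checksums of 4 bytes each. */
	printf("%zu\n", ordered_sum_size(&sb, 16384, 4096));
	return 0;
}

With this shape, adding an algorithm with a wider checksum only requires extending the size table and the type enum; callers that size their allocations through the helper pick up the new size automatically.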